RE: x86, ptrace: support for branch trace store (BTS)

>-----Original Message-----
>From: Ingo Molnar [mailto:[email protected]] 
>Sent: Thursday, 13 December 2007 11:30

>> Users who want to process that huge amount of data would be better
>> off using a file-based approach (well, if it cannot be held in
>> physical memory, they will spend most of their time swapping,
>> anyway). Those users would typically wait for the 'buffer full'
>> event and drain the buffer into a file - whether this is the real
>> buffer or a bigger virtual buffer.
>>
>> The two-buffer approach would only benefit users who want to hold
>> the full profile in memory - or who want to stall the debuggee
>> until they processed or somehow compressed the data collected so
>> far. Those approaches would not scale for very big profiles. The
>> small profile cases would already be covered with a reasonably big
>> real buffer.
>
> well, the two-buffer approach would just be a general API with no
> limitations. It would make the internal buffer mostly a pure
> performance detail.

Agreed. Somewhat.

A user-provided second buffer would need to be up-to-date when we switch
to the user's (tracing) task.
We would either need to drain the real buffer when switching away from
the traced task, or drain the real buffers of all traced tasks when
switching to the tracing task.
Either way would require a get_user_pages() during the context switch.
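
To make the constraint concrete, here is a rough sketch of the switch-out
drain (hypothetical structures and hook name, not actual ds.c code). It
assumes the user pages have already been pinned up front, e.g. via
get_user_pages(), since the switch path must not fault or sleep:

#include <stddef.h>

struct bts_record {                     /* one branch record: from/to address */
        unsigned long from;
        unsigned long to;
};

struct bts_user_buffer {                /* hypothetical pinned user destination */
        struct bts_record *base;        /* kernel mapping of the pinned pages */
        size_t size;                    /* capacity in records */
        size_t next;                    /* next free slot */
};

/* hypothetical hook on the switch-out path of a traced task */
static void bts_drain_on_switch_out(const struct bts_record *hw_buf,
                                    size_t hw_count,
                                    struct bts_user_buffer *ub)
{
        size_t i;

        for (i = 0; i < hw_count && ub->next < ub->size; i++)
                ub->base[ub->next++] = hw_buf[i];
        /* the hardware buffer can be reset here */
}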

Alternatively, we could schedule a kernel task to drain the real buffer
when switching from a traced task.
The tracing task would then need to wait for all those kernel tasks. I'm
not sure how that affects scheduling fairness. And it's getting quite
complicated.
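
For comparison, a sketch of that deferred variant, assuming a
workqueue-based drain (all names are made up for illustration): the
switch-out path only queues the work, and the tracing side waits for all
outstanding drains before it reads.

#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/workqueue.h>

/* hypothetical per-task drain state */
struct bts_drain_work {
        struct work_struct work;
        struct task_struct *traced;     /* task whose real buffer to drain */
};

/* runs in process context, so it may take faults while copying out */
static void bts_drain_fn(struct work_struct *work)
{
        struct bts_drain_work *dw =
                container_of(work, struct bts_drain_work, work);

        /* placeholder: drain dw->traced's hardware BTS buffer here */
        (void)dw;
}

static void bts_init_drain(struct bts_drain_work *dw, struct task_struct *t)
{
        dw->traced = t;
        INIT_WORK(&dw->work, bts_drain_fn);
}

/* hypothetical hook on the switch-out path: only queue, never block */
static void bts_queue_drain(struct bts_drain_work *dw)
{
        schedule_work(&dw->work);
}

/* the tracing task would wait for all outstanding drains before reading;
   flush_scheduled_work() waits for everything on the shared workqueue */
static void bts_wait_for_drains(void)
{
        flush_scheduled_work();
}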


A kernel-provided second buffer could be hidden entirely behind the
ptrace (or rather, ds) interface. The real buffer would not even have to
be drained before switching to the tracing task, since ds would simply
look into the real buffer first and then move on to the second buffer -
transparent to the user. The second buffer's size could be counted
against the user's memory limit, and it could live in pageable memory.

We would not be able to give precise overflow signals that way (records
still sitting in the not-yet-drained real buffer might overflow the
second buffer once they are spilled into it). But if we allow the user
to query the number of BTS records to drain, we would not need to. A
user drain would drain both buffers.

The second buffer would be a pure performance/convenience detail of ds,
just like you suggested.
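
To illustrate what "transparent to the user" would mean, here is a sketch
of how ds could report and drain across both buffers (hypothetical
structures and names, not actual code):

#include <stddef.h>

struct bts_record {                     /* one branch: from/to address */
        unsigned long from;
        unsigned long to;
};

struct bts_buffers {                    /* hypothetical per-task view in ds */
        struct bts_record *hw;          /* real (hardware) BTS buffer */
        size_t hw_count;                /* records not yet spilled */
        struct bts_record *spill;       /* kernel-provided second buffer */
        size_t spill_count;             /* records already spilled over */
};

/* "query number of records": both buffers together, so the user can
   size his drain buffer before asking for the data */
static size_t bts_nr_records(const struct bts_buffers *b)
{
        return b->spill_count + b->hw_count;
}

/* a user drain copies the spilled records first, then whatever is still
   in the real buffer, and clears both afterwards */
static size_t bts_drain(struct bts_buffers *b,
                        struct bts_record *out, size_t out_max)
{
        size_t n = 0, i;

        for (i = 0; n < out_max && i < b->spill_count; i++)
                out[n++] = b->spill[i];
        for (i = 0; n < out_max && i < b->hw_count; i++)
                out[n++] = b->hw[i];

        b->spill_count = 0;
        b->hw_count = 0;
        return n;                       /* number of records copied */
}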


The ptrace API would allow the user to (a rough sketch of such an
interface follows the list):
- define (and query) the overflow mechanism
  (wrap-around or event)
- define (and query) the size of the buffer within certain limits
  (we could either give an error or cut off)
- define (and query) events to be monitored
  (last branch trace, scheduling timestamps)
- get a single BTS record
- query the number of BTS records
  (to find out how big your drain buffer needs to be; it may be
  bigger than you requested)
- drain all BTS records (copy, then clear)
- clear all BTS records
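
To make that concrete, a hypothetical tracer-side sketch; the request
numbers, names, and argument layout below are made up for illustration
and are not the actual interface:

#include <stddef.h>
#include <sys/ptrace.h>
#include <sys/types.h>

/* hypothetical request numbers */
#define PTRACE_BTS_CONFIG_X     40      /* set buffer size, overflow mode, events */
#define PTRACE_BTS_STATUS_X     41      /* query the current configuration */
#define PTRACE_BTS_SIZE_X       42      /* number of records currently stored */
#define PTRACE_BTS_GET_X        43      /* copy out a single record */
#define PTRACE_BTS_CLEAR_X      44      /* discard all records */
#define PTRACE_BTS_DRAIN_X      45      /* copy out and clear all records */

struct bts_config_x {                   /* hypothetical configuration block */
        unsigned int size;              /* requested buffer size in records */
        unsigned int flags;             /* wrap-around vs. overflow event,
                                           branch trace, scheduling timestamps */
};

/* tracer side, after PTRACE_ATTACH: configure tracing for the child */
static long bts_setup(pid_t child, unsigned int nrecords, unsigned int flags)
{
        struct bts_config_x cfg = { .size = nrecords, .flags = flags };

        return ptrace(PTRACE_BTS_CONFIG_X, child, (void *)sizeof(cfg), &cfg);
}

The same configuration block would carry the overflow mechanism and the
events to be monitored, so configure and query stay symmetric.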

Draining would require the user to allocate a buffer to hold the data,
which might not be feasible when he is near his memory limit. He could
fall back to looping over the single-entry get. It is questionable how
useful the drain ptrace command would actually be; we might want to
replace it with a get-range command.
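
The fallback could look like this (again using the hypothetical request
numbers from the sketch above, repeated here to keep the example
self-contained):

#include <stddef.h>
#include <sys/ptrace.h>
#include <sys/types.h>

#define PTRACE_BTS_SIZE_X       42      /* number of records currently stored */
#define PTRACE_BTS_GET_X        43      /* copy out a single record */
#define PTRACE_BTS_CLEAR_X      44      /* discard all records */

struct bts_record_x {                   /* hypothetical record layout */
        unsigned long from;
        unsigned long to;
};

/* walk the records one at a time, so no large drain buffer is needed
   on the tracer side */
static void bts_process_all(pid_t child)
{
        struct bts_record_x rec;
        long i, n;

        n = ptrace(PTRACE_BTS_SIZE_X, child, NULL, NULL);
        for (i = 0; i < n; i++) {
                if (ptrace(PTRACE_BTS_GET_X, child, (void *)i, &rec) < 0)
                        break;
                /* consume rec.from / rec.to here: print, filter, compress */
        }
        ptrace(PTRACE_BTS_CLEAR_X, child, NULL, NULL);
}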


Are you OK with this?


thanks and regards,
markus.
