Mathieu Desnoyers wrote:
>> Trace event headers are very similar between LTT and LKET, which is
>> good in order to get some synergy between our projects. One thing that
>> LKET has on each trace event that LTT doesn't is the tid and CPU id of
>> each event. We find this extremely useful for post-processing. Also,
>> why is the event_size taken on every event? Why not describe the
>> event in the trace header, remove this redundant information from
>> the event header, and save some trace file space?
>>
> A standard event header has to contain only crucial information, nothing more,
> or it becomes bloated and quickly grows the trace size. We decided not to put
> the tid and CPU id in the event header because the tid is already available
> from the schedchange events at post-processing time, and the CPU id is already
> available too, since we have per-CPU buffers.
We still keep the CPU id because LKET still supports ASCII tracing, which
mixes the output of all the CPUs together. It is still debatable
whether this is a useful feature or not, though. If we remove ASCII
event tracing from LKET, we could remove the CPU id from the event header as
well.
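
For concreteness, the event header shape under discussion might look roughly
like the sketch below; the field names, widths, and ordering are illustrative
only, not the actual LKET or LTT on-disk format.

/*
 * Illustrative sketch only -- not the actual LKET or LTT layout.
 * The comments mark the fields being debated in this thread.
 */
#include <stdint.h>

struct event_header {
        uint64_t timestamp;   /* always needed to order events */
        uint16_t event_id;    /* index into the event descriptions in the trace header */
        uint16_t event_size;  /* LTT keeps this as a kernel/viewer consistency check */
        uint32_t tid;         /* LKET keeps this; LTT recovers it from schedchange events */
        uint16_t cpu_id;      /* could be dropped if ASCII tracing is removed */
} __attribute__((packed));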
The tid we still include because LKET supports turning on individual
tracepoints, unlike LTT, which, if I remember correctly, turns on all the
tracepoints that are compiled into the running kernel. Since users are
free to choose which tracepoints they want to use for their workload, we
cannot guarantee that scheduler tracepoints are going to be available. We
consider the tid one of those absolute minimum pieces of data
required to do meaningful analysis.
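
To illustrate the dependency, a post-processing pass that recovers the tid
from scheduler events in a per-CPU buffer might look roughly like the sketch
below; the structures and function are my own illustration, not real LTT or
LKET code. If the scheduler tracepoints were never activated, the switch list
is empty and the tid simply cannot be recovered.

#include <stdint.h>
#include <stddef.h>

struct sched_switch_event {
        uint64_t timestamp;   /* when the switch happened on this CPU */
        uint32_t next_tid;    /* task running on this CPU after the switch */
};

/*
 * Return the tid running on this CPU at event_timestamp, assuming
 * switches[] is sorted by timestamp, or -1 if no earlier switch is known.
 */
static int32_t tid_at(const struct sched_switch_event *switches,
                      size_t nr_switches, uint64_t event_timestamp)
{
        int32_t tid = -1;
        size_t i;

        for (i = 0; i < nr_switches; i++) {
                if (switches[i].timestamp > event_timestamp)
                        break;
                tid = (int32_t)switches[i].next_tid;
        }
        return tid;
}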
We chose to control performance and trace output size by letting users
control the number of tracepoints they activate at any given time.
This is important to us since we plan to add many dynamic tracepoints to
different subsystems (filesystems, device drivers, core kernel
facilities, etc.). Turning on all of these tracepoints at the same
time would slow down the system too much and change the performance
characteristics of the environment being studied.
> The event size is completely unnecessary, but in reality very, very useful to
> authenticate the correspondence between the size of the data recorded by the
> kernel and the size of the data the viewer thinks it is reading. Think of it
> as a consistency check between kernel and viewer algorithms.
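
If I follow, the check being described is roughly the sketch below; the
structures and the helper are illustrative, not the actual LTT viewer code.

#include <stdint.h>
#include <stdio.h>

struct event_description {
        const char *name;     /* event name from the trace header */
        uint16_t fixed_size;  /* payload size declared in the trace header */
};

/* Compare the size recorded by the kernel against the viewer's expectation. */
static int check_event_size(const struct event_description *desc,
                            uint16_t recorded_size)
{
        if (recorded_size != desc->fixed_size) {
                fprintf(stderr, "event %s: kernel wrote %u bytes, viewer expects %u\n",
                        desc->name, (unsigned)recorded_size,
                        (unsigned)desc->fixed_size);
                return -1;  /* kernel and viewer disagree about the layout */
        }
        return 0;
}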
I understand. But if the size of each event is fixed, why would you
expect the data sizes that the tool reports in the trace header for each
event to change over the course of a trace? If the data in the per-CPU
buffers is serialized, a similar authentication could be done using the
timestamps: check the timestamps of the events before and after the
current event, which validates the current timestamp as well as the size
offset of the previous event. Just a thought.
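
As a rough sketch of what I mean, assuming fixed per-type sizes taken from the
trace header and a timestamp at the start of every record (both assumptions of
this illustration, not a description of either tool's actual format):

#include <stdint.h>
#include <string.h>

/* Read the timestamp assumed to sit at the start of each record. */
static uint64_t read_timestamp(const uint8_t *buf, size_t offset)
{
        uint64_t ts;

        memcpy(&ts, buf + offset, sizeof(ts));
        return ts;
}

/*
 * Check that the event at 'offset', whose payload size is known from the
 * trace header rather than from a per-event field, is followed by an event
 * whose timestamp does not go backwards.  A failure here means either the
 * assumed size or the timestamps are wrong.
 */
static int boundary_is_consistent(const uint8_t *buf, size_t buf_len,
                                  size_t offset, size_t header_len,
                                  size_t payload_size)
{
        size_t next = offset + header_len + payload_size;

        if (next + sizeof(uint64_t) > buf_len)
                return 1;  /* last event in the buffer; nothing to compare */
        return read_timestamp(buf, next) >= read_timestamp(buf, offset);
}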
-JRS