Re: [PATCH 0/11] LTTng-core (basic tracing infrastructure) 0.5.108

Hi Jose,

* Jose R. Santos ([email protected]) wrote:
> >My goal with LTTng is to provide a reentrant data serialisation mechanism
> >that can be called from anywhere in the kernel (ok, the vmalloc path of the
> >page fault handler is _the_ exception), that does not use any lock, and can
> >therefore trace code paths like NMI handlers.
> 
> One of the things that I've noticed from this thread, and that neither you
> nor Karim seems to have answered, is why LTTng is needed if a suitable
> replacement can be developed using SystemTap with static markers.  I am
> personally interested in this answer as well.  If all the things that
> LTT is proposing can be implemented in SystemTap, what then is the
> advantage of accepting such an interface into the kernel?
> 

Well, last time I checked, SystemTAP did not have a reentrant serialisation
mechanism to write the information to the buffers. The goals of the projects
also differ: SystemTAP finds it acceptable to suffer the kprobe performance
hit, while that is unacceptable for LTTng.
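
To make this concrete, here is a minimal sketch of the kind of lock-free,
per-CPU reservation scheme I mean. It is not the actual LTTng code (which uses
kernel primitives rather than C11 atomics and also handles sub-buffer
switching, timestamps, overwrite mode, etc.); it only shows why no lock is
needed even from NMI context:

#include <stdatomic.h>
#include <stddef.h>
#include <string.h>

#define BUF_SIZE 4096

/* One such buffer exists per CPU, so writers on different CPUs never
 * contend; within one CPU, reentrancy (interrupts, NMIs) is handled by
 * the compare-exchange retry loop instead of a lock. */
struct percpu_buf {
        _Atomic size_t write_off;       /* next free offset, only grows */
        char data[BUF_SIZE];
};

/* Reserve 'len' bytes; returns the reserved offset, or -1 if the buffer
 * is full.  An NMI landing between the load and the compare-exchange
 * only forces a retry, never a deadlock. */
static long reserve(struct percpu_buf *buf, size_t len)
{
        size_t old, new;

        do {
                old = atomic_load(&buf->write_off);
                new = old + len;
                if (new > BUF_SIZE)
                        return -1;      /* buffer full, drop the event */
        } while (!atomic_compare_exchange_weak(&buf->write_off, &old, new));

        return (long)old;
}

static int trace_event(struct percpu_buf *buf, const void *payload, size_t len)
{
        long off = reserve(buf, len);

        if (off < 0)
                return -1;
        memcpy(buf->data + off, payload, len);  /* copy after reservation */
        return 0;
}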

> I don't really care which method is used as long as it's the right tool
> for the job.  I see several ideas from LTT that could be integrated into
> SystemTap in order to make it a one-stop solution for both dynamic and
> static tracing.  Would you care to elaborate on why you think having
> separate projects is a better solution?

I think that each project focuses on its own goals, but that there is much to
gain in reusing the strengths of each.

SystemTAP is good at dynamic instrumentation.
LTTng is good at data serialisation under a fully reentrant kernel.
LTTng provides logging primitives for any data type, including SystemTAP text
output.

Is someone willing to try to create a small facility that will dump SystemTAP's
output into LTTng?  It is nearly trivial: if I wasn't completing my debugfs
port, I would probably be doing it right now.
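
Roughly, such a facility only needs to take SystemTAP's formatted text and log
it as one variable-length event. A sketch of the glue, with all names
hypothetical (this is neither the real SystemTAP nor the real LTTng API, just
the shape of it):

#include <stdarg.h>
#include <stddef.h>
#include <stdio.h>

/* Stand-in for whatever raw logging primitive LTTng exposes; the real
 * one would serialise the payload into the current CPU's buffer. */
static int ltt_log_raw(const char *facility, const void *data, size_t len)
{
        (void)facility;
        return fwrite(data, 1, len, stderr) == len ? 0 : -1;
}

/* Hypothetical bridge used in place of SystemTap's printf-style output:
 * format the text, then hand it to LTTng as a single opaque event. */
static int stp_print_to_ltt(const char *fmt, ...)
{
        char buf[256];
        va_list ap;
        int len;

        va_start(ap, fmt);
        len = vsnprintf(buf, sizeof(buf), fmt, ap);
        va_end(ap);

        if (len < 0)
                return len;
        if ((size_t)len >= sizeof(buf))
                len = sizeof(buf) - 1;          /* output was truncated */

        return ltt_log_raw("systemtap", buf, (size_t)len + 1);
}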

> Trace event headers are very similar between LTT and LKET, which is good
> in order to get some synergy between our projects.  One thing that LKET
> has on each trace event that LTT doesn't is the tid and CPU id of the
> event.  We find this extremely useful for post-processing.  Also, why is
> the event_size recorded on every event?  Why not describe the event in
> the trace header, remove this redundant information from the event
> header, and save some trace file space?
> 

A standard event header has to carry only crucial information, nothing more, or
it becomes bloated and quickly grows the trace size. We decided not to put the
tid and CPU id in the event header because the tid is already available at
post-processing time from the schedchange events, and the CPU id is already
known too, as we have per-CPU buffers.
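
Purely as an illustration of the size argument (this is not the actual LTTng
on-disk layout), a header limited to the crucial fields fits in 8 bytes, while
embedding tid and CPU id roughly doubles it even though both are recoverable at
post-processing time:

#include <stdint.h>

/* Illustrative compact header: 8 bytes per event. */
struct compact_event_header {
        uint32_t timestamp;     /* e.g. cycle-counter delta within the buffer */
        uint16_t event_id;      /* index into the event description table */
        uint16_t event_size;    /* payload size, kept as a consistency check */
};

/* Adding the redundant fields doubles the per-event cost, although the
 * CPU id is implied by which per-CPU buffer the event sits in, and the
 * tid follows from the most recent schedchange event on that CPU. */
struct bloated_event_header {
        uint32_t timestamp;
        uint16_t event_id;
        uint16_t event_size;
        uint32_t tid;
        uint32_t cpu_id;
};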

The event size is, strictly speaking, redundant, but in practice very, very
useful to verify the correspondence between the size of the data recorded by
the kernel and the size of the data the viewer thinks it is reading. Think of
it as a consistency check between the kernel and viewer algorithms.
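
On the viewer side, the check is a one-liner once the parser reports how many
bytes it consumed for an event; a sketch, with the surrounding parser code
assumed:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Compare the size the kernel recorded in the event header with the
 * number of payload bytes the viewer's own event description consumed.
 * A mismatch means kernel and viewer disagree on the event layout. */
static int check_event_size(uint16_t recorded, size_t parsed, uint16_t id)
{
        if (parsed != (size_t)recorded) {
                fprintf(stderr, "event %u: kernel wrote %u bytes, viewer read %zu\n",
                        id, recorded, parsed);
                return -1;
        }
        return 0;
}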


Mathieu


OpenPGP public key:              http://krystal.dyndns.org:8080/key/compudj.gpg
Key fingerprint:     8CD5 52C3 8E3C 4140 715F  BA06 3F25 A8FE 3BAE 9A68 
