Re: Merging relayfs?

On Wed, 13 Jul 2005, Vara Prasad wrote:
[..]
> O.K, looks like you are agreeing that we need a buffering mechanism in
> the kernel to implement speculative tracing, right?

Each aggregator has its own data, and that data is buffered. In this
sense: yes, an infrastructure to allocate, deallocate, copy and
(generally) operate on these buffers is needed.
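Something along these lines would be enough (a minimal sketch in plain
userspace C, for illustration only; the probe_buf name and all functions
are hypothetical, not an existing kernel API, and a real in-kernel
version would use kmalloc()/vmalloc() plus proper locking):

#include <stdlib.h>
#include <string.h>

/* Hypothetical "probe data container": capacity, fill level, type tag. */
struct probe_buf {
    size_t size;       /* capacity in bytes */
    size_t used;       /* bytes currently stored */
    unsigned type;     /* type tag of the stored records */
    char *data;        /* backing storage */
};

struct probe_buf *probe_buf_alloc(size_t size, unsigned type)
{
    struct probe_buf *b = malloc(sizeof(*b));

    if (!b)
        return NULL;
    b->data = malloc(size);
    if (!b->data) {
        free(b);
        return NULL;
    }
    b->size = size;
    b->used = 0;
    b->type = type;
    return b;
}

void probe_buf_free(struct probe_buf *b)
{
    if (b) {
        free(b->data);
        free(b);
    }
}

/* Append one record; returns 0 on success, -1 if the buffer is full. */
int probe_buf_append(struct probe_buf *b, const void *rec, size_t len)
{
    if (b->used + len > b->size)
        return -1;
    memcpy(b->data + b->used, rec, len);
    b->used += len;
    return 0;
}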

> Once we have the buffering mechanism we need to create an efficient
> API for producers of the data to write to that buffering scheme. To my
> knowledge there is no such generic buffering mechanism already in the
> kernel, Relayfs implements that buffering scheme and an efficient API
> to write to it. Isn't that a good reason to have Relayfs merged?

Sorry, but no. Relayfs is much more than is required for simply managing
buffers (it would be better at this point to say "probe data
containers"). All operations of this kind can be performed using a
reference/index.

> Once the data in the buffer is decided to be committed you need a
> mechanism to get that data from the kernel to userspace. If you don't
> like Relayfs transfer mechanism, what do you suggest using?

Correct me if I'm wrong, and try to fill in all the areas where you see
my knowledge as weaker than yours or that of other core kernel
developers.

1) relayfs was designed for low latency when moving data out of kernel
   space,
2) getting data from probes does not require organizing all of it into a
   regular file system structure, and in most cases it will not require
   low latency either. Only the cases where a buffer must necessarily be
   moved out of kernel space will require minimal overhead.

Many other kernel subsystems allow data to be transferred as the result
of a simple request that takes a reference/index as its argument.
Organizing all the data stored/used by probes into a named structure (if
it is *really* necessary) can IMO be moved outside kernel space. Why?
Because *all kernel-side operations on this data* can seemingly be
performed without an additional naming abstraction (a buffer number, the
buffer size and the type of data stored in the buffer would be all that
is necessary in probably all cases, even when operating on complex
data).
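For example, such a descriptor could be nothing more than this (a
sketch; the structure and field names are hypothetical, not an existing
interface):

#include <stddef.h>

/* Hypothetical descriptor addressing a probe data container by index
 * instead of by a file name. */
struct probe_buf_desc {
    unsigned int index;   /* which probe data container is meant */
    size_t size;          /* total size of that buffer in bytes */
    unsigned int type;    /* type tag of the records it holds */
};

/*
 * A userspace request would then carry only the index plus an
 * offset/length, e.g. something ioctl-like:
 *
 *     read_probe_buf(index, offset, len, dest);
 *
 * No path names or filesystem objects are needed on the kernel side.
 */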

If you really want to get data from probes via fopen()/read(), why not
map the "probe data containers" to procfs/sysfs? Receiving signals from
probes that a mapped buffer's content should be moved out of kernel
space, and/or ALSO receiving signals with DATA (on request from user
space), can probably be done via the existing netlink infrastructure or
(higher-level) event notification.
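For illustration, a minimal userspace sketch of waiting for such
notifications on a netlink socket (the protocol number NETLINK_USERSOCK
and the message layout are only assumptions; a real interface would
define its own netlink family and payload):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>

int main(void)
{
    char buf[4096];
    struct sockaddr_nl addr;
    int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_USERSOCK);

    if (fd < 0) {
        perror("socket");
        return 1;
    }

    memset(&addr, 0, sizeof(addr));
    addr.nl_family = AF_NETLINK;
    addr.nl_pid = getpid();    /* our unicast address */

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }

    for (;;) {
        ssize_t len = recv(fd, buf, sizeof(buf), 0);
        struct nlmsghdr *nh;

        if (len <= 0)
            break;

        /* Walk all netlink messages contained in this datagram. */
        for (nh = (struct nlmsghdr *)buf; NLMSG_OK(nh, len);
             nh = NLMSG_NEXT(nh, len)) {
            if (nh->nlmsg_type == NLMSG_DONE)
                break;
            printf("notification: %u payload bytes\n",
                   (unsigned)NLMSG_PAYLOAD(nh, 0));
        }
    }

    close(fd);
    return 0;
}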

(?)

Allow me to ask you: have you tried to test whether using netlink would
allow these operations to be performed within the necessary time frame?
(with the additional assumption that as much data as possible is
aggregated at a "short range" from the probe) ... probably not, because
most of the skeleton usages of KProbes, and also the LTT interface, were
prepared with the assumption that data is aggregated outside kernel
space.
Do you see this?

This was, and still is, the core cause of LTT's problems, and why it
will never be as useful as DTrace. Aggregating data as close as possible
("short distance") to the probe is a *core DTrace assumption*. Simple
... this is why using DTrace is *very light* even if you enable/hang
thousands of probes inside kernel space, and it still allows this kind
of technique to be used even on very fragile (from the point of view of
stability) or very heavily loaded systems.
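To make that concrete, a minimal sketch of aggregating right at the
probe instead of shipping every event out of the kernel (plain C; the
array-per-CPU model and the names probe_hit/read_total are hypothetical,
and a real kernel version would use per-CPU variables):

#define MAX_CPUS 64

static unsigned long hit_count[MAX_CPUS];

/* Called from the probe handler on the CPU that fired the probe. */
static void probe_hit(int cpu)
{
    hit_count[cpu]++;        /* cheap: no copy, no wakeup, no I/O */
}

/* Called rarely, e.g. only when userspace asks for the aggregate. */
static unsigned long read_total(void)
{
    unsigned long total = 0;
    int cpu;

    for (cpu = 0; cpu < MAX_CPUS; cpu++)
        total += hit_count[cpu];
    return total;
}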

kloczek
--
-----------------------------------------------------------
*People do not have problems; they only create them for themselves*
-----------------------------------------------------------
Tomasz Kłoczko, sys adm @zie.pg.gda.pl|*e-mail: [email protected]*
