Exporting a lot of data to other processes?

Hello everybody,

I've been pondering a question for some time and would like to ask for
better ideas here. It's not entirely about the kernel, although the kernel
surely has some impact, too.


I've got a process/daemon that wants to export information to other
processes. As a model for exporting that data I found sysfs and procfs
nice - a simple "cat" can give you the needed data.

Now, in order to do something like that from userspace, I'd either have to:

-) use FUSE (a sketch follows further below)
   - feels slow (many context switches)
   - a lot of overhead for such a common thing (yet another daemon)
-) use named pipes in some directory structure, and keep them open in
   the daemon, waiting to be written to (sketch near the end of this mail)
   - many open file handles
   - doesn't feel really usable for bigger (>1000) structures
-) use some ramfs/shmfs or similar, and overwrite the data periodically
   (see the sketch right after this list)
   - data is not current
   - runtime overhead (processor load)
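
To make the third option concrete, here's a minimal sketch (the /dev/shm
path and the one-second interval are just made up for illustration);
writing to a temp file and rename()ing it into place at least keeps a
concurrent "cat" from ever seeing a half-written file:

/* Sketch of the ramfs/shmfs variant: periodically rewrite the exported
 * file.  rename() atomically replaces the old file, so a concurrent
 * reader never sees a partial update.  Paths/interval are made up. */
#include <stdio.h>
#include <sys/stat.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    const char *final = "/dev/shm/mydaemon/status";
    const char *tmp   = "/dev/shm/mydaemon/.status.tmp";
    FILE *f;

    mkdir("/dev/shm/mydaemon", 0755);
    for (;;) {
        f = fopen(tmp, "w");
        if (!f)
            return 1;
        /* whatever the daemon wants to export */
        fprintf(f, "uptime=%ld\n", (long)time(NULL));
        fclose(f);
        if (rename(tmp, final) < 0)     /* atomic replace */
            return 1;
        sleep(1);   /* data is up to 1s stale - the known drawback */
    }
}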

The open/close events wouldn't be interesting; only the read() and
(possibly) write() events would have to be relayed - which, IIUC, is not
how FUSE works: it relays everything, open/close included.
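
For comparison, roughly what the FUSE variant would look like - an
untested sketch against the libfuse 2.x high-level API, with a made-up
single /status file and a made-up make_status() helper. The point is that
every read() is relayed into the daemon, so the data is generated at read
time:

/* Untested sketch of the FUSE variant, libfuse 2.x high-level API.
 * The "/status" file and make_status() are made up.
 * Build: gcc -Wall statusfs.c `pkg-config fuse --cflags --libs` */
#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <time.h>

/* Hypothetical helper: render the current data, return its length. */
static int make_status(char *buf, size_t size)
{
    return snprintf(buf, size, "uptime=%ld\n", (long)time(NULL));
}

static int st_getattr(const char *path, struct stat *st)
{
    memset(st, 0, sizeof(*st));
    if (strcmp(path, "/") == 0) {
        st->st_mode  = S_IFDIR | 0755;
        st->st_nlink = 2;
    } else if (strcmp(path, "/status") == 0) {
        st->st_mode  = S_IFREG | 0444;
        st->st_nlink = 1;
        st->st_size  = 4096;            /* upper bound; real size varies */
    } else {
        return -ENOENT;
    }
    return 0;
}

static int st_open(const char *path, struct fuse_file_info *fi)
{
    if (strcmp(path, "/status") != 0)
        return -ENOENT;
    fi->direct_io = 1;   /* size varies per read; don't trust st_size */
    return 0;
}

/* Every read() by any process is relayed here - the data is generated
 * at the moment it is read, so it is always current. */
static int st_read(const char *path, char *buf, size_t size, off_t off,
                   struct fuse_file_info *fi)
{
    char data[4096];
    int len;

    if (strcmp(path, "/status") != 0)
        return -ENOENT;
    len = make_status(data, sizeof(data));
    if (off >= len)
        return 0;
    if (size > (size_t)(len - off))
        size = len - off;
    memcpy(buf, data + off, size);
    return size;
}

static struct fuse_operations st_ops = {
    .getattr = st_getattr,
    .open    = st_open,
    .read    = st_read,
};

int main(int argc, char *argv[])
{
    return fuse_main(argc, argv, &st_ops, NULL);
}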

Is there some better way? For small structures the pipes seem to be the
best way: just wait for a reader, give it the data, and you're finished
(see the sketch below).
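
Here's the pipe variant I have in mind, reduced to a single exported item
(the path is made up). The catch for >1000 items: a non-blocking
open(O_WRONLY) on a FIFO fails with ENXIO while no reader is present, so
each FIFO effectively needs its own thread blocked in open():

/* Sketch of the named-pipe variant, one exported item (path made up).
 * open(O_WRONLY) on a FIFO blocks until some reader (e.g. "cat") opens
 * the other end; we write the current data and close, which gives the
 * reader EOF. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    const char *fifo = "/run/mydaemon/status";
    char buf[256];
    int fd, len;

    mkdir("/run/mydaemon", 0755);
    mkfifo(fifo, 0644);              /* EEXIST on restart is fine */

    for (;;) {
        fd = open(fifo, O_WRONLY);   /* blocks here until a reader comes */
        if (fd < 0)
            return 1;
        len = snprintf(buf, sizeof(buf), "uptime=%ld\n", (long)time(NULL));
        write(fd, buf, len);
        close(fd);                   /* reader sees EOF */
        usleep(10000);  /* let the reader finish before we re-open */
    }
}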


Thank you for any ideas.


Regards,

Phil
