Re: [patch 14/22] pollfs: pollable futex

On 5/3/07, Davide Libenzi <[email protected]> wrote:
> Why is it that futexes *must* be part of the "whole solution"? People
> need solutions to specific problems, not a bloated interface that,
> like a giant blob, includes everything just because it exists.

Sync objects are an essential part of many programs today, and will be
of most programs tomorrow.  Currently you cannot efficiently work on
multiple independent areas which are each protected by some sync
object (mutex, condvar, ...).  You have to create a separate thread
for each.  Busy-looping with the non-blocking mutex operations, for
instance, is not an option.  This is solved by being able to get
events for the availability of the sync objects; see the sketch below.
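
To make the limitation concrete, here is a minimal sketch of what is
required today (all struct and function names are made up for
illustration): one dedicated waiter thread per sync object, because a
single thread cannot block on several mutexes or condvars at once.

	/* Today's workaround, sketched: one waiter thread per
	 * independent area.  Names are illustrative only. */
	#include <pthread.h>

	struct area {
		pthread_mutex_t lock;
		pthread_cond_t  ready;
		int             pending;  /* work available on this area */
	};

	static void *area_worker(void *arg)
	{
		struct area *a = arg;

		pthread_mutex_lock(&a->lock);
		for (;;) {
			/* this wait ties up a whole thread */
			while (!a->pending)
				pthread_cond_wait(&a->ready, &a->lock);
			a->pending = 0;
			/* ... process this area's work ... */
		}
		return NULL;  /* not reached */
	}

	/* The only single-threaded alternative is busy-looping on
	 * pthread_mutex_trylock(), which burns CPU and is not an
	 * option. */

With a pollable sync object you could instead register each area with
the one central event loop and dispatch everything from one thread.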

And before you claim that this is not a common case, take a look at
WaitForMultipleObjects (StudlyCaps and all) in the Windows API.  The
actual interface is horrible, but the concept is sound (it comes from
VMS).  It is the basis of many programs on that platform; basically,
the central loop contains such a call.  Currently such programs would
have to be completely redesigned when ported to Linux if they use any
object which cannot be waited on.
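
For those who have never seen it, the central loop on that platform
looks roughly like this (a sketch; object creation is elided and the
dispatch is illustrative):

	/* Rough sketch of the Win32-style central loop; the handles
	 * can be events, mutexes, processes, ... all in one wait. */
	#include <windows.h>

	static void central_loop(HANDLE objs[], DWORD nobjs)
	{
		for (;;) {
			DWORD r = WaitForMultipleObjects(nobjs, objs,
					FALSE,      /* wake on any */
					INFINITE);  /* no timeout */
			if (r < WAIT_OBJECT_0 + nobjs) {
				/* dispatch objs[r - WAIT_OBJECT_0] */
			} else {
				/* WAIT_ABANDONED_0+n or WAIT_FAILED */
				break;
			}
		}
	}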

There is much more.  As I tried to point out in last year's OLS paper,
central loops around such a call are the perfect scalability
mechanism, and this is what is needed for the processors of today and
tomorrow.


> First you try to bash a solution because it's costly, then you bounce
> back from another angle and say that a solution (pipes) that uses 2
> descriptors, one file, one inode, one dentry and 4KB of kernel memory
> for each instance is a perfectly legal solution.

Stop.  I call the proposed code costly in terms of the code added to
the kernel, which must be maintained and kept in mind when writing the
real next-gen event mechanism.  Not having this code in the kernel
would certainly make a difference.


> Fast, I think we have that pretty much covered, with Ingo pointing
> out a few flaws in the numbers posted previously. Nice, I'll leave
> that out.

You again miss the context.  I was talking about the pipe-based
solution using a signal handler; that scheme is sketched below for
reference.
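
This is the classic "self-pipe trick", roughly as follows (error
handling elided, names illustrative).  Note the per-instance cost the
quote mentions: two descriptors, a file, an inode, a dentry and a 4KB
pipe buffer.

	/* Pipe-plus-signal-handler wakeup, sketched. */
	#include <unistd.h>
	#include <signal.h>

	static int wake_pipe[2];

	static void wake_handler(int sig)
	{
		char c = (char)sig;

		/* write() is async-signal-safe; the central
		 * poll/epoll loop watches wake_pipe[0] and wakes up
		 * when the byte arrives. */
		(void)write(wake_pipe[1], &c, 1);
	}

	static void wake_setup(int sig)
	{
		struct sigaction sa = { .sa_handler = wake_handler };

		(void)pipe(wake_pipe);
		sigaction(sig, &sa, NULL);
	}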


> Epoll scales and already covers a large number of things you may be
> interested in receiving events from: basically everything that has a
> working f_op->poll.

epoll doesn't scale if every thread needs its own epoll set.  Besides
the overhead, this also creates huge program design problems: how do
you atomically remove a file descriptor from a collection of epoll
sets?  See the sketch below.
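
To illustrate, assume one epoll set per thread (a sketch; the names
and thread count are made up):

	/* With one epoll set per thread, the same descriptor is
	 * registered N times, and no single call removes it from all
	 * sets atomically. */
	#include <sys/epoll.h>

	#define NTHREADS 8
	static int epfd[NTHREADS];  /* one epoll_create() fd each */

	static void remove_fd_everywhere(int fd)
	{
		struct epoll_event ev = { 0 };  /* non-NULL needed on
						 * pre-2.6.9 kernels */
		int i;

		for (i = 0; i < NTHREADS; i++)
			epoll_ctl(epfd[i], EPOLL_CTL_DEL, fd, &ev);
		/* Between any two iterations another thread can still
		 * receive events for fd, or fd can be closed and its
		 * number reused. */
	}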


> The other big piece is AIO. Now you can have *another* layer on top
> of AIO, that is included in your blob interface, but why?

I don't know how you arrive at AIO now.  kevent itself is independent
of the AIO code, which merely was done at the same time by the same
person.  AIO was just one kernel service which uses the event
functionality.  The two must be judged independently.