Re: [take14 0/3] kevent: Generic event handling mechanism.

On Sun, 2006-08-27 at 14:03 -0700, Ulrich Drepper wrote:

[ note: there was lots of good stuff that I cut out because it was a
long email and I'm only replying to some of its points ]

> Events to wait for are basically all those with syscalls which can
> potentially block indefinitely:
> 
> - file descriptor
> - POSIX message queues (these are in fact file descriptors but
>   let's make it legitimate)
> - timer expiration
> - signals (just as sigwait, not normal delivery instead of a handler)

For some of them (like SIGTERM), delivery to a kevent queue would
actually make sense.
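
For reference, the closest thing today is blocking the signal and
picking it up synchronously with sigwaitinfo(); a kevent queue could let
that same wait sit alongside fd and timer events instead of needing a
dedicated signal-catching thread. A rough sketch of the status quo
(nothing kevent-specific here, since I don't know what the registration
call would look like):

#include <signal.h>
#include <stdio.h>

int main(void)
{
	sigset_t set;
	siginfo_t info;

	sigemptyset(&set);
	sigaddset(&set, SIGTERM);
	/* block async delivery so the signal stays pending for sigwaitinfo() */
	sigprocmask(SIG_BLOCK, &set, NULL);

	if (sigwaitinfo(&set, &info) == SIGTERM)
		printf("got SIGTERM from pid %d\n", (int)info.si_pid);
	return 0;
}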

> The ring buffer interface is not described in Nicholas' description.

I wasn't even aware there was a ring-buffer interface in the proposed
patches, which is another reason the onus of documenting a patch belongs
on the originator: the random nobody who ends up doing the documenting
may screw it up.
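
If I had to guess, a mmap()ed event ring usually ends up looking
something like the struct below, with the kernel advancing a producer
index and userspace advancing a consumer index -- but that is pure
speculation on my part, field names and all, precisely because the
interface isn't documented:

/* stand-in for the real per-event structure (struct ukevent in the
 * patches, I think); only the ring layout matters for this sketch */
struct ukevent_placeholder {
	unsigned int	type;
	unsigned int	ret_data;
};

struct kevent_ring_guess {
	unsigned int	ring_size;	/* number of slots, fixed at creation */
	unsigned int	kernel_index;	/* next slot the kernel will fill */
	unsigned int	user_index;	/* next slot userspace will consume */
	struct ukevent_placeholder events[];	/* the slots themselves */
};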

> Which brings me to the second point about the current kevent_get_events
> syscall.  I don't think the min_nr parameter is useful.  Probably we
> should not even allow the kevent queue to be used with different max_nr
> parameters in different threads.  If you'd allow this, how would the
> event notification be handled?  A waiter with a smaller required number
> of events would always be woken first.  I think the number of required
> events should be a property of the kevent object.  Then the code would
> create a different kevent object if the requirement is different.  At the
> very least I'd declare it an error if at any time there are two or more
> threads delayed which have different requirements on the number of
> events.  This could provide all the flexibility needed while preventing
> some of the mistakes one can make.
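
For comparison, the existing AIO interface already exposes this same
kind of min_nr/nr pair in io_getevents(); with libaio a call looks
roughly like:

#include <libaio.h>

/* block until at least min_nr completions arrive or the timeout expires,
 * returning up to 64 of them */
static int drain_completions(io_context_t ctx, long min_nr)
{
	struct io_event events[64];
	struct timespec timeout = { .tv_sec = 1, .tv_nsec = 0 };

	return io_getevents(ctx, min_nr, 64, events, &timeout);
}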

I was thinking about this, and it's even worse in the case where a
kevent fd is shared by different processes (either by forking or by
passing it via PF_UNIX sockets).
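
The descriptor-passing case is the usual SCM_RIGHTS dance; as far as
the socket layer is concerned the kevent fd is an ordinary file
descriptor, so something like this hands it to another process:

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

static int send_fd(int unix_sock, int fd_to_pass)
{
	char dummy = '*';
	char cbuf[CMSG_SPACE(sizeof(int))];
	struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
	struct msghdr msg = { 0 };
	struct cmsghdr *cmsg;

	msg.msg_iov = &iov;
	msg.msg_iovlen = 1;
	msg.msg_control = cbuf;
	msg.msg_controllen = sizeof(cbuf);

	cmsg = CMSG_FIRSTHDR(&msg);
	cmsg->cmsg_level = SOL_SOCKET;
	cmsg->cmsg_type = SCM_RIGHTS;
	cmsg->cmsg_len = CMSG_LEN(sizeof(int));
	memcpy(CMSG_DATA(cmsg), &fd_to_pass, sizeof(int));

	return sendmsg(unix_sock, &msg, 0) < 0 ? -1 : 0;
}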

What happens when you queue an AIO completion to a shared kevent queue?
(The AIO read only happened in one address space, or did it? What if the
read was to a shared memory region? What if the memory region is shared,
but mapped at different addresses? What if not all of the processes
involved have that AIO fd open?)
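
To make the address-space point concrete: the buffer in an AIO
submission is a raw pointer that only means something in the
submitter's address space, so a completion delivered to somebody else's
queue describes memory that process may not be able to see at all (or
sees at a different address). With libaio the submission side looks
roughly like:

#include <libaio.h>

static int submit_read(io_context_t ctx, struct iocb *cb, int fd,
		       void *buf, size_t len)
{
	struct iocb *cbs[1] = { cb };

	/* buf is a virtual address, meaningful only to the caller */
	io_prep_pread(cb, fd, buf, len, 0);
	return io_submit(ctx, 1, cbs);
}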

Also complicated is the case where waiting threads have different
priorities, different timeouts, and different minimum event counts --
how do you decide which thread gets events first? And what if the right
answer changes depending on whether you want to maximize throughput or
interactivity?

-- 
Nicholas Miell <[email protected]>

