On Thu, Nov 02, 2006 at 09:18:55AM +0100, Eric Dumazet ([email protected]) wrote:
> Evgeniy Polyakov a écrit :
> >pipes will work with kevent's poll mechanisms only, so there will not be
> >any performance gain at all since it is essentially the same as epoll
> >design with waiting and rescheduling (all my measurements with
> >epoll vs. kevent_poll always showed the same rates), pipes require the same
> >notifications as sockets for maximum performance.
> >I've put it into todo list.
>
> Evgeniy I think this part is *important*. I think most readers of lkml are
> not aware of exact mechanisms used in epoll, kevent poll, and 'kevent'
>
> I don't understand why epoll is bad for you, since for me,
> ep_poll_callback() is fast enough, even if we can make it touch fewer cache
> lines by reordering 'struct epitem' correctly. My epoll_pipe_bench doesn't
> change the rescheduling rate of the test machine.
>
> Could you in your home page add some doc that clearly show the path taken
> for those 3 mechanisms and different events sources (At least sockets)
It is documented there. From the project page:
"It [kevent] supports socket notifications (accept, sending and receiving),
network AIO (aio_send(), aio_recv() and aio_sendfile()), inode
notifications (create/remove), generic poll()/select() notifications and
timer notifications."
In each patch I give a short description, the socket notification patch
included.
By the poll design we have to set up the following data: a
poll_table_struct, which contains a callback. That callback is invoked
on each sys_poll() -> driver's poll() -> poll_wait() chain; it allocates
a new private structure, which must contain a wait_queue_t (whose own
callback is invoked every time wake_up() is called for the given
wait_queue_head), and links that entry onto the given wait_queue_head.
Kevent takes a different approach: so-called origins (files, inodes,
sockets and so on) keep queues of userspace requests. A socket origin,
for example, can only queue events of the form $type.$event:
socket.send, socket.recv, socket.accept. When new data arrives, the
appropriate event is marked as ready and moved onto the ready queue
(both very short operations) and the requesting thread is woken; it can
then fetch the ready events and requeue them back (or remove them,
depending on flags). There are no allocations in kevent_get_events()
(epoll_wait() does not allocate either), and no potentially long lists
of wait_queue entries linked to the same wait_queue_head_t, which must
be traversed on every wake_up(). It also has a much smaller memory
footprint than epoll: a single kevent versus an epitem plus an
eppoll_entry.
> Eric
--
Evgeniy Polyakov
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [email protected]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/