Re: [PATCH 4 of 4] Introduce aio system call submission and completion system calls

On Jan 31, 2007, at 9:21 AM, Andi Kleen wrote:

> On Wednesday 31 January 2007 18:15, Zach Brown wrote:

>> On Jan 31, 2007, at 12:58 AM, Andi Kleen wrote:

>>> Do you have any numbers how this compares cycle wise to just doing
>>> clone+syscall+exit in user space?

>> Not yet, no.  Release early, release often, and all that.  I'll throw
>> something together.

> So what was the motivation for doing this then?

Most fundamentally? Providing AIO system call functionality at a much lower maintenance cost. The hope is that the cost of adopting these fibril things will be lower than the cost of having to touch each code path that wants AIO support.

I simply don't believe that it's cheap to update code paths to support non-blocking state machines. As just one example of a looming cost, consider the retry-based buffered fs AIO patches that exist today. Their requirement to maintain these precisely balanced code paths that know to only return -EIOCBRETRY once they're at a point where retries won't access current-> state seems unsustainable to me. This stems from the retries being handed off to the aio kernel threads, which have their own task_struct. fs/aio.c goes to the trouble of migrating ->mm from the submitting task_struct, but nothing else. Continually adjusting this finely balanced relationship between paths that return -EIOCBRETRY and the fields of task_struct that fs/aio.c knows to share with the submitting context seems unacceptably fragile.
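
To make that concrete, here's roughly the shape of the constraint as I read it. This is a sketch, not code from the buffered AIO patches; the fs_* helpers are made up, though the use_mm()/unuse_mm() borrow is roughly what fs/aio.c does today:

/* Hypothetical fs read path written for the retry model. */
static ssize_t some_fs_aio_read(struct kiocb *iocb, char __user *buf,
				size_t len, loff_t pos)
{
	/*
	 * Anything that depends on the submitter's task_struct --
	 * ioprio, io_context, signals, rlimits -- has to happen before
	 * this point, because a retry runs in an aio kernel thread
	 * that only borrows ->mm from the submitter.
	 */
	if (!fs_pages_ready(iocb, pos, len))		/* made-up helper */
		return -EIOCBRETRY;	/* retried later from the aio kthread */

	/* Past here we may be running as the aio kernel thread. */
	return fs_copy_cached(iocb, buf, len, pos);	/* made-up helper */
}

/* ...while the retry side in fs/aio.c shares only the address space: */
use_mm(ctx->mm);
ret = iocb->ki_retry(iocb);
unuse_mm(ctx->mm);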

Even with those buffered IO patches we still only get non-blocking behaviour at a few specific blocking points in the buffered IO path. It's nothing like the guaranteed non-blocking submission return that the fibril-based approach provides.
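
For contrast, the contract fibril-based submission aims for looks more like this. Again a sketch with invented names, not code lifted from the patch series:

/* Hand-wavy sketch of fibril submission. */
long fibril_submit(struct kiocb *iocb)
{
	struct fibril *f = fibril_alloc(current);	/* small stack + saved regs */

	/*
	 * Run the plain synchronous syscall path on the fibril's stack,
	 * still inside the submitting task_struct, so ioprio, io_context,
	 * signals, etc. are all the submitter's own.
	 */
	if (fibril_run(f, iocb))
		return f->ret;		/* completed without blocking */

	/*
	 * The path blocked; we've already switched back to the submitter's
	 * stack and can return without waiting.  The fibril finishes the
	 * syscall later, when whatever it slept on wakes it.
	 */
	return -EIOCBQUEUED;
}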

> Its only point is to have smaller startup costs for AIO than clone+fork
> without fixing the VFS code to be a state machine, right?

Smaller startup costs and fewer behavioural differences. Did that message to Nick about ioprio and io_context resonate with you at all?

> I'm personally unclear if it's really less work to teach a lot of
> code in the kernel about a new thread abstraction than changing VFS.

Why are we limiting the scope of moving to a state machine just to the VFS? If you look no further than some hypothetical AIO iscsi/aoe/nbd/whatever target, you obviously include networking. Probably splice() too, if you're aggressive :).

Let's be clear. I would be thrilled if AIO were implemented with natively non-blocking handlers. I don't think it will happen. Not because we don't think it sounds great on paper, but because it's a hugely complex endeavor that would take development and maintenance effort away from the task of keeping basic functionality working.

So the hope with fibrils is that we lower the barrier to getting AIO syscall support across the board at an acceptable cost.

It doesn't *stop* us from migrating very important paths (storage, networking) to wildly optimized AIO implementations. But it also doesn't force AIO support to wait for that.

> Your patches don't look that complicated yet but you openly
> admitted you waved away many of the more tricky issues (like
> signals etc.) and I bet there are yet-unknown side effects
> of this too that will need more changes.

To quibble, "waved away" implies that they've been dismissed. That's not right. It's a work in progress, so yes, there will be more fiddly details discovered and addressed over time. The hope is that when all is said and done it'll still be worth merging. If at some point it gets to be too much, well, at least we'll have this work to reference as a decisive attempt.

> I'm not sure the fibrils thing will be that much faster than
> a possibly somewhat fast pathed for this case clone+syscall+exit.

I'll try and get some numbers for you sooner rather than later.
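
For reference, the userspace baseline I take you to mean is roughly the following: clone a worker that shares the address space, have it issue one synchronous syscall and exit, then reap it. This is just a sketch to agree on what's being measured, not the benchmark I'll actually run; loop the clone/wait pair under a TSC read to get per-op cycles.

#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

#define STACK_SIZE	(64 * 1024)

struct io_args {
	int fd;
	char *buf;
	size_t len;
};

/* The "submission": one synchronous syscall on its own clone, then exit. */
static int io_worker(void *arg)
{
	struct io_args *a = arg;

	pread(a->fd, a->buf, a->len, 0);
	_exit(0);
}

int main(void)
{
	static char buf[4096];
	struct io_args a = { .buf = buf, .len = sizeof(buf) };
	char *stack = malloc(STACK_SIZE);
	pid_t pid;

	a.fd = open("/etc/hostname", O_RDONLY);	/* any readable file */
	if (a.fd < 0 || !stack) {
		perror("setup");
		return 1;
	}

	/* Share the address space and fd table, as an aio submission would. */
	pid = clone(io_worker, stack + STACK_SIZE,
		    CLONE_VM | CLONE_FILES | SIGCHLD, &a);
	if (pid < 0) {
		perror("clone");
		return 1;
	}

	/* The "completion": reap the worker.  Loop this and time it to get
	 * the per-op cost to compare against fibril submission. */
	waitpid(pid, NULL, 0);

	free(stack);
	return 0;
}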

Thanks for being diligent, this is exactly the kind of hard look I want this work to get.

- z
