On Wed, Jun 22 2005, [email protected] wrote:
> --- Jens Axboe <[email protected]> wrote:
> > The problem here is that cfq (and the other io schedulers) still
> > consider the io async even if fsync() ends up waiting for it to
> > complete. So there's no real QOS being applied to these pending writes,
> > and I don't immediately see how we can improve that situation right now.
> <I might sound stupid>
> I still don't understand why async requests are in a different queue than the
> sync ones?
> Wouldn't it be simpler to consider all the IO the same, and like you pointed
> out, consider synced IO to be equivalent to async + some sync (as in wait for
> completion) call (fsync goes a little too far).
> </I might sound stupid>
First, let's cover a little terminology. All io is really async in Linux;
the block io model is inherently async in nature. So sync io is really
just async io that is being waited on immediately. When I talk about
sync and async io in the context of the io scheduler, the sync io refers
to io that is wanted right away. That would be reads or direct writes.
The async io is something that we can complete at will, where latency
typically doesn't matter. That would be normal dirtying of data that
needs to be flushed to disk.
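In code, the distinction the io scheduler makes boils down to something
like the below. This is just a sketch to illustrate the idea, not the
actual kernel code; the REQ_* flag names and the struct are made up for
the example.

#include <stdbool.h>
#include <stdio.h>

/* made-up flags, just for illustration */
#define REQ_READ   (1u << 0)    /* data is wanted in memory right away */
#define REQ_SYNC   (1u << 1)    /* submitter waits on this write       */

struct request {
        unsigned int flags;
};

/*
 * Reads and explicitly synchronous writes get sync treatment,
 * everything else is async writeback we can complete at will.
 */
static bool rq_is_sync(const struct request *rq)
{
        return (rq->flags & (REQ_READ | REQ_SYNC)) != 0;
}

int main(void)
{
        struct request read_rq  = { .flags = REQ_READ };
        struct request flush_rq = { .flags = 0 };   /* normal dirty data */

        printf("read:  %s\n", rq_is_sync(&read_rq)  ? "sync" : "async");
        printf("flush: %s\n", rq_is_sync(&flush_rq) ? "sync" : "async");
        return 0;
}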
Another property of sync io in the io scheduler is that it usually
implies that another sync io request will follow immediately (well,
almost) after one has completed. So there's a dependency relation between
sync requests, that async requests don't share.
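From userspace the dependency is easy to see: a sequential reader blocks
in every read(), so as soon as one request completes the next one is
issued right behind it. A buffered writer never blocks like that, it
just dirties pages and lets the kernel flush them later. Rough sketch
(the file path is taken from the command line, nothing kernel-specific):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        char buf[4096];
        ssize_t ret;
        int fd;

        if (argc < 2) {
                fprintf(stderr, "usage: %s <file>\n", argv[0]);
                return 1;
        }

        fd = open(argv[1], O_RDONLY);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        /*
         * Each read() blocks until the device has returned the data,
         * so the io scheduler sees the next sync request arrive almost
         * immediately after the previous one completed.
         */
        while ((ret = read(fd, buf, sizeof(buf))) > 0)
                ;

        close(fd);
        return 0;
}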
So there are different requirements for sync and async io. The io
scheduler tries to minimize latencies for async requests somewhat,
mainly just by making sure that it isn't starved for too long. However,
when you do an fsync, you want to complete lots of writes, but the io
scheduler doesn't get this info passed down. If you keep flooding the
queue with new writes, this could take quite a while to finish. We could
improve this situation by only flushing out the needed data, or with a
simple hack that only flushes out already queued io (provided the fsync()
has already made sure that the correct data is queued).
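For reference, the kind of test case we're talking about can be
reproduced with something like the below (an untested sketch, file names
and sizes made up): one process keeps dirtying data while another writes
a small file and times the fsync().

#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
        char buf[65536];
        struct timeval t0, t1;
        pid_t pid;
        int fd;

        memset(buf, 0, sizeof(buf));

        pid = fork();
        if (pid == 0) {
                /* child: flood the queue with buffered (async) writes */
                fd = open("flood.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
                if (fd < 0)
                        _exit(1);
                for (;;)
                        write(fd, buf, sizeof(buf));
        }

        sleep(5);       /* let some writeback pile up behind us */

        /* parent: a small write followed by a timed fsync */
        fd = open("small.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) {
                perror("open");
                kill(pid, SIGKILL);
                return 1;
        }
        write(fd, buf, sizeof(buf));

        gettimeofday(&t0, NULL);
        fsync(fd);
        gettimeofday(&t1, NULL);

        printf("fsync took %.2f seconds\n",
               (t1.tv_sec - t0.tv_sec) +
               (t1.tv_usec - t0.tv_usec) / 1000000.0);

        kill(pid, SIGKILL);
        return 0;
}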
I will try to play around with this a little; it's definitely something
that would be interesting and worthwhile to improve.
> > What file system are you using? I ran your test on ext2, and it didn't
> > give me more than ~2 seconds latency for the fsync. Tried reiserfs now,
> > and it's in the 23-24 range.
> >
> I am using ext3 on Fedora Core 3.
Journalled file systems will behave worse here, because they have to
tend to the journal as well. Can you try mounting that partition as ext2
and see what numbers that gives you?
--
Jens Axboe