On August 18, [email protected] wrote:
> Jens Axboe <[email protected]> writes:
>
> > On Thu, Aug 17 2006, Andrew Morton wrote:
> > > It seems that the many-writers-to-different-disks workloads don't happen
> > > very often. We know this because
> > >
> > > a) The 2.4 performance is utterly awful, and I never saw anybody
> > > complain and
> >
> > Talk to some of the people that used DVD-RAM devices (or other
> > excruciatingly slow writers) on their system, and they would disagree
> > violently :-)
>
> I hit this recently while doing backups to a slow external USB disk.
> The system was quite unusable (some commands blocked for over a minute)

Ouch.
I suspect we are going to see more of this, as a USB drive for backups
is probably a very attractive option for many people.

The 'obvious' solution would be to count dirty pages per backing_dev
and rate-limit writes based on that count.
But counting pages can be expensive.  I wonder if there might be some
way to get the required write throttling without doing too much counting.
Could we watch when the backing_dev is congested and use that?

e.g. when Dirty+Writeback is between max_dirty/2 and max_dirty,
balance_dirty_pages waits until mapping->backing_dev_info
is not congested.

That might slow things down, but it is hard to know whether it would
slow them down by the right amount...
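
Very roughly, I imagine something like this inside balance_dirty_pages()
(completely untested, just to show the shape; bdi_write_congested() and
blk_congestion_wait() already exist, while 'dirty', 'writeback' and
'max_dirty' stand in for whatever counts and limits balance_dirty_pages
already computes):

	if (dirty + writeback > max_dirty / 2) {
		struct backing_dev_info *bdi = mapping->backing_dev_info;

		/*
		 * Back off while this device's queue is congested,
		 * instead of letting the writer keep dirtying pages
		 * at full speed.
		 */
		while (bdi_write_congested(bdi))
			blk_congestion_wait(WRITE, HZ/10);
	}

A device that never reports congestion would never block here, so the
existing hard limit at max_dirty would still be needed as a backstop.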

Then again, large machines are likely to have lots of different
backing_devs, so the dirty pages (and the counter updates) would be
spread across many devices, and maybe counting the dirty pages per
backing_dev wouldn't be too expensive after all?
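
If we did go the counting route, the obvious (purely hypothetical) shape
would be an atomic counter in backing_dev_info, bumped wherever a page is
dirtied against that device and dropped when it is cleaned; the field and
helpers below don't exist, they are only to illustrate:

	struct backing_dev_info {
		/* ... existing fields ... */
		atomic_long_t nr_dirty;	/* dirty pages against this device */
	};

	/* call where a page becomes dirty, e.g. __set_page_dirty_nobuffers() */
	static inline void bdi_dirty_inc(struct backing_dev_info *bdi)
	{
		atomic_long_inc(&bdi->nr_dirty);
	}

	/* call where a page is cleaned or truncated away */
	static inline void bdi_dirty_dec(struct backing_dev_info *bdi)
	{
		atomic_long_dec(&bdi->nr_dirty);
	}

balance_dirty_pages could then compare atomic_long_read(&bdi->nr_dirty)
against some per-device share of the dirty limit.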
NeilBrown