> > 1) File backed pages -> file
> >
> > dirty + writeback count remains constant
> >
> > 2) Anonymous pages -> swap
> >
> > writeback count increases, dirty balancing will hold back file
> > writeback in favor of swap
> >
> > So the real question is: does case 2 need rate limiting, or is it OK
> > to let the device queue fill with swap pages as fast as possible?
>
> Because balance_dirty_pages() maintains:
>
>     nr_dirty + nr_unstable + nr_writeback <
>         total_dirty + nr_cpus * ratelimit_pages
>
> throttle_vm_writeout() _should_ not deadlock on that, unless you're
> caught in the error term: nr_cpus * ratelimit_pages.
And it does get caught on that in small memory machines. This
deadlock is easily reproducible on a 32MB UML instance.  I haven't yet
tested with the per-bdi patches, but I don't think they make a
difference in this case.
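
To put rough numbers on the 32MB case (the 10% dirty ratio and the
per-CPU ratelimit below are assumptions for illustration, not values
read out of any particular kernel):

/* back-of-the-envelope for a 32MB, single-CPU machine */
#include <stdio.h>

int main(void)
{
	unsigned long total_pages  = (32UL << 20) >> 12;  /* ~8192 4k pages */
	unsigned long dirty_thresh = total_pages / 10;    /* ~819, assumed 10% */
	unsigned long slack        = dirty_thresh / 10;   /* ~82: the 10% headroom
							     throttle_vm_writeout()
							     allows above the limit */
	unsigned long error        = 1 /* cpu */ * 256;   /* assumed ratelimit_pages */

	printf("dirty_thresh=%lu slack=%lu error=%lu\n",
	       dirty_thresh, slack, error);

	/* error >> slack: balance_dirty_pages() can leave the counters a
	 * couple of hundred pages above dirty_thresh, while
	 * throttle_vm_writeout() only tolerates ~80 pages over it, so a
	 * task stuck there can wait forever. */
	return 0;
}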
> Which can only happen when it is larger than 10% of dirty_thresh.
>
> Which is even more unlikely since it doesn't account nr_dirty (as I
> think it should).
I think nr_dirty is totally irrelevant here: we don't care about
case 1), and in case 2) nr_dirty doesn't play any role.
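
To make that concrete, a toy sketch (not kernel code; the function
names are made up, and nr_dirty/nr_writeback just stand in for the
real global page state counters):

static unsigned long nr_dirty, nr_writeback;	/* stand-ins */

/* Case 1: file backed page -> file */
static void start_file_writeback(void)
{
	nr_dirty--;		/* page stops being counted as dirty...      */
	nr_writeback++;		/* ...and starts being counted as writeback, */
				/* so nr_dirty + nr_writeback is constant    */
}

/* Case 2: anonymous page -> swap */
static void start_swap_writeback(void)
{
	/* an anonymous page was never in nr_dirty, so there is
	 * nothing to decrement here... */
	nr_writeback++;		/* ...but writeback still grows, pushing the
				 * sum toward the threshold and holding back
				 * file writeback */
}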
> As for 2), yes I think having a limit on the total number of pages in
> flight is a good thing.
Why?
> But that said, there might be better ways to do that.
Sure, if we do need to globally limit the number of under-writeback
pages, then I think we need to do it independently of the dirty
accounting.
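
For what it's worth, a minimal hypothetical sketch of such a limit
(the names and the tunable are invented, it is racy under concurrency,
and it is not meant as a real patch):

#include <linux/wait.h>
#include <asm/atomic.h>

/* a cap on pages under writeback, with its own counter and waitqueue,
 * never touching the dirty accounting */
static atomic_long_t nr_writeback_inflight = ATOMIC_LONG_INIT(0);
static DECLARE_WAIT_QUEUE_HEAD(writeback_limit_wq);
static unsigned long writeback_limit = 1024;	/* made-up tunable */

/* called before submitting a page for writeback */
static void writeback_inflight_get(void)
{
	wait_event(writeback_limit_wq,
		   atomic_long_read(&nr_writeback_inflight) < writeback_limit);
	atomic_long_inc(&nr_writeback_inflight);
}

/* called from the writeback completion path */
static void writeback_inflight_put(void)
{
	if (atomic_long_dec_return(&nr_writeback_inflight) < writeback_limit)
		wake_up(&writeback_limit_wq);
}

That way whatever limit we pick doesn't have to share headroom with
the dirty accounting at all.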
Miklos