On Tue, 14 Aug 2007, Peter Zijlstra wrote:
> > Ok but that could be addressed by making sure that a certain portion of
> > memory is reserved for clean file backed pages.
>
> Which gets us back to the initial problem of sizing this portion and
> ensuring it is big enough to service the need.
Clean file-backed pages dominate memory on most boxes. Their count can
be computed as NR_FILE_PAGES - NR_FILE_DIRTY.
On my 2G system that is:

Cached:        1731480 kB
Dirty:             424 kB
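
In kernel context that is just a subtraction of two vmstat counters.
A minimal sketch (not from the patch under discussion; it assumes the
2.6 global_page_state() interface, and clamps because the two counters
are updated independently and the difference can transiently go
negative):

static unsigned long global_clean_file_pages(void)
{
	long clean = global_page_state(NR_FILE_PAGES) -
		     global_page_state(NR_FILE_DIRTY);

	/* Per-cpu counter folding can make the counters inconsistent. */
	return clean > 0 ? clean : 0;
}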
So for most loads the patch as is will fix your issues. The problem
arises with extreme loads that turn the majority of pages anonymous.
We could change min_free_kbytes to specify the number of free + clean
pages required (once we can do atomic reclaim we no longer need it at
all). Then we could dedicate a large portion of memory to
min_free_kbytes. 20%? That would give you 400M on my box, which would
certainly suffice.
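
The watermark check would then count clean file-backed pages as
available in addition to truly free pages. A sketch of what I mean
(hypothetical; the real zone_watermark_ok() looks only at free pages,
and min_pages here would be the zone's share of the reinterpreted
min_free_kbytes):

static int zone_clean_watermark_ok(struct zone *zone,
				   unsigned long min_pages)
{
	unsigned long avail;

	/* Free pages plus clean file-backed pages in this zone. */
	avail = zone_page_state(zone, NR_FREE_PAGES) +
		zone_page_state(zone, NR_FILE_PAGES) -
		zone_page_state(zone, NR_FILE_DIRTY);

	return avail >= min_pages;
}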
If the number of clean file-backed pages falls below that limit then
do the usual reclaim. If we write anonymous pages out to swap then
they too become clean and reclaimable.
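
So the trigger is nothing new, only the quantity being checked
changes. Roughly (clean_pages_min being a hypothetical target in
pages derived from the new min_free_kbytes, and reusing the helper
sketched above):

static void check_clean_pages(struct zone *zone)
{
	/*
	 * Kick kswapd when the clean pool runs low; reclaim may then
	 * swap out anonymous pages, turning them into clean pages too.
	 */
	if (global_clean_file_pages() < clean_pages_min)
		wakeup_kswapd(zone, 0);
}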