On Fri, 8 Sep 2006 13:27:18 +0100
Andy Whitcroft <[email protected]> wrote:
> When we run out of memory of a suitable size we enter reclaim.
> The current reclaim algorithm targets pages in LRU order, which
> is great for fairness but highly unsuitable if you desire pages at
> higher orders. To get pages of higher order we must shoot down a
> very high proportion of memory: >95% in a lot of cases.
>
> This patch introduces an alternative algorithm used when requesting
> higher-order allocations. Here we look at memory in ranges of the
> order requested. We make a quick pass to see whether all pages in a
> range are likely to be reclaimable; only then do we apply reclaim to
> the pages in that range.
>
> Testing in combination with fragmentation avoidance shows
> significantly improved chances of a successful allocation at
> higher order.
I bet it does.
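For concreteness, the quick pass described in the quoted text might look
roughly like the sketch below; block_looks_reclaimable() is a hypothetical
name and the details are guessed at, not taken from the patch itself.
Reclaim would only be applied to a range for which this returns 1:

#include <linux/mm.h>

/*
 * Quick pass over an order-aligned block of pages: report whether every
 * page in the block is an inactive LRU page, i.e. a cheap reclaim
 * candidate, before we commit to reclaiming any of them.
 */
static int block_looks_reclaimable(unsigned long pfn, int order)
{
        unsigned long base = pfn & ~((1UL << order) - 1);
        unsigned long i;

        for (i = 0; i < (1UL << order); i++) {
                struct page *page;

                if (!pfn_valid(base + i))
                        return 0;
                page = pfn_to_page(base + i);
                if (!PageLRU(page) || PageActive(page))
                        return 0;
        }
        return 1;
}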
I'm somewhat surprised at the implementation. Would it not be sufficient
to do this within shrink_inactive_list()? Something along the lines of:

- Pick the tail page off the LRU.
- For all "neighbour" pages (alignment == 1<<order, count == 1<<order):
  - If they're all PageLRU and !PageActive, add them all to page_list
    for possible reclaim.
And, in shrink_active_list():

- Pick the tail page off the LRU.
- For all "neighbour" pages (alignment == 1<<order, count == 1<<order):
  - If they're all PageLRU, put all the active pages in this block onto
    l_hold for possible deactivation.
Maybe all that can be done in isolate_lru_pages(); a rough sketch of what
that might look like follows.
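Along those lines, isolating a whole block from within isolate_lru_pages()
might look something like this sketch; isolate_neighbours() is a
hypothetical helper (page refcounting and LRU statistics updates are
omitted), reusing the block_looks_reclaimable() check sketched above:

#include <linux/mm.h>
#include <linux/list.h>

/* The quick-pass check from the earlier sketch. */
static int block_looks_reclaimable(unsigned long pfn, int order);

/*
 * Once the tail page's block passes the quick pass, pull the whole
 * order-aligned block off the LRU onto dst so it can be reclaimed as a
 * unit.  Assumes the caller holds zone->lru_lock, as
 * isolate_lru_pages() does; refcounting is omitted for brevity.
 */
static void isolate_neighbours(struct page *page, int order,
                               struct list_head *dst)
{
        unsigned long base = page_to_pfn(page) & ~((1UL << order) - 1);
        unsigned long i;

        if (!block_looks_reclaimable(base, order))
                return;

        for (i = 0; i < (1UL << order); i++) {
                struct page *p = pfn_to_page(base + i);

                ClearPageLRU(p);
                list_move(&p->lru, dst);
        }
}

The shrink_active_list() case would be analogous, with the quick pass
relaxed to drop the !PageActive test.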