On Fri, 18 May 2007 09:32:23 +0200 Nick Piggin <[email protected]> wrote:
> On Fri, May 18, 2007 at 12:19:05AM -0700, Andrew Morton wrote:
> > On Fri, 18 May 2007 06:08:54 +0200 Nick Piggin <[email protected]> wrote:
> >
> > > Many batch operations on struct page are completely random,
> >
> > But they shouldn't be: we should aim to place physically contiguous pages
> > into logically contiguous pagecache slots, for all the reasons we
> > discussed.
>
> For big IO batch operations, pagecache would be more likely to be
> physically contiguous, as would LRU, I suppose.

read(), write(), truncate(), writeback, pagefault. Pretty common stuff.

> I'm more thinking of operations where things get reclaimed over time,
> touched or dirtied in slightly different orderings, interleaved with
> other allocations, etc.

Yes, that can happen. But in such cases we by definition aren't touching
the pageframes very often. I'd assert that when the kernel is really
hitting those pageframes hard, it is commonly doing this in ascending
pagecache order.

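To make the locality argument concrete, here is a rough userspace sketch
(not kernel code; the 32-byte descriptor, the 64-byte line size and the
one-line reuse model are assumptions for illustration only). It models
mem_map as a packed array of page descriptors and counts cache-line
transitions for an ascending walk versus a scattered one:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

#define CACHELINE 64
#define NPAGES    4096

struct fake_page {                      /* stand-in for struct page */
        unsigned long flags;
        unsigned long private;
        void *mapping;
        unsigned long index;            /* 32 bytes on LP64: two per line */
};

static struct fake_page mem_map[NPAGES];

/*
 * Crude miss model: count accesses whose cache line differs from the
 * previous access's line, i.e. assume a line does not survive in cache
 * until it is revisited much later.
 */
static size_t line_changes(const unsigned int *order, size_t n)
{
        size_t count = 0, prev = (size_t)-1, i;

        for (i = 0; i < n; i++) {
                size_t line = ((uintptr_t)&mem_map[order[i]] -
                               (uintptr_t)mem_map) / CACHELINE;
                if (line != prev)
                        count++;
                prev = line;
        }
        return count;
}

int main(void)
{
        static unsigned int seq[NPAGES], rnd[NPAGES];
        size_t i;

        for (i = 0; i < NPAGES; i++)
                seq[i] = i;
        memcpy(rnd, seq, sizeof(rnd));
        for (i = NPAGES - 1; i > 0; i--) {      /* Fisher-Yates shuffle */
                size_t j = rand() % (i + 1);
                unsigned int tmp = rnd[i];
                rnd[i] = rnd[j];
                rnd[j] = tmp;
        }

        printf("ascending order: %zu line transitions\n",
               line_changes(seq, NPAGES));
        printf("scattered order: %zu line transitions\n",
               line_changes(rnd, NPAGES));
        return 0;
}

On this toy model the ascending walk changes lines about half as often
as the scattered one; how much of that survives with the real
sizeof(struct page) and real access patterns is the open question.
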
>
> > If/when that happens, there will be a *lot* of locality of reference
> > against the pageframes in a lot of important codepaths.
>
> And when it doesn't happen, we eat 75% more cache misses. And for that
> matter we eat 75% more cache misses for non-batch operations, such as
> slab allocating or freeing a page.
"measure twice, cut once"