Re: [PATCH][RFC] 4K stacks default, not a debug thing any more...?

On 07/18/2007 06:54 PM, Matt Mackall wrote:

> You can expect the distribution of file sizes to follow a gamma
> distribution, with a large hump towards the small end of the spectrum
> around 1-10K, dropping off very rapidly as file sizes grow.

Okay.
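For what it's worth, a quick back-of-the-envelope simulation of the internal fragmentation a page-size choice causes under a gamma-shaped file-size distribution (the shape/scale parameters here are illustrative guesses, not measurements from any real filesystem):

```python
import math
import random

random.seed(0)

# Illustrative file sizes: gamma with shape < 1 puts the hump at the
# small end, in the 1-10K range, dropping off as sizes grow.
sizes = [max(1, int(random.gammavariate(0.8, 8192))) for _ in range(100_000)]

def pagecache_bytes(sizes, page):
    # Each file occupies ceil(size / page) whole pages in the page cache,
    # so the tail of the last page is wasted.
    return sum(math.ceil(s / page) * page for s in sizes)

used = sum(sizes)
for page in (4096, 8192):
    total = pagecache_bytes(sizes, page)
    print(f"{page // 1024}K pages: fragmentation overhead {(total - used) / used:.1%}")
```

With most files smaller than a page, doubling the page size roughly doubles the wasted tail per small file, which is where the systemwide overhead comes from.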

> > Not too sure then that 8K wouldn't be something I'd want, given fewer
> > pagefaults and all that...

> Fewer minor pagefaults, perhaps. Readahead already deals with most of
> the major pagefaults that larger pages would.

Mmm, yes.

> Anyway, raising the systemwide memory overhead by up to 15% seems an
> awfully silly way to address the problem of not being able to allocate
> a stack when you're down to your last 1 or 2% of memory!

Well, I've seen larger page sizes come up in more situations, specifically for allocation overhead -- i.e., making the struct pages fit in lowmem on hugemem x86 boxes was the first context I heard of it in. But yes, otherwise it's (also) mostly database loads, which have obviously moved to 64-bit since.
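As a rough sketch of that allocation overhead (assuming ~32 bytes per struct page on 32-bit x86 of that era -- an assumption, the real size varies with config options):

```python
SIZEOF_STRUCT_PAGE = 32  # assumed size on 32-bit x86; varies with config

def mem_map_bytes(ram_bytes, page_size):
    # The kernel keeps one struct page per physical page frame, so the
    # mem_map array shrinks proportionally as the page size grows.
    return (ram_bytes // page_size) * SIZEOF_STRUCT_PAGE

GiB = 1 << 30
for page in (4096, 8192, 16384):
    mb = mem_map_bytes(64 * GiB, page) / (1 << 20)
    print(f"{page // 1024}K pages on a 64 GiB box: mem_map ~{mb:.0f} MiB")
# → 4K pages: ~512 MiB, i.e. most of the ~896 MiB of lowmem; 8K halves it
```

Half a gigabyte of struct pages against ~896 MiB of lowmem is exactly the hugemem squeeze; doubling the page size halves it.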

Pagecache tail-packing seems like a promising idea to deal with the downside of larger pages, but I'll admit I'm not particularly sure how many _up_ sides to them are left on x86 (not -64) now that it's becoming a legacy architecture (and since you just shot down the pagefaults thing).
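A toy sketch of what tail-packing would buy: instead of each file's sub-page tail occupying a whole page of its own, the tails of several files share pages. The first-fit packing below is purely illustrative, not how any real implementation works:

```python
import math

PAGE = 8192

def cache_pages(sizes, tail_packing):
    """Pages needed to cache files of the given sizes."""
    if not tail_packing:
        return sum(math.ceil(s / PAGE) for s in sizes)
    full = sum(s // PAGE for s in sizes)        # whole pages, unavoidable
    tails = [s % PAGE for s in sizes if s % PAGE]
    bins = []                                   # free space left per shared page
    for t in sorted(tails, reverse=True):       # first-fit decreasing packing
        for i, free in enumerate(bins):
            if t <= free:
                bins[i] -= t
                break
        else:
            bins.append(PAGE - t)
    return full + len(bins)

sizes = [1000, 3000, 5000, 9000, 200]
print(cache_pages(sizes, False), cache_pages(sizes, True))  # → 6 3
```

The savings grow with the page size, which is why tail-packing pairs naturally with larger pages: it claws back exactly the fragmentation they introduce.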

Rene.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [email protected]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
