Re: [PATCH][RFC] 4K stacks default, not a debug thing any more...?

On Wed, Jul 18, 2007 at 04:38:19AM +0200, Rene Herman wrote:
> On 07/17/2007 01:27 AM, Matt Mackall wrote:
> 
> >Larger soft pages waste tremendous amounts of memory (mostly in page
> >cache) for minimal benefit on, say, the typical desktop. While there
> >are workloads where it's a win, it's probably on a small percentage of
> >machines.
> >
> >So it's absolutely no help in fixing our order-1 allocation problem
> >because we don't want to force large pages on people.
> 
> I was just now looking at how much space is in fact wasted in pagecache for 
> various pagesizes by running the attached dumb little program from a few 
> selected directories (heavy stack recursion, never mind).
> 
> Well, hmmm. This is on a (compiled) git tree:
> 
> rene@7ixe4:~/src/linux/local$ pageslack
> total	: 447350347
>  4k	: 67738037 (15%)
>  8k	: 147814837 (33%)
> 16k	: 324614581 (72%)
> 32k	: 724629941 (161%)
> 64k	: 1592785333 (356%)
> 
> A nicely constant factor of 2.2 per doubling instead of the 2 one would
> expect, but oh well.
> On a collection of larger files the percentages obviously drop. This is on 
> a directory of ogg vorbis files:
> 
> root@7ixe4:/mnt/ogg/.../... # pageslack
> total	: 70817974
>  4k	: 26442 (0%)
>  8k	: 67402 (0%)
> 16k	: 124746 (0%)
> 32k	: 288586 (0%)
> 64k	: 419658 (0%)
> 
> The "typical desktop" is represented by neither, I guess, but does involve 
> audio and (much larger still) video and bloody huge browser apps.

I'd be surprised if a user had substantially more than one OGG, video,
or browser in memory at one time. In fact, you're likely to find only
a fraction of each of those in memory at any given time.

Meanwhile, they're likely to have thousands of small browser cache,
thumbnail, config, icon, maildir, etc. files in cache. And hundreds of
medium-sized libraries, utilities, applications, and so on.

You can expect the distribution of file sizes to follow a gamma
distribution, with a large hump towards the small end of the spectrum
around 1-10K, dropping off very rapidly as file sizes grow.

> Not too sure then that 8K wouldn't be something I'd want, given fewer 
> pagefaults and all that...

Fewer minor pagefaults, perhaps. Readahead already deals with most of
the major pagefaults that larger pages would eliminate.

Anyway, raising the systemwide memory overhead by up to 15% seems an
awfully silly way to address the problem of not being able to allocate
a stack when you're down to your last 1 or 2% of memory! In all
likelihood, we'll fail sooner because we're completely OOM.

-- 
Mathematics is the supreme nostalgia of our time.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [email protected]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
