On Sun, Jul 22 2007, Peter Zijlstra wrote:
> On Sun, 2007-07-22 at 10:50 +0200, Jens Axboe wrote:
> > On Sun, Jul 22 2007, Peter Zijlstra wrote:
> > > On Sun, 2007-07-22 at 10:24 +0200, Jens Axboe wrote:
> > > > On Sat, Jul 21 2007, Peter Zijlstra wrote:
> > >
> > > > > +static __init int readahead_init(void)
> > > > > +{
> > > > > + /*
> > > > > + * Scale the max readahead window with system memory
> > > > > + *
> > > > > + * 64M: 128K
> > > > > + * 128M: 180K
> > > > > + * 256M: 256K
> > > > > + * 512M: 360K
> > > > > + * 1G: 512K
> > > > > + * 2G: 724K
> > > > > + * 4G: 1024K
> > > > > + * 8G: 1448K
> > > > > + * 16G: 2048K
> > > > > + */
> > > > > + ra_pages = int_sqrt(totalram_pages/16);
> > > > > + if (ra_pages > (2 << (20 - PAGE_SHIFT)))
> > > > > + ra_pages = 2 << (20 - PAGE_SHIFT);
> > > > > +
> > > > > + return 0;
> > > > > +}
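(For reference, assuming 4k pages: 1G is totalram_pages = 262144,
262144/16 = 16384, and int_sqrt(16384) = 128 pages = 512K, which is how
the table above was derived. The 2 << (20 - PAGE_SHIFT) clamp is 512
pages, ie 2048K.)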
> > > >
> > > > How did you come up with these numbers?
> > >
> > > Well, most other places in the kernel where we scale by memory size we
> > > use a sqrt curve, and the specific scale was the result of some
> > > fiddling; these numbers looked sane to me, nothing special.
> > >
> > > Would you suggest a different set, and if so, do you have any rationale
> > > for them?
> >
> > I just wish you had a rationale behind them; I don't think it's that
> > great of a series.
>
> Well, I was quite ignorant of the issues you just pointed out. Thanks,
> those do indeed provide a basis for a more solid set.
>
> > I agree with the low point of 128k.
>
> Perhaps that should be enforced then, because currently a system with
> <64M will get less.
I think it should remain the low point.
> > Then it'd be sane
> > to try and determine what the upper limit of ra window size goodness is,
> > which is probably impossible since it depends on the hardware a lot. But
> > let's just say the upper value is 2mb, then I think it's pretty silly
> > _not_ to use 2mb on a 1g machine for instance. So more aggressive
> > scaling.
>
> Right, I was being a little conservative here.
>
> > Then there's the relationship between nr of requests and ra size. When
> > you leave everything up to a simple sqrt of total_ram type thing, then
> > you are sure to hit stupid values that cause a queue size of a number of
> > full requests, plus a small one at the end. Clearly not optimal!
>
> And this is where Wu's point about a power-of-two series comes into
> play, right?
>
> So something like:
>
> roundup_pow_of_two(int_sqrt((totalram_pages << (PAGE_SHIFT-10)) / 4))
>
>
> memory in MB RA window in KB
> 64 128
> 128 256
> 256 256
> 512 512
> 1024 512
> 2048 1024
> 4096 1024
> 8192 2048
> 16384 2048
> 32768 4096
> 65536 4096
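
FWIW, a quick userspace sketch to sanity check that curve (illustrative
only; sqrt() and a naive loop stand in for the kernel's int_sqrt() and
roundup_pow_of_two(), build with -lm):

#include <stdio.h>
#include <math.h>

/* naive stand-in for the kernel's roundup_pow_of_two() */
static unsigned long roundup_p2(unsigned long x)
{
	unsigned long r = 1;

	while (r < x)
		r <<= 1;
	return r;
}

int main(void)
{
	unsigned long mb;

	for (mb = 64; mb <= 65536; mb *= 2) {
		unsigned long kb = mb << 10;
		/* sqrt(mem-in-kb / 4), rounded up to a power of two */
		unsigned long ra = roundup_p2((unsigned long)sqrt((double)(kb / 4)));

		printf("%8lu MB -> %6lu KB\n", mb, ra);
	}
	return 0;
}

which reproduces the table above.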
Only if you assume that the max request size is always a power of 2. That's
usually true, but there are cases where it's 124kb, for instance.
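To illustrate what I mean (purely a sketch, ra_align() is made up and
not actual block layer code), you'd want to clip the window to a whole
number of max-sized requests:

/*
 * Sketch only: trim the readahead window down to a multiple of the
 * actual max request size, so we never end up issuing N full
 * requests plus one stray small one at the end.
 */
static unsigned long ra_align(unsigned long ra_kb, unsigned long max_req_kb)
{
	if (ra_kb <= max_req_kb)
		return ra_kb;
	return (ra_kb / max_req_kb) * max_req_kb;
}

So with a 124kb max request size and a 512kb window, you'd use 496kb
instead of 4 full requests plus a 16kb tail.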
And there's still an issue when max_sectors isn't the deciding factor,
if we end up having to stop merging onto a request because we hit other
limitations.
So there's definitely room for improvement! And that holds even today,
btw; it's not all down to these changes.
--
Jens Axboe