Re: lmbench ctxsw regression with CFS

On Thu, Aug 02, 2007 at 05:44:47PM +0200, Ingo Molnar wrote:
> 
> * Nick Piggin <[email protected]> wrote:
> 
> > > > > One thing to check out is whether the lmbench numbers are 
> > > > > "correct". Especially on SMP systems, the lmbench numbers are 
> > > > > actually *best* when the two processes run on the same CPU, even 
> > > > > though that's not really at all the best scheduling - it's just 
> > > > > that it artificially improves lmbench numbers because of the 
> > > > > close cache affinity for the pipe data structures.
> > > > 
> > > > Yes, I bound them to a single core.
> > > 
> > > could you send me the .config you used?
> > 
> > Sure, attached...
> > 
> > You don't see a regression? If not, then can you send me the .config 
> > you used? [...]
> 
> i used your config to get a few numbers and to see what happens. Here 
> are the numbers from 10 consecutive "lat_ctx -s 0 2" runs:
> 
>                         [ time in micro-seconds, smaller is better ]
> 
>         v2.6.22         v2.6.23-git          v2.6.23-git+const-param
>         -------         -----------          -----------------------
>          1.30              1.60                       1.19
>          1.30              1.36                       1.18
>          1.14              1.50                       1.01
>          1.26              1.27                       1.23
>          1.22              1.40                       1.04
>          1.13              1.34                       1.09
>          1.27              1.39                       1.05
>          1.20              1.30                       1.16
>          1.20              1.17                       1.16
>          1.25              1.33                       1.01
>        -------------------------------------------------------------
>   avg:   1.22              1.36 (+11.3%)              1.11 (-10.3%)
>   min:   1.13              1.17 ( +3.5%)              1.01 (-11.8%)
>   max:   1.27              1.60 (+26.0%)              1.23 ( -3.2%)
> 
> one reason for the extra overhead is the current tunability of CFS, but 
> that is not fundamental: it's caused by the many knobs that CFS has at 
> the moment. The const-tuning patch (attached below; results are in the 
> rightmost column) changes those knobs to constants, allowing the 
> compiler to optimize the math better and reduce code size. (the code 
> movement in the patch makes up for most of its size; otherwise the 
> change it makes is simple.)

[...]
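
(As an aside on methodology: here is a minimal sketch, my illustration rather
than the poster's actual setup, of how one might pin the benchmark to a single
core as discussed above, using sched_setaffinity(2) before exec'ing lat_ctx.
Children inherit the affinity mask, so both lat_ctx processes stay on one CPU.)

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        cpu_set_t mask;

        CPU_ZERO(&mask);
        CPU_SET(0, &mask);                /* restrict ourselves to CPU 0 */
        if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
            perror("sched_setaffinity");
            return 1;
        }

        /* children inherit the affinity mask, so both lat_ctx tasks
         * stay on the same core */
        execlp("lat_ctx", "lat_ctx", "-s", "0", "2", (char *)NULL);
        perror("execlp");
        return 1;
    }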

Oh good. Thanks for getting to the bottom of it. We have normally
disliked having too many runtime tunables in the scheduler, so I assume
these are mostly going away, or moving under a CONFIG option, for
2.6.23? Or...?
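
(For context, a minimal sketch of the constant-vs-tunable idea described
above; the knob name and value are illustrative, not taken from the actual
patch. With a compile-time constant the compiler can strength-reduce the
division and drop the memory load:)

    #include <stdio.h>

    #ifdef TUNABLE_KNOBS
    unsigned int sched_granularity_ns = 2000000;        /* runtime-tunable knob */
    #else
    static const unsigned int sched_granularity_ns = 2000000;  /* constant */
    #endif

    static unsigned long slices_in(unsigned long period_ns)
    {
        /* with a constant divisor the compiler can turn this division
         * into a multiply/shift instead of a real div */
        return period_ns / sched_granularity_ns;
    }

    int main(void)
    {
        printf("%lu\n", slices_in(10000000UL));
        return 0;
    }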

What CPU did you get these numbers on? Do the indirect calls hurt
much on CPUs without an indirect-branch predictor? (I'll try running
some tests.)

I must say I don't like the indirect calls a great deal; they could be
eliminated with just a couple of branches and direct calls.
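
(To illustrate the alternative being suggested: a small sketch of my own,
not an existing kernel interface, contrasting dispatch through a function
pointer with one ordinary conditional branch plus direct calls, which the
compiler is also free to inline:)

    #include <stdio.h>

    struct task { int prio; };

    /* dispatch through a per-class ops table: an indirect call */
    struct class_ops {
        void (*enqueue)(struct task *t);
    };

    static void enqueue_fair(struct task *t) { printf("fair: %d\n", t->prio); }
    static void enqueue_rt(struct task *t)   { printf("rt:   %d\n", t->prio); }

    static const struct class_ops fair_ops = { .enqueue = enqueue_fair };

    /* the alternative: one predictable conditional branch, then direct calls */
    enum sched_class_id { CLASS_FAIR, CLASS_RT };

    static void enqueue_direct(enum sched_class_id id, struct task *t)
    {
        if (id == CLASS_FAIR)
            enqueue_fair(t);
        else
            enqueue_rt(t);
    }

    int main(void)
    {
        struct task t = { .prio = 120 };

        fair_ops.enqueue(&t);            /* indirect call */
        enqueue_direct(CLASS_RT, &t);    /* branch + direct call */
        return 0;
    }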