Re: -mm seems significantly slower than mainline on kernbench

Nick Piggin wrote:
> Peter Williams wrote:
>> Martin Bligh wrote:
>>> Andy Whitcroft wrote:
>>>> Andy Whitcroft wrote:
>>>>> Peter Williams wrote:
>>>>>>
>>>>>> Attached is a new patch to fix the excessive idle problem.  This
>>>>>> patch takes a new approach to the problem as it was becoming
>>>>>> obvious that trying to alter the load balancing code to cope with
>>>>>> biased load was harder than it seemed.
>>>>>
>>>>> Ok.  Tried testing different-approach-to-smp-nice-problem against
>>>>> the transition release 2.6.14-rc2-mm1 but it doesn't apply.  Am
>>>>> testing against 2.6.15-mm3 right now.  Will let you know.
>>>>
>>>> Doesn't appear to help if I am analysing the graphs right.  Martin?
>>>
>>> Nope, still broken.
>>
>> Interesting.  The only real difference between this and Con's original
>> patch is the stuff that he did in source_load() and target_load() to
>> nobble the bias when nr_running is 1 or less.  With this new model it
>> should be possible to do something similar in those functions, but I'll
>> hold off doing anything until a comparison against 2.6.15-mm3 with the
>> patch removed is available (as there are other scheduler changes in
>> -mm3).
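
For illustration, a minimal stand-alone sketch (ordinary user-space C,
with made-up struct and field names such as biased_load) of the kind of
fallback described above: report the plain nr_running-based load when a
runqueue holds at most one task, and the weighted load otherwise.  It is
a sketch of the idea only, not the code from either patch.

#include <stdio.h>

#define SCHED_LOAD_SCALE 128UL

struct rq {
        unsigned long nr_running;   /* tasks on the runqueue */
        unsigned long biased_load;  /* sum of per-task (nice-biased) weights */
};

/* Load figure used when deciding whether to pull tasks from this runqueue. */
static unsigned long source_load(const struct rq *rq)
{
        /*
         * With 0 or 1 runnable tasks there is nothing for the nice bias
         * to balance, so fall back to the plain nr_running-based load
         * rather than the weighted one.
         */
        if (rq->nr_running <= 1)
                return rq->nr_running * SCHED_LOAD_SCALE;

        return rq->biased_load * SCHED_LOAD_SCALE;
}

int main(void)
{
        struct rq busy = { .nr_running = 3, .biased_load = 5 };
        struct rq lone = { .nr_running = 1, .biased_load = 2 };

        printf("busy: %lu, lone: %lu\n",
               source_load(&busy), source_load(&lone));
        return 0;
}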


> Ideally, balancing should be completely unaffected when all tasks are
> of priority 0 which is what I thought yours did, and why I think the
> current system is not great.

This latest change should make that happen, as the weight for nice=0 tasks is unity.
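
For illustration, a small hypothetical sketch of what "unity weight for
nice=0" means, assuming NICE_TO_BIAS_PRIO(nice) is simply 20 - (nice)
(only the nice=0 value of 20 is stated in this thread; the macro body and
the normalisation below are assumptions):

#include <stdio.h>

#define NICE_TO_BIAS_PRIO(nice) (20 - (nice))

int main(void)
{
        static const int nices[] = { -20, -10, 0, 10, 19 };
        size_t i;

        for (i = 0; i < sizeof(nices) / sizeof(nices[0]); i++) {
                int nice = nices[i];
                /* Relative weight, normalised so that nice == 0 is exactly 1. */
                double weight = (double)NICE_TO_BIAS_PRIO(nice) /
                                NICE_TO_BIAS_PRIO(0);

                printf("nice %3d -> bias %2d, relative weight %.2f\n",
                       nice, NICE_TO_BIAS_PRIO(nice), weight);
        }
        return 0;
}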


> I'll probably end up taking a look at it one day, if it doesn't get fixed.
> I think your patch is pretty close

I thought that it was there :-).

I figured using the weights (which go away for nice=0 tasks) would make it behave nicely with the rest of the load balancing code.
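
For illustration, a small sketch of that point, with made-up names: when
every task's weight is 1 (the nice=0 case) the summed biased load of a
runqueue is simply nr_running, i.e. exactly the figure the unpatched
balancing code already uses.

#include <assert.h>
#include <stdio.h>

struct task {
        unsigned int weight;    /* assumed to be 1 for nice == 0 */
};

static unsigned long biased_load(const struct task *tasks, unsigned long n)
{
        unsigned long load = 0, i;

        for (i = 0; i < n; i++)
                load += tasks[i].weight;
        return load;
}

int main(void)
{
        struct task tasks[4] = { {1}, {1}, {1}, {1} };  /* all nice == 0 */
        unsigned long nr_running = 4;

        /* Identical to the unbiased figure the pre-patch code used. */
        assert(biased_load(tasks, nr_running) == nr_running);
        printf("biased load = %lu, nr_running = %lu\n",
               biased_load(tasks, nr_running), nr_running);
        return 0;
}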

> but I didn't quite look close enough to
> work out what's going wrong.

My testing (albeit on an old 2-CPU Celeron) showed no statistically significant difference between runs with the patch and runs without it.  If you ignored the standard deviations and statistical practice and just looked at the raw numbers, you'd think it was better with the patch than without it. :-)

I assume that Andy Whitcroft is doing a kernbench run with the patch removed from 2.6.15-mm3 (otherwise why would he ask for a patch to do that), and I'm waiting to see how that compares with the run he did with it in.  There were other scheduling changes in 2.6.15-mm3, so I think this comparison is needed in order to be sure that any degradation is still due to my patch.

Peter
PS: As the load balancing maintainer, can you say whether the value 128 is set in cement for SCHED_LOAD_SCALE?  The reason I ask is that, if it were changed to a multiple of NICE_TO_BIAS_PRIO(0) (i.e. 20), my modification could be made slightly more efficient.
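
For illustration, a worked example of the arithmetic behind this,
assuming the conversion from bias units to load units is
bias * SCHED_LOAD_SCALE / NICE_TO_BIAS_PRIO(0) (the expression actually
used in the patch may differ): with 128 the conversion needs a multiply
and a truncating division by 20, whereas a scale that is a multiple of
20, e.g. 120, reduces it to a single exact multiply (by 6).

#include <stdio.h>

#define SCHED_LOAD_SCALE        128UL   /* current value */
#define ALT_LOAD_SCALE          120UL   /* hypothetical multiple of 20 */
#define NICE_0_BIAS             20UL    /* NICE_TO_BIAS_PRIO(0) */

int main(void)
{
        /* 19 would be the bias of one nice=1 task under the mapping
         * assumed earlier; 20, 40 and 59 are other example bias totals. */
        static const unsigned long biases[] = { 19, 20, 40, 59 };
        size_t i;

        for (i = 0; i < sizeof(biases) / sizeof(biases[0]); i++) {
                unsigned long b = biases[i];

                /* 128/20 is not an integer, so the first form truncates for
                 * biases that are not multiples of 20; 120/20 == 6, so the
                 * second form is an exact constant multiply. */
                printf("bias %2lu -> load %3lu (scale 128) vs %3lu (scale 120)\n",
                       b,
                       b * SCHED_LOAD_SCALE / NICE_0_BIAS,
                       b * ALT_LOAD_SCALE / NICE_0_BIAS);
        }
        return 0;
}
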
--
Peter Williams                                   [email protected]

"Learning, n. The kind of ignorance distinguishing the studious."
 -- Ambrose Bierce
