Re: -mm seems significantly slower than mainline on kernbench

But I was thinking more about the code that (in the original) handled the case where the number of tasks to be moved was greater than 0 but less than 1, i.e. the cases where "imbalance" would have been reduced to zero when divided by SCHED_LOAD_SCALE. I think I got that part wrong: you can end up with a bias load to be moved that is smaller than the bias_prio of every queued task (in circumstances where the original code would have rounded up to 1 and caused a move), so no task is eligible to be moved. I think the way to handle this problem is to replace the 1 in that logic with the "average bias prio". That would guarantee at least one task with a bias_prio small enough to be moved.

I think this analysis is a strong argument that my original patch is the cause of the problem, so I'll go ahead and generate a fix. I'll try to have a patch available later this morning.


Attached is a patch that addresses this problem. Unlike the description above, it does not use the "average bias prio", as that solution would be quite complicated. Instead it assumes that NICE_TO_BIAS_PRIO(0) is "good enough" for this purpose: it is highly likely to be the median bias_prio, and the median is probably better suited here than the average.

Signed-off-by: Peter Williams <[email protected]>

Doesn't fix the perf issue.

M.


Peter


------------------------------------------------------------------------

Index: MM-2.6.X/kernel/sched.c
===================================================================
--- MM-2.6.X.orig/kernel/sched.c	2006-01-12 09:23:38.000000000 +1100
+++ MM-2.6.X/kernel/sched.c	2006-01-12 10:44:50.000000000 +1100
@@ -2116,11 +2116,11 @@ find_busiest_group(struct sched_domain *
 				(avg_load - this_load) * this->cpu_power)
 			/ SCHED_LOAD_SCALE;
-	if (*imbalance < SCHED_LOAD_SCALE) {
+	if (*imbalance < NICE_TO_BIAS_PRIO(0) * SCHED_LOAD_SCALE) {
 		unsigned long pwr_now = 0, pwr_move = 0;
 		unsigned long tmp;
-	if (max_load - this_load >= SCHED_LOAD_SCALE*2) {
+		if (max_load - this_load >= NICE_TO_BIAS_PRIO(0) * SCHED_LOAD_SCALE*2) {
 			*imbalance = NICE_TO_BIAS_PRIO(0);
 			return busiest;
 		}

