[PATCH] sched: improve stability of smpnice load balancing

Problem:

Due to an injudicious piece of code near the end of find_busiest_group(), smpnice load balancing is too aggressive, resulting in excessive movement of tasks from one CPU to another.

Solution:

Remove the offending code. The thinking that caused it to be included became invalid when find_busiest_queue() was modified to use the average load per task (on the relevant run queue), instead of SCHED_LOAD_SCALE, when evaluating small imbalance values to decide whether they warrant moving any tasks.

Signed-off-by: Peter Williams <[email protected]>

Peter
--
Peter Williams                                   [email protected]

"Learning, n. The kind of ignorance distinguishing the studious."
 -- Ambrose Bierce
Index: MM-2.6.X/kernel/sched.c
===================================================================
--- MM-2.6.X.orig/kernel/sched.c	2006-03-29 16:18:37.000000000 +1100
+++ MM-2.6.X/kernel/sched.c	2006-03-29 16:20:37.000000000 +1100
@@ -2290,13 +2290,10 @@ find_busiest_group(struct sched_domain *
 		pwr_move /= SCHED_LOAD_SCALE;
 
 		/* Move if we gain throughput */
-		if (pwr_move > pwr_now)
-			*imbalance = busiest_load_per_task;
-		/* or if there's a reasonable chance that *imbalance is big
-		 * enough to cause a move
-		 */
-		 else if (*imbalance <= busiest_load_per_task / 2)
+		if (pwr_move <= pwr_now)
 			goto out_balanced;
+
+		*imbalance = busiest_load_per_task;
 	}
 
 	return busiest;
