Problem:
The current implementation of find_busiest_group() recognizes that
approximately equal average loads per task for each group/queue are
desirable (e.g. this condition increases the probability that the
top N highest priority tasks on an N CPU system will be on different
CPUs), and it pursues this by being slightly more aggressive when
*imbalance is small but the average load per task in the "busiest"
group is greater than that in "this" group.  Unfortunately, the amount
moved from "busiest" to "this" is too small to reduce the average load
per task on "busiest": at best there will be no change, and at worst
the average will get bigger.
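
As a toy illustration of the problem (hypothetical numbers, not code
from the patch): suppose "busiest" holds two tasks with weighted loads
3 and 1, so its average load per task is 2.  If the small *imbalance
only allows the weight 1 task to be moved, the average of what remains
rises to 3:

	/* stand-alone sketch; compile with any C compiler */
	#include <stdio.h>

	int main(void)
	{
		unsigned long load[] = { 3, 1 };  /* two tasks on "busiest" */
		unsigned long total = load[0] + load[1];

		printf("avg before: %lu\n", total / 2);            /* 2 */
		printf("avg after moving the lighter task: %lu\n",
		       (total - load[1]) / 1);                     /* 3 */
		return 0;
	}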
Solution:
Increase the amount of load moved from "busiest" to "this" in these
circumstances, while making sure that the amount moved will not
increase the (absolute) difference between the two groups' total
weighted loads.  For the average on "busiest" to fall, a task with a
weighted load greater than that average must be moved.
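
To see why the new amount is safe (a stand-alone sketch with a
hypothetical helper name, not part of the patch): with
dld = max_load - this_load and avg = busiest_load_per_task, the new
branch only fires when dld > avg, and moving (dld + avg) / 2 turns the
signed difference dld into roughly -avg:

	#include <assert.h>
	#include <stdlib.h>

	/* hypothetical helper mirroring the new branch's arithmetic */
	static unsigned long proposed_imbalance(unsigned long dld,
						unsigned long avg)
	{
		unsigned long amount;
		long new_diff;

		assert(dld > avg);         /* precondition of the new branch */
		amount = (dld + avg) / 2;
		new_diff = (long)dld - 2 * (long)amount;

		assert(amount >= avg);     /* big enough to cover a task whose
					    * weighted load exceeds the average */
		assert(labs(new_diff) <= (long)dld);  /* |diff| does not grow */
		return amount;
	}

E.g. dld = 10 and avg = 4 give amount = 7: the difference goes from 10
to -4, and a task of weight up to 7 (above the average of 4) can be
moved, which is what reduces the average load per task on "busiest".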
NB This makes no difference to load balancing for the case where all
tasks have nice==0: there all weighted loads per task are equal, so
"busiest" never has a larger average than "this", the new branch is
never taken, and the "else if" preserves the old imbn == 2 test.
Signed-off-by: Peter Williams <[email protected]>
--
Peter Williams [email protected]
"Learning, n. The kind of ignorance distinguishing the studious."
-- Ambrose Bierce
Index: MM-2.6.17-rc1-mm2/kernel/sched.c
===================================================================
--- MM-2.6.17-rc1-mm2.orig/kernel/sched.c 2006-04-10 10:46:53.000000000 +1000
+++ MM-2.6.17-rc1-mm2/kernel/sched.c 2006-04-10 14:16:32.000000000 +1000
@@ -2258,16 +2258,20 @@ find_busiest_group(struct sched_domain *
 	if (*imbalance < busiest_load_per_task) {
 		unsigned long pwr_now = 0, pwr_move = 0;
 		unsigned long tmp;
-		unsigned int imbn = 2;
 
-		if (this_nr_running) {
+		if (this_nr_running)
 			this_load_per_task /= this_nr_running;
-			if (busiest_load_per_task > this_load_per_task)
-				imbn = 1;
-		} else
+		else
 			this_load_per_task = SCHED_LOAD_SCALE;
 
-		if (max_load - this_load >= busiest_load_per_task * imbn) {
+		if (busiest_load_per_task > this_load_per_task) {
+			unsigned long dld = max_load - this_load;
+
+			if (dld > busiest_load_per_task) {
+				*imbalance = (dld + busiest_load_per_task) / 2;
+				return busiest;
+			}
+		} else if (max_load - this_load >= busiest_load_per_task * 2) {
 			*imbalance = busiest_load_per_task;
 			return busiest;
 		}