Peter Williams wrote:
Peter Williams wrote:
Martin Bligh wrote:
But I was thinking more about the code that (in the original)
handled the case where the number of tasks to be moved was less
than 1 but more than 0 (i.e. the cases where "imbalance" would have
been reduced to zero when divided by SCHED_LOAD_SCALE). I think
I got that part wrong: you can end up with a bias load to
be moved that is smaller than the bias_prio value of any
queued task (in circumstances where the original code would have
rounded up to 1 and caused a move). I think that the way to handle
this problem is to replace 1 with "average bias prio" within that
logic. This would guarantee that at least one task has a bias_prio
small enough to be moved.
I think that this analysis is a strong argument for my original
patch being the cause of the problem so I'll go ahead and generate
a fix. I'll try to have a patch available later this morning.
Attached is a patch that addresses this problem. Unlike the
description above it does not use "average bias prio", as that
solution would be very complicated. Instead it makes the assumption
that NICE_TO_BIAS_PRIO(0) is "good enough" for this purpose, as
it is highly likely to be the median bias_prio and the median is
probably better for this purpose than the average.
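As a rough sketch of the idea (purely illustrative: the helper name and the
value of NICE_TO_BIAS_PRIO(0) are assumptions for illustration, not taken
from the attached patch), the minimum imbalance that used to mean "one task"
becomes "one nice==0 task's worth of bias":

/*
 * Illustration only: where the old code rounded a small but non-zero
 * imbalance up to 1 (one task), a bias_prio-based version should round
 * up to the bias of a typical (nice==0) task, so that at least one
 * queued task has a bias_prio small enough to be moved.
 */
#define NICE_TO_BIAS_PRIO(nice)	(20 - (nice))	/* assumed mapping */

static unsigned long min_biased_imbalance(unsigned long imbalance)
{
	if (imbalance > 0 && imbalance < NICE_TO_BIAS_PRIO(0))
		return NICE_TO_BIAS_PRIO(0);	/* was "1" in task units */
	return imbalance;
}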
Signed-off-by: Peter Williams <[email protected]>
Doesn't fix the perf issue.
OK, thanks. I think there are a few more places where SCHED_LOAD_SCALE
needs to be multiplied by NICE_TO_BIAS_PRIO(0). Basically, anywhere
that it's added to, subtracted from or compared to a load. In those
cases it's being used as a scaled version of 1 and we need a scaled
version of NICE_TO_BIAS_PRIO(0).
(This would have been better said as "the load generated by 1 task"
rather than just "a scaled version of 1". Numerically, they're the same,
but one is clearer than the other and makes it more obvious why we need
NICE_TO_BIAS_PRIO(0) * SCHED_LOAD_SCALE and where we need it.)
I'll have another patch later today. I'm just testing this at the moment.
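To make that concrete, here is a hedged sketch (the constant values and the
helper name are assumptions for illustration, not taken from the patch) of
what "the load generated by 1 task" becomes once loads are kept in bias_prio
units:

/*
 * Illustration only: assumed values, SCHED_LOAD_SCALE = 128 and
 * NICE_TO_BIAS_PRIO(0) = 20.  Anywhere the old code added, subtracted
 * or compared SCHED_LOAD_SCALE against a load, the biased-load code
 * needs the scaled bias of one nice==0 task instead.
 */
#define SCHED_LOAD_SCALE	128UL
#define NICE_TO_BIAS_PRIO(nice)	(20 - (nice))

/* the biased-load equivalent of "one task's worth of load" */
#define ONE_TASK_BIASED_LOAD	(NICE_TO_BIAS_PRIO(0) * SCHED_LOAD_SCALE)

/* e.g. a test that used to be "tl <= SCHED_LOAD_SCALE" would become: */
static inline int within_one_task_load(unsigned long tl)
{
	return tl <= ONE_TASK_BIASED_LOAD;
}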
Attached is a new patch to fix the excessive idle problem. This patch
takes a new approach to the problem as it was becoming obvious that
trying to alter the load balancing code to cope with biased load was
harder than it seemed.
This approach reverts to the old load values but weights them according
to tasks' bias_prio values. This means that any assumptions by the load
balancing code that the load generated by a single task is
SCHED_LOAD_SCALE will still hold. Then, in find_busiest_group(), the
imbalance is scaled back up to bias_prio scale so that move_tasks() can
move biased load rather than tasks.
One advantage of this is that, when there are no tasks with non-zero nice,
the processing is mathematically the same as the original code.
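The two conversions involved are visible in the patch below; as a standalone
sketch (the function names are illustrative and the constant values are
assumptions), they amount to:

/*
 * Sketch of the two conversions described above.  Assumed values:
 * SCHED_LOAD_SCALE = 128, NICE_TO_BIAS_PRIO(0) = 20.
 */
#define SCHED_LOAD_SCALE	128UL
#define NICE_TO_BIAS_PRIO(nice)	(20 - (nice))

/*
 * Weight a runqueue's summed bias_prio back into the old load units so
 * that one nice==0 task still contributes exactly SCHED_LOAD_SCALE and
 * the load balancer's assumptions continue to hold.
 */
static unsigned long weighted_cpu_load(unsigned long prio_bias)
{
	return (prio_bias * SCHED_LOAD_SCALE) / NICE_TO_BIAS_PRIO(0);
}

/*
 * Scale the imbalance computed by find_busiest_group() back up into
 * bias_prio units so that move_tasks() can move biased load rather
 * than a count of tasks.
 */
static unsigned long biased_imbalance(unsigned long imbalance)
{
	return (imbalance * NICE_TO_BIAS_PRIO(0)) / SCHED_LOAD_SCALE;
}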
Kernbench results from a 2 CPU Celeron 550MHz system are:
Average Optimal -j 8 Load Run:
Elapsed Time 1056.16 (0.831102)
User Time 1906.54 (1.38447)
System Time 182.086 (0.973386)
Percent CPU 197 (0)
Context Switches 48727.2 (249.351)
Sleeps 27623.4 (413.913)
This indicates that, on average, 98.5% of the total available CPU
(197 out of a possible 200 Percent CPU on a 2 CPU system) was
used by the build.
Signed-off-by: Peter Williams <[email protected]>
BTW I think that we need to think about a slightly more complex nice to
bias mapping function. The current one gives a nice==19 task 1/20 of the
bias of a nice==0 task, but gives a nice==-20 task only twice the bias of a
nice==0 task. I don't think this is a big problem, as the majority of non
nice==0 tasks will have positive nice, but it should be looked at as a
future enhancement.
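For reference, the kind of linear mapping that produces those ratios looks
like the following (the exact macro is an assumption inferred from the ratios
above, not quoted from the patch):

/*
 * Assumed linear nice-to-bias mapping consistent with the ratios above:
 * nice 0 -> 20, nice 19 -> 1 (1/20 of nice 0), nice -20 -> 40 (only 2x
 * nice 0), which is why the negative-nice end of the range is compressed.
 */
#define NICE_TO_BIAS_PRIO(nice)	(20 - (nice))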
Peter
--
Peter Williams [email protected]
"Learning, n. The kind of ignorance distinguishing the studious."
-- Ambrose Bierce
Index: MM-2.6.X/kernel/sched.c
===================================================================
--- MM-2.6.X.orig/kernel/sched.c 2006-01-13 14:53:34.000000000 +1100
+++ MM-2.6.X/kernel/sched.c 2006-01-13 15:11:19.000000000 +1100
@@ -1042,7 +1042,8 @@ void kick_process(task_t *p)
static unsigned long source_load(int cpu, int type)
{
runqueue_t *rq = cpu_rq(cpu);
- unsigned long load_now = rq->prio_bias * SCHED_LOAD_SCALE;
+ unsigned long load_now = (rq->prio_bias * SCHED_LOAD_SCALE) /
+ NICE_TO_BIAS_PRIO(0);
if (type == 0)
return load_now;
@@ -1056,7 +1057,8 @@ static unsigned long source_load(int cpu
static inline unsigned long target_load(int cpu, int type)
{
runqueue_t *rq = cpu_rq(cpu);
- unsigned long load_now = rq->prio_bias * SCHED_LOAD_SCALE;
+ unsigned long load_now = (rq->prio_bias * SCHED_LOAD_SCALE) /
+ NICE_TO_BIAS_PRIO(0);
if (type == 0)
return load_now;
@@ -1322,7 +1324,8 @@ static int try_to_wake_up(task_t *p, uns
* of the current CPU:
*/
if (sync)
- tl -= p->bias_prio * SCHED_LOAD_SCALE;
+ tl -= (p->bias_prio * SCHED_LOAD_SCALE) /
+ NICE_TO_BIAS_PRIO(0);
if ((tl <= load &&
tl + target_load(cpu, idx) <= SCHED_LOAD_SCALE) ||
@@ -2159,7 +2162,7 @@ find_busiest_group(struct sched_domain *
}
/* Get rid of the scaling factor, rounding down as we divide */
- *imbalance = *imbalance / SCHED_LOAD_SCALE;
+ *imbalance = (*imbalance * NICE_TO_BIAS_PRIO(0)) / SCHED_LOAD_SCALE;
return busiest;
out_balanced:
@@ -2472,7 +2475,8 @@ static void rebalance_tick(int this_cpu,
struct sched_domain *sd;
int i;
- this_load = this_rq->prio_bias * SCHED_LOAD_SCALE;
+ this_load = (this_rq->prio_bias * SCHED_LOAD_SCALE) /
+ NICE_TO_BIAS_PRIO(0);
/* Update our load */
for (i = 0; i < 3; i++) {
unsigned long new_load = this_load;