Re: smpnice loadbalancing with high priority tasks

Siddha, Suresh B wrote:
> Peter,

> There are still issues which we need to address. These are surfacing
> because we are patching issue by issue (instead of addressing the root
> issue, which is that the presence of high priority tasks messes up load
> balancing of normal priority tasks).

> For example:

> a) On a simple 4-way MP system, if we have one high priority and 4
> normal priority tasks, with smpnice we would like to see the high
> priority task scheduled on one cpu, two other cpus getting one normal
> task each and the fourth cpu getting the remaining two normal tasks.
> But with smpnice that extra normal priority task keeps jumping from one
> cpu to another cpu that is running a normal priority task.

> This is because of the busiest_has_loaded_cpus/nr_loaded_cpus logic. We
> are not including the cpu with the high priority task in the max_load
> calculation but are including it in the total and avg_load calculations,
> leading to max_load < avg_load. Load balancing between the cpus running
> normal priority tasks (2 vs 1) will then always show an imbalance, and
> the extra normal priority task will keep moving from one cpu to another
> cpu that is running a normal priority task.

I can't see anything like this in the code. Can you send a patch to fix
what you think the problem is?

The effect you describe can be caused by other tasks running on the
system (see below for a fuller description).
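For anyone following the arithmetic behind point (a), here is a small
self-contained toy sketch of the averaging effect being claimed. The
identifiers and weighted-load values are invented purely for
illustration; this is not the real find_busiest_group() code, only the
shape of the calculation the quoted description is pointing at.

#include <stdio.h>

#define NR_CPUS 4

int main(void)
{
	/*
	 * Hypothetical weighted run queue loads: cpu0 carries the high
	 * priority task (large weight), cpu1 carries two normal tasks,
	 * cpu2 and cpu3 one normal task each.  All weights are made up.
	 */
	unsigned long cpu_load[NR_CPUS] = { 640, 256, 128, 128 };
	unsigned long total = 0, max_load = 0, avg_load;
	int i;

	for (i = 0; i < NR_CPUS; i++) {
		total += cpu_load[i];            /* high prio cpu counted here... */
		if (i != 0 && cpu_load[i] > max_load)
			max_load = cpu_load[i];  /* ...but skipped here */
	}
	avg_load = total / NR_CPUS;

	printf("max_load=%lu avg_load=%lu\n", max_load, avg_load);
	if (max_load < avg_load)
		printf("busiest normal cpu is below the inflated average;\n"
		       "the 2-vs-1 spread among cpu1..cpu3 keeps getting\n"
		       "\"fixed\" by migrating the spare normal task around.\n");
	return 0;
}

With these made-up numbers max_load (256) ends up below avg_load (288),
which is the condition the quoted description says causes the spare
normal task to be bounced between the normal priority cpus.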


> b) On a simple DP system, if we have two high priority and two normal
> priority tasks, ideally we should schedule one high and one normal
> priority task on each cpu. The current code doesn't find an imbalance
> if both of the normal priority tasks get scheduled on the same cpu
> (which is also running one of the high priority tasks).

This is one of my standard tests and it works for me. The only time the
two normal priority tasks end up on the same CPU during my tests is when
some other normal priority tasks (e.g. top, X.org) happen to be running
when load balancing occurs. This causes an imbalance, and tasks that
aren't actually running on a CPU at that moment get moved to fix it.
These are usually the test tasks because, being hard spinners, they have
a smaller interactive bonus than the other tasks and get preempted as a
result.


> There may not be benchmarks which expose these conditions, but I think
> we haven't addressed the corner cases well enough.

Both of the problems you describe above are probably caused by the fact
that there are other tasks (besides those in your tests) running on your
system. If they happen to be on a run queue at the time load balancing
is done, they will cause an imbalance to be detected that differs from
what you expect based on a simplistic view of the world that only
considers the test tasks. When this happens, tasks will get moved to
restore balance. This (in my opinion) is what you are seeing.
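To make that concrete, here is a toy load calculation for the DP case in
(b). The weights and variable names are invented and this is not the
real scheduler arithmetic; it only shows how a transiently runnable task
(top, X.org) can push one cpu over the detected average at the moment
balancing happens to run.

#include <stdio.h>

int main(void)
{
	/* invented weights: one high priority and one normal test task per cpu */
	unsigned long high = 320, normal = 128, transient = 128;

	unsigned long cpu0_quiet = high + normal;	/* ideal placement */
	unsigned long cpu1_quiet = high + normal;

	/* top happens to be runnable on cpu0 when the balancer looks */
	unsigned long cpu0_busy = cpu0_quiet + transient;
	unsigned long avg = (cpu0_busy + cpu1_quiet) / 2;

	printf("quiet: cpu0=%lu cpu1=%lu -> balanced, nothing moves\n",
	       cpu0_quiet, cpu1_quiet);
	printf("top runnable on cpu0: cpu0=%lu cpu1=%lu avg=%lu -> "
	       "cpu0 is %lu over the average\n",
	       cpu0_busy, cpu1_quiet, avg, cpu0_busy - avg);
	return 0;
}

A task therefore gets pulled off cpu0 even though the test tasks were
already placed ideally, and the easy victim is a preempted test spinner
rather than top itself. Once top sleeps again, that spinner is left on
the other cpu, which is how both normal priority tasks can end up
sharing a cpu.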

Peter
--
Peter Williams                                   [email protected]

"Learning, n. The kind of ignorance distinguishing the studious."
 -- Ambrose Bierce
