Siddha, Suresh B wrote:
> This time Ken Chen brought up this issue -- no, it has nothing to do with
> an industry db benchmark ;-)
>
> Even with the above-mentioned patch of Nick's in -mm, I see system
> livelocks if, for example, I have 7000 processes pinned onto one cpu
> (this is on the fastest 8-way system I have access to). I am sure there
> are other systems where this problem can be hit with an even smaller
> pin count.
Thanks for testing these patches in -mm, by the way.
> We tried to fix this issue, but as you know there is no good mechanism
> for fixing it without letting the regular paths know about it. Our
> proposed solution is appended; we tried to minimize the effect on the
> fast path. It builds on Nick's patch: once this situation is detected,
> no further move_tasks is attempted as long as the busiest cpu stays the
> same and the cpu affinity of the processes queued on it remains
> unchanged (tracked via the runqueue's "generation_num").
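If I read that right, the skip condition could be sketched roughly like this. This is only an illustration of the idea as described above; the names (rq_snap, last_failed, skip_move_tasks) are mine, not the actual patch's:

```c
#include <assert.h>

struct rq_snap {
	int cpu;                  /* which runqueue was busiest */
	unsigned long generation; /* bumped when the queue or affinity changes */
};

/* State remembered from the last all-pinned balance failure. */
static struct rq_snap last_failed = { -1, 0 };

/*
 * Return 1 if move_tasks() should be skipped: the busiest cpu and the
 * generation of its runqueue are unchanged since the last failure.
 * Otherwise remember the new snapshot and allow another attempt.
 */
static int skip_move_tasks(int busiest_cpu, unsigned long generation)
{
	if (last_failed.cpu == busiest_cpu &&
	    last_failed.generation == generation)
		return 1;
	last_failed.cpu = busiest_cpu;
	last_failed.generation = generation;
	return 0;
}
```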
7000 running processes pinned onto one CPU. I guess that isn't a
great deal :(
How important is this? Any application to real workloads? Even if
not, I agree it would be nice to improve this more. I don't know
if I really like this approach - I guess due to what it adds to
fastpaths.
Now presumably if the all_pinned logic is working properly in the
first place, and it is correctly causing balancing to back-off, you
could tweak that a bit to avoid livelocks? Perhaps the all_pinned
case should back off faster than the usual doubling of the interval,
and be allowed to exceed max_interval?
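Something along these lines, say, where the constants and names are purely illustrative and not taken from the actual scheduler code:

```c
#include <assert.h>

#define MAX_INTERVAL        128  /* normal cap on the balance interval */
#define PINNED_MAX_INTERVAL 1024 /* extended cap for the all-pinned case */

/*
 * Grow the balance interval after a failed attempt. In the all-pinned
 * case, back off faster than the usual doubling and allow the interval
 * to exceed the normal max_interval.
 */
static unsigned int next_interval(unsigned int interval, int all_pinned)
{
	if (all_pinned) {
		interval *= 4;
		if (interval > PINNED_MAX_INTERVAL)
			interval = PINNED_MAX_INTERVAL;
	} else {
		interval *= 2;
		if (interval > MAX_INTERVAL)
			interval = MAX_INTERVAL;
	}
	return interval;
}
```

The point being that the all-pinned path converges to a long quiet period quickly, so the balancer stops burning cycles on tasks it can never move.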
Any thoughts Ingo?
--
SUSE Labs, Novell Inc.
-