Siddha, Suresh B wrote:
> I don't think it is the problem with sched_balance_self(). sched_balance_self()
> probably is doing the right thing based on the load that is present at the
> time of fork/exec. Once the node-1 becomes idle, we expect the two threads
> on node-0 cpu-1 to get distributed between the two nodes.
That happens indeed. The problem is that the thread which gets
migrated from cpu1 (node0) to cpu3 (node1) ends up with the memory for
its working set allocated from node0, because it ran on node0 for a
short time. That is a noticeable performance hit on a NUMA system.
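
Here is a minimal sketch (not from the original mail) of the first-touch
effect I mean: the buffer is touched while the thread runs on one cpu,
the thread is then moved to a cpu assumed to sit on the other node, and
move_pages() is used only to query where the pages actually ended up.
The cpu numbers 1 and 3 match the example above but are assumptions;
adjust them for the actual topology. Build with: gcc -o firsttouch
firsttouch.c -lnuma

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sched.h>
#include <numaif.h>      /* move_pages() */

static void pin_to_cpu(int cpu)
{
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        if (sched_setaffinity(0, sizeof(set), &set))
                perror("sched_setaffinity");
}

int main(void)
{
        long pagesize = sysconf(_SC_PAGESIZE);
        size_t len = 16 * pagesize;
        void *pages[1];
        int status[1];
        char *buf;

        pin_to_cpu(1);                  /* assumed node-0 cpu */
        buf = aligned_alloc(pagesize, len);
        memset(buf, 0, len);            /* first touch: pages land on node 0 */

        pin_to_cpu(3);                  /* "migrate" to assumed node-1 cpu */

        pages[0] = buf;
        /* nodes == NULL: only report which node the page currently lives on */
        if (move_pages(0, 1, pages, NULL, status, 0) == 0)
                printf("page still lives on node %d after moving to cpu 3\n",
                       status[0]);
        else
                perror("move_pages");

        free(buf);
        return 0;
}

The memory stays where it was first touched, so the migrated thread does
all further accesses cross-node.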
I think the scheduler should try harder to spread the threads across
cpus in a way that lets them stay on their initial cpu, instead of
migrating them later on.
> In my opinion, this patch is not the correct fix for the issue.
Sure, it's a sort-of band-aid fix; that's why I'm trying to find
something better.
cheers,
Gerd
--
Gerd Hoffmann <[email protected]>
http://www.suse.de/~kraxel/julika-dora.jpeg