Ingo Molnar wrote:
* Jie Chen <[email protected]> wrote:
Since I am using the affinity flag to bind each thread to a different
core, the synchronization overhead should increase as the number of
cores/threads increases. But what we observed on the new kernel is the
opposite: the barrier overhead for two threads is 8.93 microseconds vs.
1.86 microseconds for 8 threads (on the old kernel it was 0.49 vs. 1.86).
This will confuse most people who study
synchronization/communication scalability. I know my test code is not a
real-world computation, which would usually use up all cores. I hope I have
explained myself clearly. Thank you very much.
btw., could you try not using the affinity mask and let the scheduler
manage the spreading of tasks? It generally has better knowledge of
how tasks interrelate.
Ingo
Hi, Ingo:
I just disabled the affinity mask and reran the test. There were no
significant changes for two threads (barrier overhead is around 9
microseconds). As for 8 threads, the barrier overhead actually drops a
little, which is good. Let me know whether I can be of any help. Thank you
very much.
--
###############################################
Jie Chen
Scientific Computing Group
Thomas Jefferson National Accelerator Facility
12000, Jefferson Ave.
Newport News, VA 23606
(757)269-5046 (office) (757)269-6248 (fax)
[email protected]
###############################################
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [email protected]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/