On 10/02/2010 02:41 PM, Alan Cox wrote:
> To be honest your bigger problem is finding enough application
> parallelism and enough parallel user space apps. That and memory or I/O
> bandwidth on servers.
>
> The kernel will run on supercomputers with over 1000 processors. Not all
> workloads are handled well at that scale, so if you threw 1000 random user
> instances on it you wouldn't get great results in a lot of cases.
>
> On a desktop it's instructive to measure how many processor cores ever end
> up running at once. About the only time an 8-core box seems to use all
> cores at once is compiling kernels.
>
> On a server you've got more chance, as you've often got a lot of work
> hitting the box from multiple sources, but in many cases the
> bottleneck ends up being I/O and memory bandwidth unless you've got a
> board with separate RAM hanging off all the CPUs and you spent real
> money on the I/O subsystem.
>
> This is actually one of the things that really hurts certain workloads.
> There are some that simply don't parallelise, and the move to many cores
> and to clusters has left them stuck.
>
> Alan

Yes, I agree. Most (if not all) open source apps are not implemented with hundreds or thousands of parallel threads. But many commercial applications are heavily multithreaded, and it is there that optimum locking granularity has substantial value.

I am wondering if we will ultimately end up splitting massively parallel architectures into specialized sets, where each set is responsible for some major operating system task, in order to reduce lock contention and head off the race conditions that can easily crop up when an optimum locking scheme is pursued. A small user-space sketch of the idea follows below.

Perhaps some things from the past will be the future. Remember the IO channel? :) :)
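To make the lock-contention point concrete, here is a minimal user-space sketch of the "specialized sets" idea, assuming POSIX threads. Instead of one global lock that every thread fights over, state is partitioned into shards, each guarded by its own mutex; each thread maps onto "its" shard, so contention drops roughly with the shard count. All of the names (NSHARDS, shard_t, worker) are hypothetical and illustrative only, not anything from the kernel or this thread.

/*
 * Sharded-lock sketch: partition state into NSHARDS "sets", each with
 * a private mutex, instead of serializing everything on one big lock.
 * Build with: cc -O2 -pthread shards.c
 */
#include <pthread.h>
#include <stdio.h>

#define NSHARDS   8          /* one "set" per group of CPUs, say */
#define NTHREADS  8
#define NITERS    1000000L

typedef struct {
    pthread_mutex_t lock;    /* private lock: contended only within this shard */
    long            counter; /* stand-in for per-set OS state */
    char            pad[64]; /* crude padding to keep shards off one cache line */
} shard_t;

static shard_t shards[NSHARDS];

static void *worker(void *arg)
{
    long id = (long)arg;
    shard_t *s = &shards[id % NSHARDS]; /* each thread works its own shard */

    for (long i = 0; i < NITERS; i++) {
        pthread_mutex_lock(&s->lock);   /* short critical section, low contention */
        s->counter++;
        pthread_mutex_unlock(&s->lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t tids[NTHREADS];

    for (int i = 0; i < NSHARDS; i++)
        pthread_mutex_init(&shards[i].lock, NULL);

    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&tids[i], NULL, worker, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tids[i], NULL);

    long total = 0;
    for (int i = 0; i < NSHARDS; i++)
        total += shards[i].counter;
    printf("total = %ld (expected %ld)\n", total, (long)NTHREADS * NITERS);
    return 0;
}

The kernel already follows the same instinct with its per-CPU data structures: give each processor its own copy of hot state and the shared lock drops out of the fast path entirely.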