To be honest, your bigger problem is finding enough application parallelism and enough parallel user-space apps. That, and memory or I/O bandwidth on servers.

The kernel will run on supercomputers with over 1000 processors, but not every workload is handled well at that scale, so if you threw 1000 random user instances at such a box you wouldn't get great results in a lot of cases.

On a desktop it's instructive to measure how many processor cores ever end up running at once (a rough way to do that is sketched at the end of this mail). About the only time an 8-core box seems to use all of its cores at once is when compiling kernels. On a server you've got a better chance, since you've often got a lot of work hitting the box from multiple sources, but in many of those cases the bottleneck ends up being I/O and memory bandwidth, unless you've got a board with separate RAM hanging off each CPU and you've spent real money on the I/O subsystem.

This is actually one of the things that really hurts certain workloads. Some simply don't parallelise, and the move to many cores and to clusters has left them stuck.

Alan
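For what it's worth, one rough way to measure that on a Linux box is to sample /proc/stat twice and count how many cores did real work in between. This is a minimal sketch of that idea, not anything from the original mail; the one-second interval and the 50% "busy" threshold are arbitrary choices.

#!/usr/bin/env python3
# Rough sketch: count how many CPU cores were meaningfully busy over a short
# interval by diffing per-core counters from /proc/stat (Linux only).
import time

def cpu_times():
    """Return {cpu_name: (idle_jiffies, total_jiffies)} from /proc/stat."""
    times = {}
    with open("/proc/stat") as f:
        for line in f:
            fields = line.split()
            # Per-core lines look like "cpu0 user nice system idle iowait irq ..."
            if fields[0].startswith("cpu") and fields[0] != "cpu":
                values = [int(v) for v in fields[1:]]
                idle = values[3] + values[4]        # idle + iowait
                total = sum(values[:8])             # user..steal; guest is folded into user/nice
                times[fields[0]] = (idle, total)
    return times

def busy_core_count(interval=1.0, threshold=0.5):
    """Count cores that were more than `threshold` busy over `interval` seconds."""
    before = cpu_times()
    time.sleep(interval)
    after = cpu_times()
    busy = 0
    for cpu, (idle_after, total_after) in after.items():
        idle_before, total_before = before[cpu]
        total_delta = total_after - total_before
        idle_delta = idle_after - idle_before
        if total_delta > 0 and (1 - idle_delta / total_delta) > threshold:
            busy += 1
    return busy

if __name__ == "__main__":
    print("cores busy in the last second:", busy_core_count())

Run it while the machine is doing its normal work; on most desktops the answer stays well below the core count except during things like a parallel kernel build.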