On Fri, 6 Jan 2006, Eric Dumazet wrote:
> David Lang wrote:
>> Ok, so if you have large numbers of CPUs and large page sizes it's not
>> useful. However, what about a 2-4 CPU machine with 4k page sizes, 8-32GB
>> of RAM (a not unreasonable Opteron system config) that will be running
>> 5,000-20,000 processes/threads?
>
> Don't forget that 'struct files_struct' is shared between the threads of
> one process, so you may benefit from this 'special cache' only if you
> plan to run 20,000 processes.
Ok, that's why I was asking.
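
To illustrate the distinction in userspace (a minimal sketch of my own, not
anything from the patch): a pthread shares the descriptor table with its
creator, so a close() in one thread is visible to the others, while a
fork()ed child only closes its own private copy:

/* fdshare.c - fd-table sharing: threads vs. fork.
 * A pthread is created with CLONE_FILES, so the process has a single
 * files_struct and a close() in the thread invalidates the fd for
 * everyone.  A forked child gets a copy, so its close() is private.
 * Build: gcc -o fdshare fdshare.c -lpthread
 */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <errno.h>
#include <pthread.h>
#include <sys/wait.h>

static int fd;

static void *close_in_thread(void *arg)
{
        close(fd);              /* shared table: main thread's fd dies too */
        return NULL;
}

int main(void)
{
        pthread_t t;
        ssize_t n;
        char buf[1];

        /* Case 1: fork().  The child's close() does not affect us. */
        fd = open("/dev/zero", O_RDONLY);
        if (fork() == 0) {
                close(fd);
                _exit(0);
        }
        wait(NULL);
        n = read(fd, buf, 1);
        printf("after close in forked child: read -> %zd\n", n);  /* 1 */
        close(fd);

        /* Case 2: thread (CLONE_FILES).  Its close() hits our table. */
        fd = open("/dev/zero", O_RDONLY);
        pthread_create(&t, NULL, close_in_thread, NULL);
        pthread_join(&t, NULL);
        n = read(fd, buf, 1);
        printf("after close in thread:       read -> %zd (errno %d)\n",
               n, errno);       /* -1, EBADF: the table was shared */
        return 0;
}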
>> I know people argue that programs that do such things are bad (and I
>> definitely agree that they aren't optimized), but the reality is that
>> some workloads are like that.  If a machine is being built for such
>> uses, configuring the kernel to better tolerate such use may be useful.
>
> If 20,000 processes run on a machine, I doubt the sysadmin's main
> problem is the placement of 'struct files_struct' in memory :)
I have some boxes that routinely sit at 3,500-4,000 processes (a
fork-heavy workload, ~1,400 forks/sec so far) that topple over when they
go much over 10,000 processes.

I'm only running 32-bit kernels with 1GB of RAM available (no highmem);
I've been assuming that RAM is my limiting factor, but it hasn't been
enough of an issue to really track down yet :-)
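
For reference, the fork rate can be watched with something like this
minimal sketch; it assumes only the standard 'processes' counter in
/proc/stat, which is a running total of forks since boot:

/* forkrate.c - sample the system-wide fork rate.
 * The "processes" line in /proc/stat counts forks since boot;
 * sampling it one second apart gives forks/sec.
 */
#include <stdio.h>
#include <unistd.h>

static unsigned long read_forks(void)
{
        unsigned long n = 0;
        char line[256];
        FILE *f = fopen("/proc/stat", "r");

        if (!f)
                return 0;
        while (fgets(line, sizeof(line), f))
                if (sscanf(line, "processes %lu", &n) == 1)
                        break;
        fclose(f);
        return n;
}

int main(void)
{
        unsigned long before = read_forks();

        sleep(1);
        printf("%lu forks/sec\n", read_forks() - before);
        return 0;
}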
David Lang
--
There are two ways of constructing a software design. One way is to make it so simple that there are obviously no deficiencies. And the other way is to make it so complicated that there are no obvious deficiencies.
-- C.A.R. Hoare