On Thu, 4 Aug 2005, Andi Kleen wrote:
> I noticed that even 64bit architectures have a ridiculously low
> max limit on shared memory segments by default:
>
> #define SHMMAX 0x2000000 /* max shared seg size (bytes) */
> #define SHMMNI 4096 /* max num of segs system wide */
> #define SHMALL (SHMMAX/PAGE_SIZE*(SHMMNI/16)) /* max shm system wide (pages) */
>
> Even on 32bit architectures it is far too small and doesn't
> make much sense. Does anybody remember why we even have this limit?
To be like the UNIXes.
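For reference, with a 4 KB PAGE_SIZE those defaults work out to:

	SHMMAX = 0x2000000 bytes                    = 32 MB per segment
	SHMMNI = 4096 segments
	SHMALL = (SHMMAX/PAGE_SIZE) * (SHMMNI/16)
	       = 8192 * 256 = 2097152 pages         ~ 8 GB system wide

so it is mainly the 32 MB per-segment cap that bites.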
> IMHO per process shm mappings should just be controlled by the normal
> process and global mappings with the same heuristics as tmpfs
> (by default max memory / 2 or more if shmfs is mounted with more)
> Actually I suspect databases will usually want to use more
> so it might even make sense to support max memory - 1/8*max_memory
>
> I would propose to get rid of shmmax completely
> and only keep the old shmall sysctl for compatibility.
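To put the suggested heuristic in concrete terms, here is a small userspace sketch (purely illustrative, not part of any patch) comparing a "half of RAM" ceiling with today's fixed SHMMAX:

	/* Illustrative only: print what a "half of physical RAM" shm ceiling
	 * would be on this machine, next to the current fixed SHMMAX default. */
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		long pages = sysconf(_SC_PHYS_PAGES);
		long page_size = sysconf(_SC_PAGE_SIZE);
		unsigned long long half_ram =
			(unsigned long long)pages / 2 * page_size;

		printf("half of RAM  : %llu bytes\n", half_ram);
		printf("SHMMAX today : %u bytes\n", 0x2000000u);
		return 0;
	}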
Anton proposed raising the limits last autumn, but I was a bit
discouraging back then, having noticed that even Solaris 9 was more
restrictive than Linux. They seem to be ancient traditional limits
which everyone knows must be raised to get real work done.
Is it possible that, if we raise the limits, installation
of this or that application will then lower them again?
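(For reference, these are the kernel.shmmax, kernel.shmall and kernel.shmmni
sysctls; database install guides already tell admins to adjust them, typically
with something like this in /etc/sysctl.conf:

	kernel.shmmax = 4294967296	# bytes, per segment
	kernel.shmall = 2097152		# pages, system wide
	kernel.shmmni = 4096

so whichever defaults we pick, installers may keep overriding them anyway.)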
I don't think my opinion is worth much on this:
what would the distro tuners like to see there?
Hugh
-