I noticed that even 64-bit architectures have a ridiculously low
limit on shared memory segment size by default:
#define SHMMAX 0x2000000                       /* max shared seg size (bytes) */
#define SHMMNI 4096                            /* max num of segs system wide */
#define SHMALL (SHMMAX/PAGE_SIZE*(SHMMNI/16))  /* max shm system wide (pages) */
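For scale, 0x2000000 is only 32 MB per segment. A minimal user-space
sketch, just to illustrate (the exact failure of course depends on the
running kernel's settings), that trips over the default limit:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
	/* ask for 64 MB, double the default SHMMAX of 32 MB */
	int id = shmget(IPC_PRIVATE, 64UL << 20, IPC_CREAT | 0600);

	if (id < 0)
		printf("shmget: %s\n", strerror(errno)); /* EINVAL here */
	else
		shmctl(id, IPC_RMID, NULL);
	return 0;
}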
Even on 32-bit architectures it is far too small and doesn't
make much sense. Does anybody remember why we even have this limit?
IMHO per-process shm mappings should just be controlled by the normal
process limits, and global mappings by the same heuristics as tmpfs
(by default half of total memory, or more if shmfs is mounted with a
larger size). Actually I suspect databases will usually want more,
so it might even make sense to allow max memory - 1/8*max_memory
(i.e. 7/8 of RAM).
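To make the numbers concrete, here is a sketch of the two heuristics,
computed in user space with sysconf(); in-kernel this would look at
totalram_pages instead, the program is purely illustrative:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
	unsigned long pages = (unsigned long)sysconf(_SC_PHYS_PAGES);

	/* tmpfs-style default: half of physical memory */
	printf("default:  %lu pages\n", pages / 2);
	/* database-friendly: max memory - 1/8*max_memory */
	printf("generous: %lu pages\n", pages - pages / 8);
	return 0;
}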
I would propose to get rid of shmmax completely
and only keep the old shmall sysctl for compatibility.
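The shmall knob already lives in procfs; nothing new would be needed
there. A trivial sketch of reading the current value (note it is
counted in pages, not bytes):

#include <stdio.h>

int main(void)
{
	unsigned long pages;
	FILE *f = fopen("/proc/sys/kernel/shmall", "r");

	if (f && fscanf(f, "%lu", &pages) == 1)
		printf("shmall: %lu pages\n", pages);
	if (f)
		fclose(f);
	return 0;
}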
Comments?
-Andi