On Thu, Aug 04, 2005 at 03:23:38PM +0200, Andi Kleen wrote:
> On Thu, Aug 04, 2005 at 02:19:21PM +0100, Hugh Dickins wrote:
> > On Thu, 4 Aug 2005, Andi Kleen wrote:
> >
> > > I noticed that even 64bit architectures have a ridiculously low
> > > max limit on shared memory segments by default:
> > >
> > > #define SHMMAX 0x2000000 /* max shared seg size (bytes) */
> > > #define SHMMNI 4096 /* max num of segs system wide */
> > > #define SHMALL (SHMMAX/PAGE_SIZE*(SHMMNI/16)) /* max shm system wide (pages) */
> > >
> > > Even on 32bit architectures it is far too small and doesn't
> > > make much sense. Does anybody remember why we even have this limit?
> >
> > To be like the UNIXes.
>
> Ok, no other more fundamental reason ? :)
> I cannot think of any at least.
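For reference, with the usual 4 KB PAGE_SIZE those defaults work out to
SHMMAX = 0x2000000 bytes = 32 MB per segment, and
SHMALL = (0x2000000/4096) * (4096/16) = 8192 * 256 = 2097152 pages,
i.e. roughly 8 GB of SHM system-wide.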
Those supply DEFAULT values at boot time, and they can be
adjusted with sysctl. The existence of the limits is good.
Their easy tunability (even easier than on Solaris, where
you can tune them only with a reboot) is even better.
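As a rough sketch of that run-time tunability (assuming a kernel with
procfs mounted; /proc/sys/kernel/shmmax is the standard path, error
handling kept minimal):

#include <stdio.h>

/* Print the current system-wide SHMMAX as exposed through procfs. */
int main(void)
{
        FILE *f = fopen("/proc/sys/kernel/shmmax", "r");
        unsigned long long shmmax;

        if (!f || fscanf(f, "%llu", &shmmax) != 1) {
                perror("/proc/sys/kernel/shmmax");
                return 1;
        }
        fclose(f);
        printf("kernel.shmmax = %llu bytes (%llu MB)\n",
               shmmax, shmmax >> 20);
        /* Raising it works the same way (root only): write the new
         * byte count back to the same file, or use sysctl -w. */
        return 0;
}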
SHM resources are non-swappable, so by default I would not
let user programs allocate very much SHM space at all.
That is usually spelled: "denial-of-service attack".
For that reason I would not raise the built-in defaults either.
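To illustrate the kind of allocation in question, a minimal sketch
(segment size picked to match the current SHMMAX; any unprivileged
process can do this in a loop until the limits stop it):

#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
        /* Ask for one SHMMAX-sized (32 MB) SysV shared memory segment. */
        int id = shmget(IPC_PRIVATE, 32 * 1024 * 1024, IPC_CREAT | 0600);

        if (id < 0) {
                perror("shmget");       /* EINVAL or ENOSPC once limits bite */
                return 1;
        }
        printf("got segment id %d\n", id);
        /* Without shmctl(id, IPC_RMID, NULL) the segment outlives the process. */
        return 0;
}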
...
>
> I think we should just get rid of the per process limit and keep
> the global limit, but make it auto tuning based on available memory.
Err... No thanks! I would prefer even finer-grained control
over how much SHM somebody can allocate. For a normal user the value
might be zero, but for users in a group "SHM1" there could be a limit
of N MB, etc. (Except that such mechanisms are rather complex...)
For dedicated servers there is no problem with a single global limit
whose default value is fairly high, but pick
any machine with multiple users running their own programs...
Consider all of them hostile (the clueless can do as much damage
as the intentionally hostile).
Mmm... Apparently X (and/or other parts of the desktop) does ask for
a number of shared memory segments, so the default user allocation
limit can't be zero.
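A quick way to see what is actually in use on a given desktop is the
Linux-specific SHM_INFO operation that ipcs(1) uses (a sketch; on glibc
the struct shm_info declaration needs _GNU_SOURCE):

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
        struct shm_info info;

        /* Linux-specific: fills in system-wide SysV SHM usage counters. */
        if (shmctl(0, SHM_INFO, (struct shmid_ds *) &info) < 0) {
                perror("shmctl(SHM_INFO)");
                return 1;
        }
        printf("segments in use: %d, total SHM pages: %lu\n",
               info.used_ids, info.shm_tot);
        return 0;
}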
> That is still not very nice because that would likely keep it < available
> memory/2, but I suspect databases usually want more than that. So
> I would even make it bigger than tmpfs for reasonably big machines.
> Let's say
>
> if (main memory >= 1GB)
>         maxmem = main memory - main memory/8
> else
>         maxmem = main memory / 2
>
> possibly increase the 4096 segments limit too, it seems quite low,
> or also auto-tune it based on memory.
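In kernel terms that heuristic is only a few lines at shm init time;
as a userspace sketch of the same arithmetic (sysinfo(2) is just my
choice for illustration, not part of the proposal):

#include <stdio.h>
#include <sys/sysinfo.h>

int main(void)
{
        struct sysinfo si;
        unsigned long long total, maxmem;

        if (sysinfo(&si) < 0) {
                perror("sysinfo");
                return 1;
        }
        total = (unsigned long long) si.totalram * si.mem_unit;

        /* Proposed auto-tuning: leave 1/8 of memory free on big boxes,
         * allow only half of memory on small ones. */
        if (total >= 1ULL << 30)
                maxmem = total - total / 8;
        else
                maxmem = total / 2;

        printf("auto-tuned SHM limit: %llu MB of %llu MB total\n",
               maxmem >> 20, total >> 20);
        return 0;
}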
>
> One possible problem with getting rid of /proc/sys/kernel/shmmni
> would be that some programs might read it and fail if it's not available.
> So I would probably keep it read-only but always return LONG_MAX.
>
> > I don't think my opinion is worth much on this:
> > what would the distro tuners like to see there?
>
> SUSE has shipped larger default limits for a long time.
> And all the databases and some other software document
> increasing these values.
If there were kernels optimized for database servers, then
the hard-wired defaults might be raised, of course. On the other hand,
the sysadmin knows best, and we have adjustment tools that require
neither a kernel recompile nor even a reboot to take effect.
> -Andi
/Matti Aarnio