On Thu, Aug 04, 2005 at 03:49:37PM -0700, Chen, Kenneth W wrote:
> Andi Kleen wrote on Thursday, August 04, 2005 6:24 AM
> > I think we should just get rid of the per-process limit and keep
> > the global limit, but make it auto-tuning based on available memory.
> > That is still not very nice because that would likely keep it below
> > available memory/2, but I suspect databases usually want more than that.
> > So I would even make it bigger than tmpfs's default for reasonably big
> > machines.
> > Let's say
> >
> > if (main memory >= 1GB)
> > maxmem = main memory - main memory/8
>
> This might be too low on large systems. We usually stress shm pretty hard
> for db applications and usually use more than 87% of total memory in just
> one shm segment. So I prefer either no limit or a tunable.
By "large system" you mean >32GB, right?
I think on large systems some tuning is reasonable because they likely
have trained admins. I'm more worried about reasonable defaults for the
class of systems with 0-4GB.
The /8 was to account for the overhead of page tables and mem_map and to
leave some other memory for the system, but you're right that the overhead
might be smaller with hugetlbfs.
-Andi
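
For reference, a minimal userspace sketch of the heuristic quoted above.
Everything in it is illustrative: the name shm_default_limit, the
below-1GB fallback of memory/2 (a case the thread does not spell out),
and the argument-passing itself; a real kernel implementation would
derive the figure from its own memory accounting at boot.

#include <stdio.h>

/* Illustrative auto-tuning of the global shm limit, following the rule
 * quoted above: on machines with >= 1GB, default the limit to total
 * memory minus 1/8th, leaving the rest for mem_map, page tables and
 * the system itself.  The below-1GB branch (memory/2, matching the
 * tmpfs default) is an assumption, not something stated in the thread. */
static unsigned long long shm_default_limit(unsigned long long main_memory)
{
	const unsigned long long one_gb = 1024ULL * 1024 * 1024;

	if (main_memory >= one_gb)
		return main_memory - main_memory / 8;	/* keep ~87.5% */
	return main_memory / 2;
}

int main(void)
{
	unsigned long long sizes[] = {
		512ULL << 20,	/* 512MB */
		4ULL << 30,	/*   4GB */
		32ULL << 30,	/*  32GB */
	};

	for (int i = 0; i < 3; i++)
		printf("%llu MB total -> %llu MB shm limit\n",
		       sizes[i] >> 20, shm_default_limit(sizes[i]) >> 20);
	return 0;
}

For scale on the /8 reservation: with 4KB pages and a struct page of
roughly 64 bytes, mem_map alone costs on the order of 64MB per 4GB of
RAM (about 1.6%), and the page tables mapping a big shm segment add to
that. hugetlbfs shrinks the page-table share considerably, which is the
point about the overhead being smaller.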