Re: malloc and 'Active' memory

At 11:25 AM -0400 8/1/05, Deron Meranda wrote:
>On 8/1/05, Matthijs van der Klip <matthijs@xxxxxxxx> wrote:
>> On Fri, 29 Jul 2005, Rick Stevens wrote:
>> > Matthijs van der Klip wrote:
>> > Well, malloc() will fail if you request a chunk of memory and there
>> > isn't a SINGLE chunk available of that size.  So if memory gets fragged,
>> > there isn't a single 7GB chunk available and malloc() will fail.
>> > fillmem allocates in smaller chunks, then releases it all so the
>> > memory defragger can clean things up.
>>
>> I see what you mean and this was entirely my first thought when I ran into
>> this problem. However I was told (true or not true) that the malloc
>> implementation on Fedora Core 4 could not suffer from memory fragmentation
>> in the way I described (i.e. the same way you describe).
>
>The malloc(3) function is entirely userspace code, part of glibc.  It's
>not a kernel feature.
>
>The kernel usually allocates memory to processes via the sbrk(2) system
>call in whole pages.  From a user-process perspective this cannot be
>fragmented since the pages are virtually mapped into a linear address
>space regardless of their actual physical address.  The malloc(3)
>function though will use mmap(2) instead of sbrk(2) when allocating
>very large chunks of memory; although it's still only getting whole
>pages at a time.
>
>It is true though that the malloc(3) function in glibc is very advanced and
>is mostly immune to many traditional forms of memory fragmentation.
>For an overview of the algorithms it uses read this document by
>Doug Lea: http://gee.cs.oswego.edu/dl/html/malloc.html
>
>It's not exactly the same as the Linux implementation, but it's pretty
>close.  BTW the glibc version comes from ptmalloc2 by Wolfram Gloger,
>which is found here:  http://www.malloc.de/en/index.html
>
>Oh, from the malloc.c source code under the function free() we
>have this comment:
>
> * Unless disabled (using mallopt), freeing very large spaces will
> * when possible, automatically trigger operations that give
> * back unused memory to the system, thus reducing program footprint.
>
>You can fine-tune how aggressively memory is released back
>to the system by using the mallopt(3) call with the
>M_TRIM_THRESHOLD option.  Or you can manually force a release using
>the malloc_trim(3) function.  (Only non-fragmented memory is
>released.)

A task that has been running for some time may (will!) have fragmented
virtual address space.  A new task should not (though it is possible, say,
if a shared library has been set to load at a poorly chosen fixed address).

When a task exits, all its (non-shared) memory is returned to the OS.
Shared memory is returned to the OS when the last sharing task exits.

In the case Matthijs reports, he is exiting the old task and then starting
a new task.  The OS should be able to make all the old task's memory
available to the new task.  Perhaps the kernel has been "optimized" so that
it reclaims the released memory in some lazy, slow fashion, or perhaps (as
someone suggested) some other task is holding on to shared memory.  In this
case that would be some part of the configure / make / make install run
(the GNU Toolchain).

Possibly ps can be used to see whether something starts during the build
and doesn't exit afterward?
____________________________________________________________________
TonyN.:'                       <mailto:tonynelson@xxxxxxxxxxxxxxxxx>
      '                              <http://www.georgeanelson.com/>

