On Fri, 29 Jul 2005, Matthijs van der Klip wrote:

> Thanks for your answer. I'd like to clear things up: I don't think it's
> MySQL that's not releasing the memory. I can do a clean boot of the
> machine without MySQL running (chkconfig off'ed), do a couple of compiles
> and see the 'Active' memory in /proc/meminfo rising steadily towards 6GB.
> If I try to start MySQL after all this, it won't start because the malloc
> fails. After a reboot I can start MySQL again.
>
> So basically this brings my question down to: is there some way to find
> out why such a huge amount of memory is being 'Active' and would it be
> possible to see what it is used for (I looked at slabtop, but 'Active'
> memory obviously isn't 'Slab')? Secondly, why doesn't a malloc reclaim
> the 'Active' memory?

I have been playing around with 'fillmem' from the 'memtest' suite:

http://carpanta.dc.fi.udc.es/~quintela/memtest/

This sheds some light on the situation:

A) Running it and instructing it to use close to 8GB of memory, it
reclaims the Active memory and I'm able to start MySQL again without
rebooting. This is far from ideal, but it shows it is indeed possible
to reclaim Active memory.

B) If fillmem is able to allocate close to 8GB using simple malloc
calls, why isn't MySQL/InnoDB?

Some experimenting with the size of the blocks (default 1MB) leads me
to believe it works roughly as follows:

1) When memory is allocated through a malloc call, the request is
either denied or granted based on the total amount of memory minus the
amount of Active memory. After doing some heavy compiles my Active
memory rises to close to 6GB, so in that situation the largest block I
can allocate using malloc is 8GB - 6GB = 2GB.

2) Interestingly enough it is possible to allocate far more memory than
explained in 1) by using multiple malloc calls. I adjusted the fillmem
sources to allocate 1GB blocks (instead of the default 1MB ones) and it
is perfectly able to allocate 8 of them (a total of 8GB), while a
single malloc for more than 2GB fails. So a single malloc call is being
tested against the available memory (Total - Active), but multiple
calls are not (see the small test program at the end of this mail).

3) Allocating memory as in 1) and 2) is one thing, but when it comes to
actually using it (filling the memory with random values in the case of
fillmem) some more magic is involved. Monitoring the amount of Active
memory while fillmem is running shows that the memory only becomes
active when it is actually used (filled with random values). This is
where the actual amount of physical memory begins to matter: when I
instruct fillmem to allocate (and fill) 9GB of memory, which is more
than the 8GB of physical memory I have, the OOM killer eventually comes
into action and kills the fillmem process.

4) When fillmem is finished, its Active memory is released again, and
the total amount of Active memory (/proc/meminfo) drops close to zero.

This all raises a new question however: in the case of fillmem the
Active memory is released again, so why does this not happen after a
configure / make / make install run? I have a particular compile ready
that leaves close to 1GB of Active memory behind after each run. After
running it a couple of times, the total amount of Active memory rises
to close to 6GB again. Does this mean something in this compile process
is leaking memory? Would that explain why the Active memory is not
released again as soon as the compiler, linker etc. have finished their
work?
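For what it's worth, here is a rough sketch of the kind of test I mean
(this is not the actual fillmem code, and the 4GB / 1GB sizes are just
what makes sense on this 8GB, 64-bit box). It illustrates points 1) to
3) above: with Active memory high, the single large malloc fails while
the same total in 1GB pieces succeeds, and the memory only shows up as
'Active' once it is actually written to:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define GB (1024UL * 1024UL * 1024UL)

int main(void)
{
    void *big;
    void *blocks[8];
    size_t i;

    /* One big allocation: with ~6GB 'Active' this one is refused. */
    big = malloc(4 * GB);
    printf("single 4GB malloc: %s\n", big ? "ok" : "FAILED");
    free(big);

    /* The same total amount in 1GB pieces is granted. */
    for (i = 0; i < 8; i++) {
        blocks[i] = malloc(GB);
        printf("1GB block %lu: %s\n",
               (unsigned long) i, blocks[i] ? "ok" : "FAILED");
    }

    /* The memory only turns 'Active' when it is written to; watch
     * /proc/meminfo while this loop runs. */
    for (i = 0; i < 8; i++)
        if (blocks[i])
            memset(blocks[i], 0xAA, GB);

    for (i = 0; i < 8; i++)
        free(blocks[i]);

    return 0;
}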
I am not the first one to experience this problem by the way:

http://lists.debian.org/debian-kernel/2004/12/msg00410.html

Best regards,

--
Matthijs van der Klip
System Administrator
Spill E-Projects
The Netherlands