> From: Gordon Messmer <yinyang@xxxxxxxxx>
> Subject: Re: How much swap?
>
> Uno Engborg wrote:
>>
>> Would I need a swap anyway. There is some old rule of thumb to have
>> twice as much virtual memory as you have physical RAM, but that
>> sounds a bit ridiculous as it would take a lot of time to swap
>> in/out this much memory from disk.
>
> I used to think it was kind of ridiculous, myself, until I discovered
> a quirk of Posix's behavior. If I get any of these details wrong, I
> invite corrections:
>
> When a process called fork() on an old unix system, the OS required
> an amount of free memory equal to the size of the process, plus the
> size of a process table entry. If this memory wasn't free, fork()
> would fail. After fork(), the OS would copy the full set of memory
> from the parent process to the new process. Since fork() is so often
> followed by exec(), which throws away all of that memory, modern
> unix systems don't copy the whole set of memory when a process
> forks. However, they still require that there is enough memory to do
> so (at least normally; Linux has an "overcommit" feature that you
> can enable).

I'm not sure whether the fork()/exec() issue applies to paged memory
as well as to swap (i.e., to individual memory pages being paged out
versus the entire process being swapped out). It's been a long time
for me too. :D

I'm also not certain whether this applies to Linux, but I recall that
certain other Unices require twice as much page space as physical
RAM, depending on configuration. For example, some OSes automatically
write to both the in-memory page and the corresponding page on disk.
The idea is that if the in-memory page needs to be paged out, the
kernel can do so with just an update to the page table rather than a
full write, which makes a page-out effectively instantaneous. I
understand that in certain VM implementations it was simpler to keep
a direct-mapped copy of all pages on disk; that meant that to
increase the available memory (real + VM) at all, you needed at least
as much page space as you had real memory.

With very large memory systems (above 4G, for example), there are
probably other considerations. The page table on a 32-bit system may
be a fixed size, but on machines with huge memories the page table
itself may grow and shrink. The consequence is that allocating too
large a page area could end up shrinking your usable physical memory,
and on a system that's already loaded near capacity, adding page
space could cause worse problems.

I'm not sure that I'd run entirely without page space, however. I
seem to recall that unused pages can be paged out entirely; i.e., if
a process isn't doing much, it won't cause a memory hit. Also keep in
mind that 8G of disk space may not be all that much relative to the
size of today's disks: 32M of page/swap space back in the day was
probably relatively more than 8G is today.

> With 8GB of RAM, that may not affect you directly, but it may, too.
> Let's say that you have no swap, and you were using a 3d modeler, or
> high-end graphics package. If that application was using 5GB of your
> 8, and needed to launch a helper application, like an out-of-process
> perl or python script, it wouldn't be able to do so. Even though
> that script only needs a few MB of the GB you have free, the parent
> is too big to fork(), so it can't spawn new processes.
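If you want to see that failure mode concretely, here's a minimal
sketch of the fork()/exec() pattern Gordon describes. The perl
one-liner is just a stand-in for any small helper, and whether fork()
actually fails this way on Linux depends on the overcommit setting
(/proc/sys/vm/overcommit_memory; 2 means strict accounting, no
overcommit):

    /* Sketch: a large parent spawning a small helper.  With strict
     * memory accounting, fork() must be able to reserve enough
     * swap/RAM to duplicate the parent, even though the child will
     * exec() immediately and throw that copy away. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();

        if (pid < 0) {
            /* The failure discussed above: the parent is too big to
             * fork(), so even a tiny helper can't be spawned. */
            if (errno == ENOMEM)
                fprintf(stderr, "fork: not enough memory to duplicate parent\n");
            else
                fprintf(stderr, "fork: %s\n", strerror(errno));
            return 1;
        }

        if (pid == 0) {
            /* Child: replace the (notionally huge) parent image with
             * a small helper; the inherited address space is freed. */
            execlp("perl", "perl", "-e", "print qq{helper ran\\n};", (char *)NULL);
            _exit(127);  /* reached only if exec fails */
        }

        waitpid(pid, NULL, 0);  /* parent reaps the helper */
        return 0;
    }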
>
> So, when you decide whether or not to follow the traditional advice
> offered about the amount of swap to allocate, you should first
> understand how the Linux VM works:
> http://www.redhat.com/magazine/001nov04/features/vm/
>
> ...if you don't want to spend the time learning the details, I'd
> just stick with the advice offered.

Take all statements with a grain of salt. I'm old and doddering now,
and bit rot has surely set in on my recollections.

--
* The Digital Hermit   http://www.digitalhermit.com
* Unix and Linux Solutions   kwan@xxxxxxxxxxxxxxxxx