On Sat, 01 Dec 2007 15:20:29 +0530
Balbir Singh <[email protected]> wrote:
> > In our experience, users are not good at figuring out how much memory
> > they really need. In general they tend to massively over-estimate
> > their requirements. So we want some way to determine how much of its
> > allocated memory a job is actively using, and how much could be thrown
> > away or swapped out without bothering the job too much.
>
> One would prefer that the kernel provide the mechanism and user
> space provide the policy. The algorithms to assign limits can live
> in user space, supported by a good set of statistics.
With the /proc/refaults info, we can measure how much extra
memory each process group needs, if any.
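A policy daemon in user space could poll that file directly. As a
minimal sketch (the two-column "group count" line format assumed
below is invented for illustration; the real file would export
whatever format the refault statistics code settles on):

/*
 * Userspace sketch: pull one group's refault count out of a
 * /proc/refaults style file.  Returns -1 if the group is not
 * found or the file cannot be opened.
 */
#include <stdio.h>
#include <string.h>

static long refaults_for_group(const char *group)
{
	FILE *f = fopen("/proc/refaults", "r");
	char name[64];
	long count;

	if (!f)
		return -1;

	while (fscanf(f, "%63s %ld", name, &count) == 2) {
		if (!strcmp(name, group)) {
			fclose(f);
			return count;
		}
	}

	fclose(f);
	return -1;
}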
As for how much memory a process group needs: at pageout time
we can check the fraction of the group's pages that were recently
accessed. If 60% of the pages were recently accessed at pageout
time, and the process group is spending little or no time waiting
for refaults, then the other 40% of the pages are *not* recently
accessed and we can probably reduce the amount of memory assigned
to this group.
Page cache that has only been accessed once can also be
counted as "not recently accessed", since streaming file
IO should not increase the working set of the process group.
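Putting the two rules together, the shrink decision might look
something like the pseudo-C below. Every name and threshold here
is made up for illustration; none of it is existing kernel API:

/*
 * Per-group numbers collected at pageout time.  Use-once page
 * cache (streaming file IO) is deliberately not counted as
 * referenced, so it cannot inflate the working set.
 */
struct group_scan_stats {
	unsigned long scanned;	     /* pages scanned at pageout time */
	unsigned long referenced;    /* recently accessed more than once */
	unsigned long refault_wait;  /* time spent waiting on refaults */
};

static int group_can_shrink(struct group_scan_stats *s)
{
	unsigned long ref_pct;

	if (!s->scanned)
		return 0;

	ref_pct = s->referenced * 100 / s->scanned;

	/*
	 * If a sizable chunk of the pages (say 40%) was not
	 * recently accessed, and the group is spending no time
	 * waiting for refaults, it can probably give some memory
	 * back.  The 60% threshold is arbitrary.
	 */
	return ref_pct <= 60 && s->refault_wait == 0;
}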
--
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it." - Brian W. Kernighan