Alan wrote:
The "oom-thresh" value maps to the max expected memory consumption for
that process. As long as a process uses less memory than the specified
threshold, then it is immune to the oom-killer.
> You've just introduced a deadlock. What happens if nobody is over that
> predicted memory and the kernel uses more resources?
Based on the discussion with Jesper, we fall back to regular behaviour.
(Or possibly hang or reboot, if we added another switch).
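To make the intent concrete, here is a toy user-space model of the
selection policy being proposed. This is a sketch only, not the patch
itself: "oom-thresh" is not an existing kernel interface, and the task
fields and numbers below are made up for illustration. Tasks still under
their declared threshold are skipped; if nobody is over budget we fall
back to the ordinary "largest consumer" pick, so selection never wedges.

/* Toy model of the proposed policy, not kernel code.  "thresh_kb"
 * stands in for the per-process oom-thresh value discussed here. */
#include <stdio.h>

struct task { const char *name; long rss_kb; long thresh_kb; };

/* Prefer the biggest task that has exceeded its declared threshold;
 * if every task is still under budget, fall back to the regular
 * worst-offender pick. */
static struct task *pick_victim(struct task *t, int n)
{
	struct task *over = NULL, *biggest = NULL;
	int i;

	for (i = 0; i < n; i++) {
		if (!biggest || t[i].rss_kb > biggest->rss_kb)
			biggest = &t[i];
		if (t[i].rss_kb > t[i].thresh_kb &&
		    (!over || t[i].rss_kb > over->rss_kb))
			over = &t[i];
	}
	return over ? over : biggest;
}

int main(void)
{
	struct task tasks[] = {
		{ "critical-daemon",  4096,  8192 },	/* protected       */
		{ "leaky-app",       20480,  8192 },	/* over its budget */
		{ "shell",            1024,  2048 },
	};

	printf("victim: %s\n", pick_victim(tasks, 3)->name);
	return 0;
}

With these made-up numbers it prints "victim: leaky-app": the protected
daemon stays under its budget and is ignored, exactly the behaviour
described above.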
> > On an embedded platform this allows the designer to engineer the system
> > and protect critical apps based on their expected memory consumption.
> > If one of those apps goes crazy and starts chewing additional memory,
> > then it becomes vulnerable to the oom killer while the other apps remain
> > protected.
> That is why we have no-overcommit support. Now there is an argument for
> a meaningful rlimit-as to go with it, and together I think they do what
> you really need.
No overcommit only protects the system as a whole, not any particular
process. The purpose of oom-thresh is to protect specific daemons from
being killed when the system as a whole is short on memory. Same
rationale as for oomadj, but a different knob to twiddle.
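For comparison, here is a sketch of how a daemon can already use the
existing knobs mentioned above: cap its own address space with
RLIMIT_AS and bias the oom-killer away from itself via oom_adj. The
values are illustrative only; the oom_adj interface of this era was
later superseded by oom_score_adj, and system-wide no-overcommit is a
separate sysctl (vm.overcommit_memory=2) not shown in code.

/* Illustrative only: limit values and the -17 "never kill" setting
 * are examples, not recommendations from this thread. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
	/* Hard cap on this process's address space (128 MB here). */
	struct rlimit as = { 128UL << 20, 128UL << 20 };
	if (setrlimit(RLIMIT_AS, &as) != 0)
		perror("setrlimit(RLIMIT_AS)");

	/* Bias the oom-killer away from this process.  -17 was the
	 * "never kill" value for the old oom_adj interface. */
	FILE *f = fopen("/proc/self/oom_adj", "w");
	if (f) {
		fprintf(f, "-17\n");
		fclose(f);
	} else {
		perror("fopen(/proc/self/oom_adj)");
	}

	/* ... daemon work would continue here ... */
	return 0;
}

The difference from oom-thresh is that oom_adj protection is
unconditional: the daemon stays protected even if it leaks, whereas the
proposal above withdraws protection once the process exceeds its
declared budget.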
Chris