Christoph Lameter <[email protected]> wrote:
>
> Some allocations are restricted to a limited set of nodes (due to memory
> policies or cpuset constraints). If the page allocator is not able to find
> enough memory then that does not mean that overall system memory is low.
>
> In particular, going postal and more or less randomly shooting at processes
> is not likely to help the situation but may just lead to suicide (the
> whole system coming down).
>
> It is better to signal to the process that no memory exists given the
> constraints that the process (or the configuration of the process) has
> placed on the allocation behavior. The process may be killed but then the
> sysadmin or developer can investigate the situation. The solution is similar
> to what we do when running out of hugepages.
>
> This patch adds a check before we kill processes. At that point
> performance considerations do not matter much, so we simply scan the
> zonelist and reconstruct a list of nodes. If that list does not contain
> all online nodes then this is a constrained allocation and we should
> kill the current process.
Looks sane, thanks. I made constrained_alloc() inline, to give the
compiler the best-possible chance of eliminating the impossible-on-UMA
switch cases.
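
For reference, a minimal user-space sketch of the idea being discussed (not the kernel patch itself): collect the node IDs the allocation was allowed to use and compare that set against the online nodes; if some online node is missing, the allocation was constrained by a policy or cpuset rather than by a system-wide memory shortage. All names here (is_constrained_alloc, NodeMask, node_bit) are illustrative only and do not correspond to the kernel's actual identifiers.

	#include <stdbool.h>
	#include <stdio.h>

	/* Illustrative stand-in for the kernel's nodemask: one bit per node. */
	typedef unsigned long long NodeMask;

	static inline NodeMask node_bit(int node) { return 1ULL << node; }

	/*
	 * Sketch of the check described above: gather the nodes the allocator
	 * was permitted to use (the "zonelist") and compare against the set of
	 * online nodes.  If any online node is outside the allowed set, the
	 * allocation was constrained (e.g. by a memory policy or cpuset), so
	 * killing arbitrary processes would be the wrong response.
	 */
	static bool is_constrained_alloc(const int *allowed_nodes, int n_allowed,
	                                 NodeMask online_nodes)
	{
	    NodeMask allowed = 0;

	    for (int i = 0; i < n_allowed; i++)
	        allowed |= node_bit(allowed_nodes[i]);

	    /* Constrained if some online node is not in the allowed set. */
	    return (online_nodes & ~allowed) != 0;
	}

	int main(void)
	{
	    /* Four online nodes, but the policy binds the allocation to 0-1. */
	    NodeMask online = node_bit(0) | node_bit(1) | node_bit(2) | node_bit(3);
	    int policy_nodes[] = { 0, 1 };

	    printf("constrained: %s\n",
	           is_constrained_alloc(policy_nodes, 2, online) ? "yes" : "no");
	    return 0;
	}

In the kernel itself the equivalent walk is over the zonelist passed to the allocator, and marking the function inline (as noted above) lets the compiler discard the NUMA-only branches entirely on UMA builds.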