Re: Possible ways of dealing with OOM conditions.

> Let me briefly describe your approach and possible drawbacks in it.
> You start reserving some memory when the system is under memory
> pressure. When the system is in real trouble, you start using that
> reserve for special tasks, mainly for the network path, to allocate
> packets and process them in order to get some memory swapped out.
> 
> So, the problems I see here are the following:
> 1. It is possible that when you start to create a reserve, there will
> not be enough memory at all. So the solution is to reserve in advance.

Swap is usually enabled at startup, but sure, if you want you can mess
this up.

> 2. You differentiate by hand between critical and non-critical
> allocations by specifying some kernel users as allowed to allocate from
> the reserve.

True: only the sockets that are needed for swap, no-one else.
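For illustration, a minimal sketch of what marking such a swap-critical socket could look like. SOCK_MEMALLOC and __GFP_MEMALLOC are the names later mainline kernels use; the helper itself is hypothetical, not the actual API of this patch set:

	#include <net/sock.h>

	/*
	 * Sketch only: flag a socket that the swap path depends on, so that
	 * allocations done on its behalf may dip into the emergency reserve.
	 * SOCK_MEMALLOC / __GFP_MEMALLOC are later mainline names; the helper
	 * is hypothetical.
	 */
	static void mark_swap_critical_socket(struct sock *sk)
	{
		sock_set_flag(sk, SOCK_MEMALLOC);	/* only swap sockets get this */
		sk->sk_allocation |= __GFP_MEMALLOC;	/* its allocations may use the reserve */
	}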

> This does not prevent the NVIDIA module from
> allocating from that reserve too, does it?

All users of the NVidiot crap deserve all the pain they get.
If it breaks they get to keep both pieces.

> And you artificially limit the
> system to processing only tiny bits of what it must do, thus potentially
> overlooking paths which must use the reserve too.

How so? I cover pretty much every allocation needed to process an skb by
setting PF_MEMALLOC - the only drawback there is that the reserve might
not actually be large enough, because it covers more allocations than
were considered when sizing it. (That's one of the TODO items: validate
the reserve function's parameters.)

> So, the solution is to have a reserve in advance, and manage it using a
> special path when the system is in OOM. So you will have a network
> memory reserve, which will be used when the system is in trouble. It is
> very similar to what you had.
> 
> But the whole reserve may otherwise never be used at all, so it should
> be usable, but not by those who can create the OOM condition; thus it
> should be exported to, for example, the network only, and when the
> system is in trouble, the network would still be functional (although
> only the critical paths).

But the network can create OOM conditions for itself just fine. 

Consider the remote storage disappearing for a while (it got rebooted,
someone tripped over the wire, etc.). Now the rest of the network
traffic keeps coming and will queue up - because user-space is stalled,
waiting for more memory - and we run out of memory.

There must be a point where we start dropping packets that are not
critical to the survival of the machine.
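As a hedged sketch of that drop point, assuming reserve-backed skbs carry a pfmemalloc mark and swap-critical sockets are flagged - skb_pfmemalloc() and SOCK_MEMALLOC are the names later mainline uses, and the helper below is hypothetical:

	#include <linux/skbuff.h>
	#include <net/sock.h>

	/*
	 * Sketch of the drop decision: a packet that was fed from the
	 * emergency reserve but is not destined for a swap-critical socket
	 * does not help the machine make progress, so it is cheaper to drop
	 * it and give the memory back to the critical path.
	 */
	static bool drop_noncritical_skb(const struct sk_buff *skb,
					 const struct sock *sk)
	{
		return skb_pfmemalloc(skb) && !sock_flag(sk, SOCK_MEMALLOC);
	}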

> A further development of this idea is to prevent such an OOM condition
> altogether - by starting swapping early (but wisely) and reducing
> memory usage.

These just postpone execution but will not avoid it.


