Re: [RFC][PATCH 2/9] deadlock prevention core

On Mon, 2006-08-14 at 00:07 -0700, Andrew Morton wrote:
> On Mon, 14 Aug 2006 08:45:40 +0200
> Peter Zijlstra <[email protected]> wrote:
> 
> > On Sun, 2006-08-13 at 22:22 -0700, Andrew Morton wrote:
> > > On Mon, 14 Aug 2006 07:03:55 +0200
> > > Peter Zijlstra <[email protected]> wrote:
> > > 
> > > > On Sun, 2006-08-13 at 21:58 -0700, Andrew Morton wrote:
> > > > > On Mon, 14 Aug 2006 06:40:53 +0200
> > > > > Peter Zijlstra <[email protected]> wrote:
> > > > > 
> > > > > > Testcase:
> > > > > > 
> > > > > > Mount an NBD device as sole swap device and mmap > physical RAM, then
> > > > > > loop through touching pages only once.
> > > > > 
> > > > > Fix: don't try to swap over the network.  Yes, there may be some scenarios
> > > > > where people have no local storage, but it's reasonable to expect anyone
> > > > > who is using Linux as an "enterprise storage platform" to stick a local
> > > > > disk on the thing for swap.
> > > > 
> > > > I wish you were right, however there seems to be a large demand to go
> > > > diskless and swap over iSCSI because disks seem to be the nr. 1 failing
> > > > piece of hardware in systems these days.
> > > 
> > > We could track dirty anonymous memory and throttle.
> > > 
> > > Also, there must be some value of /proc/sys/vm/min_free_kbytes at which a
> > > machine is no longer deadlockable with any of these tricks.  Do we know
> > > what level that is?
> > 
> > Not sure; the theoretical maximum amount of memory one can 'lose' in
> > socket wait queues is well over the amount of physical memory we have
> > in machines today (even for SGI). This, combined with the fact that we
> > limit that memory only to avoid DoS attacks, could leave all memory
> > stuck in wait queues. Of course this becomes rather more unlikely with
> > ever larger amounts of memory, but unlikely is never a guarantee.
> 
> What is a "socket wait queue" and how/why can it consume so much memory?
> 
> Can it be prevented from doing that?
> 
> If this refers to the socket buffers, they're mostly allocated with
> at least __GFP_WAIT, aren't they?

Wherever it is that packets go when the local end is tied up and cannot
accept them immediately. The simple, but probably wrong, calculation I
did for Evgeniy is: suppose we have 64k sockets, each socket can buffer
up to 128 packets, and each packet can be up to 16k large (rounded up
for jumbo frames); that makes for 128G of memory. The calculation is
wrong on several points (we can have more than 64k sockets, and I have
no idea whether 128 is right), but the order of magnitude doesn't get
any better.
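
Spelled out, that upper bound is just the product of those three rough
numbers:

  64k sockets * 128 packets/socket * 16k/packet
    = 2^16 * 2^7 * 2^14 bytes
    = 2^37 bytes
    = 128 GiB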

> > > > > That leaves MAP_SHARED, but mm-tracking-shared-dirty-pages.patch will fix
> > > > > that, will it not?
> > > > 
> > > > It will make it less likely. One can still have memory pressure; the
> > > > remaining bits of memory can still get stuck in socket queues for
> > > > blocked processes.
> > > 
> > > But there's lots of reclaimable pagecache around and kswapd will free it
> > > up?
> > 
> > Yes, however it is possible for kswapd and direct reclaim to block on
> > get_request_wait() for the nbd/iscsi request queue by sheer misfortune.
> 
> Possibly there are some situations where kswapd will get stuck on request
> queues.  But as long as the block layer is correctly calling
> set_queue_congested(), these are easily avoidable via
> bdi_write_congested().

Right, and regardless of what we end up doing, that might be a good
thing to do anyway.
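
Something like the following is how I read that; a rough sketch, not a
patch. bdi_write_congested() and backing_dev_info are the existing
interfaces, the wrapper and its name are only illustrative:

#include <linux/backing-dev.h>
#include <linux/sched.h>

/*
 * Skip writeout to a congested backing device instead of blocking in
 * get_request_wait(); reclaim can then move on to other pages and only
 * the writers that really have to will wait on the nbd/iscsi queue.
 */
static int may_write_to_bdi(struct backing_dev_info *bdi)
{
	if (current->flags & PF_SWAPWRITE)	/* dedicated swap writer, let it block */
		return 1;
	if (!bdi_write_congested(bdi))		/* queue has room, go ahead */
		return 1;
	if (bdi == current->backing_dev_info)	/* our own congestion, keep pushing */
		return 1;
	return 0;				/* congested: skip this page for now */
}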

> > In that case there will be no more reclaim; of course, the more active
> > processes we have, the unlikelier this becomes. Still, with the sheer
> > amount of CPU time invested in running Linux, it's not a gamble we're
> > likely to never lose.
> 
> I suspect that with mm-tracking-shared-dirty-pages.patch, a bit of tuning
> and perhaps some bugfixing we can make this problem go away for all
> practical purposes.  Particularly if we're prepared to require local
> storage for swap (the paranoid can use RAID, no?).
> 
> Seem to me that more investigation of these options is needed before we can
> justify adding lots of hard-to-test complexity to networking?

Well, my aim here, as disgusting as you might find it, is to get swap
over network working. I sympathise with your stance of "don't do that",
but I have been set this task and shall try to come up with something
that does not offend people.

As for being hard to test: I can supply some patches that make SROG (I
still find the name horrid) the default network allocator, so the code
paths can be exercised more easily. As for the dropping of packets, I
could supply a debug control to switch that on/off regardless of memory
pressure.
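
Something like this would do for that debug control (sketch only; the
knob and its name are made up and not part of the patch set):

#include <linux/module.h>
#include <linux/moduleparam.h>

/*
 * When non-zero, treat every fallback-allocated skb as droppable even
 * without memory pressure, so the drop path can be exercised in testing.
 */
static int emergency_drop_always __read_mostly;
module_param(emergency_drop_always, int, 0644);
MODULE_PARM_DESC(emergency_drop_always,
		 "drop fallback-allocated skbs regardless of memory pressure (debug)");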

As for overall complexity: a simple fallback allocator that kicks in
when the normal allocation path fails, plus some simple checks to drop
packets allocated in this fashion when they are not bound for critical
sockets, doesn't seem like a lot of complexity to me.
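
The receive-side check would be roughly the following (sketch; the
markers and helper names are illustrative, not the actual patch):

#include <linux/skbuff.h>
#include <net/sock.h>

/* Would test a marker set on the skb by the fallback (SROG) allocator. */
static inline int skb_from_emergency_pool(const struct sk_buff *skb)
{
	return 0;	/* placeholder for the real marker test */
}

/* Would test a flag set at setup time on the nbd/iscsi swap socket. */
static inline int sk_is_swap_critical(const struct sock *sk)
{
	return 0;	/* placeholder for the real flag test */
}

/*
 * Early in the protocol receive path: packets that were only allocatable
 * thanks to the fallback pool are freed again unless they are headed for
 * a socket that is needed to complete swap I/O.
 */
static inline int emergency_skb_should_drop(struct sk_buff *skb, struct sock *sk)
{
	if (skb_from_emergency_pool(skb) && !sk_is_swap_critical(sk)) {
		kfree_skb(skb);		/* give the memory back immediately */
		return 1;		/* caller treats the packet as dropped */
	}
	return 0;
}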

