On Monday August 14, [email protected] wrote:
> On Sun, 2006-08-13 at 22:22 -0700, Andrew Morton wrote:
> >
> > We could track dirty anonymous memory and throttle.
> >
> > Also, there must be some value of /proc/sys/vm/min_free_kbytes at which a
> > machine is no longer deadlockable with any of these tricks. Do we know
> > what level that is?
>
> Not sure; the theoretical maximum amount of memory one can 'lose' in
> socket wait queues is well over the amount of physical memory we have
> in machines today (even for SGI).  This, combined with the fact that
> we limit that memory in some way to avoid DoS attacks, could result
> in all memory being stuck in wait queues.  Of course this becomes
> less likely for ever larger amounts of memory, but unlikely is never
> a guarantee.
What is the minimum amount of memory we need to reserve for each
socket?  1K?  1 page?  Call it X.
Suppose that whenever a socket is created (or bound or connected or
whatever is right) we first allocate that much to a recv pool.
If any socket has less than X queued, then it is allowed to allocate
up to a total of X from the reserve pool. After that it can only
receive when memory can be allocated from elsewhere. Then we will
never block on recv.
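
To make the accounting concrete, here is a rough user-space sketch of
what I have in mind.  All the names (sock_account, rx_charge, the
page-sized X) are invented for illustration; this is not meant as real
kernel code, just the bookkeeping:

/*
 * Sketch of the per-socket reserve accounting described above.
 * Names are invented for illustration only.
 */
#include <stdbool.h>
#include <stddef.h>

#define X 4096				/* per-socket reserve, say one page */

struct sock_account {
	size_t queued;			/* bytes currently queued on this socket */
};

static size_t reserve_pool;		/* total reserve: X per live socket */

/* Called at socket creation (or bind/connect, whichever is right). */
static void sock_reserve_init(struct sock_account *sk)
{
	sk->queued = 0;
	reserve_pool += X;		/* creation would fail if this cannot be backed */
}

/*
 * Charge an incoming packet of 'len' bytes.  A socket with less than
 * X queued may draw on the reserve, but only up to a total of X;
 * beyond that it must succeed with an ordinary allocation, and if
 * that fails the packet is dropped rather than blocking.
 */
static bool rx_charge(struct sock_account *sk, size_t len, bool ordinary_alloc_ok)
{
	if (sk->queued + len <= X) {
		sk->queued += len;	/* comes out of the reserve */
		return true;
	}
	if (ordinary_alloc_ok) {
		sk->queued += len;	/* comes out of normal memory */
		return true;
	}
	return false;			/* drop; receive never blocks */
}
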
Note that X doesn't need to be as large as the biggest possible
incoming message.  It only needs to be enough to get an 'ack' over any
possible network storage protocol with any possible layering.  I
suspect that it is well within one page.
Would it be too much waste to reserve one page for every idle socket?
Does this have some fatal flaw?
NeilBrown