On Thu, 2007-08-23 at 14:11 +0400, Nikita Danilov wrote:
> Peter Zijlstra writes:
>
> [...]
>
>  > My idea is to extend kswapd, run cpus_per_node instances of kswapd per
>  > node for each of GFP_KERNEL, GFP_NOFS, GFP_NOIO. (basically 3 kswapds
>  > per cpu)
>  >
>  > whenever we would hit direct reclaim, add ourselves to a special
>  > waitqueue corresponding to the type of GFP and kick all the
>  > corresponding kswapds.
>
> There are two standard objections to this:
>
>  - direct reclaim was introduced to reduce memory allocation latency,
>    and going to scheduler kills this. But more importantly,

The part you snipped:

> > Here is where the 'special' part of the waitqueue comes into play.
> >
> > Instead of freeing pages to the page allocator, these kswapds would hand
> > out pages to the waiting processes in a round-robin fashion. Only if
> > there are no more waiting processes left would the page go to the buddy
> > system.

should deal with that; it allows processes to quickly get some memory.

>  - it might so happen that _all_ per-cpu kswapd instances are
>    blocked, e.g., waiting for IO on indirect blocks, or queue
>    congestion. In that case whole system stops waiting for IO to
>    complete. In the direct reclaim case, other threads can continue
>    zone scanning.

By running separate GFP_KERNEL, GFP_NOFS and GFP_NOIO kswapds this
should not occur, much like it does not occur now.

This approach would make it work pretty much like it does now, but
instead of letting each separate context run into reclaim, we would
have a fixed set of reclaim contexts which evenly distribute their
resulting free pages.

The possible downsides are:

 - more schedule()s, but I don't think these will matter when we're
   that deep into reclaim

 - less concurrency - but I hope 1 set per cpu is enough; we could up
   this if it turns out to really help.