On Sun, 20 May 2007, Peter Zijlstra wrote:
> I care about kernel allocations only. In particular about those that
> have PF_MEMALLOC semantics.
Hmmmm.. I wish I were more familiar with PF_MEMALLOC. Ccing Nick.
> - set page->reserve nonzero for each page allocated with
> ALLOC_NO_WATERMARKS; which by the previous point implies that all
> available zones are below ALLOC_MIN|ALLOC_HIGH|ALLOC_HARDER
OK, that adds a new field to struct page. I had suggested a page flag in
SLUB before.
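As a toy sketch of the two options side by side (invented names only,
toy_page / PG_reserve / PageReserve; this is not the real struct page):

#include <stdbool.h>
#include <stdio.h>

struct toy_page {
        unsigned long flags;    /* flag variant: costs one bit      */
        bool reserve;           /* field variant: grows struct page */
};

#define PG_reserve 20           /* hypothetical bit number */

static void SetPageReserve(struct toy_page *page)
{
        page->flags |= 1UL << PG_reserve;
}

static bool PageReserve(const struct toy_page *page)
{
        return page->flags & (1UL << PG_reserve);
}

int main(void)
{
        struct toy_page page = { 0 };

        SetPageReserve(&page);  /* as the page allocator would after an
                                   ALLOC_NO_WATERMARKS success */
        printf("reserve: %d\n", PageReserve(&page));
        return 0;
}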
> - when a page->reserve slab is allocated store it in s->reserve_slab
> and do not update the ->cpu_slab[] (this forces subsequent allocs to
> retry the allocation).
Right, that should work.
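Something like this, I take it (a simplified toy model of the slow path;
toy_page, toy_kmem_cache, new_slab and friends are made up for the
illustration, this is not the actual SLUB code):

#include <stdbool.h>
#include <stdlib.h>

struct toy_page {
        bool reserve;                   /* marked by the page allocator  */
        int free_objects;
};

struct toy_kmem_cache {
        struct toy_page *cpu_slab;      /* really per cpu in SLUB        */
        struct toy_page *reserve_slab;  /* only for no-watermark callers */
};

struct toy_page *new_slab(bool below_watermarks)
{
        struct toy_page *page = calloc(1, sizeof(*page));

        if (page) {
                page->reserve = below_watermarks;
                page->free_objects = 8;
        }
        return page;
}

/* Called when the cpu slab is exhausted and a fresh page is needed. */
struct toy_page *grow_slab(struct toy_kmem_cache *s, bool below_watermarks)
{
        struct toy_page *page = new_slab(below_watermarks);

        if (!page)
                return NULL;

        if (page->reserve)
                /*
                 * Park it in ->reserve_slab instead of installing it as
                 * the cpu slab, so ordinary allocations keep retrying
                 * the page allocator rather than eating the reserve.
                 */
                s->reserve_slab = page;
        else
                s->cpu_slab = page;

        return page;
}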
> All ALLOC_NO_WATERMARKS enabled slab allocations are served from
> ->reserve_slab, up until the point where a !page->reserve slab alloc
> succeeds, at which point the ->reserve_slab is pushed into the partial
> lists and ->reserve_slab set to NULL.
So the original issue is still not fixed. A slab allocation may succeed
without ignoring the watermarks because that particular allocation happens
to be restricted to a different set of nodes. The reserve slab is then
dropped despite the memory scarcity on the other set of nodes?
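To make that concrete, a contrived example with two nodes and the proposed
cache-wide reserve state (all names here are made up for the illustration):

#include <stdbool.h>
#include <stdio.h>

#define NR_NODES 2

static int free_pages[NR_NODES] = { 0, 100 };   /* node 0 is exhausted */

struct toy_cache {
        bool have_reserve_slab;         /* cache-wide, not per node */
};

/* A !page->reserve slab allocation restricted to @node. */
static bool alloc_slab_from(struct toy_cache *s, int node)
{
        if (free_pages[node] == 0)
                return false;

        free_pages[node]--;
        /*
         * Per the proposal, any successful !page->reserve allocation
         * means "pressure is over", so the reserve slab is given up
         * even though this success came from a node that was never
         * short of memory.
         */
        s->have_reserve_slab = false;
        return true;
}

int main(void)
{
        struct toy_cache cache = { .have_reserve_slab = true };

        alloc_slab_from(&cache, 1);     /* cpuset restricted to node 1 */
        printf("reserve kept: %d, node 0 free pages: %d\n",
               cache.have_reserve_slab, free_pages[0]);
        return 0;
}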
> Since only the allocation of a new slab uses the gfp zone flags, and
> other allocations placement hints they have to be uniform over all slab
> allocs for a given kmem_cache. Thus the s->reserve_slab/page->reserve
> status is kmem_cache wide.
No, the gfp zone flags are not uniform, and page allocator allocations done
through SLUB do not always have the same placement constraints. When the
page allocator returns, SLUB checks which node the page was allocated on
and puts the page on that node's slab list. This varies depending on the
allocation context.
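Roughly, the placement side looks like this (toy types standing in for the
real SLUB structures; add_partial here is just a sketch of the idea):

#define MAX_NUMNODES 4

struct toy_page {
        int nid;                        /* node the page allocator picked */
        struct toy_page *next;          /* linkage on the partial list    */
};

struct toy_cache_node {
        struct toy_page *partial;       /* per-node list of partial slabs */
};

struct toy_cache {
        struct toy_cache_node node[MAX_NUMNODES];
};

/*
 * The page allocator chose the node according to the gfp flags and
 * placement hints of this particular allocation; the slab is filed
 * under whatever node actually came back.
 */
void add_partial(struct toy_cache *s, struct toy_page *page)
{
        struct toy_cache_node *n = &s->node[page->nid];

        page->next = n->partial;
        n->partial = page;
}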
Allocation constraints can be specific to particular uses of a slab in
particular situations. A kmalloc cache can be used to allocate from various
sets of nodes in different circumstances. kmalloc will allow a limited
number of objects to be served from the wrong node for performance reasons,
but the next allocation from the page allocator (or from the partial lists)
will use the current set of allowed nodes in order to ensure rough
obedience to the memory policies and cpusets. kmalloc_node behaves
differently and will enforce using memory from a particular node.
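A toy illustration of that difference (the types and the helper are
invented; only the shape of the decision matters, not real SLUB internals):

#include <stdbool.h>

#define ANY_NODE (-1)

struct toy_cpu_slab {
        int nid;                /* node of the slab currently in use */
        int free_objects;       /* objects left in that slab         */
};

/*
 * kmalloc passes ANY_NODE: whatever objects are still left in the cpu
 * slab are handed out even if that slab sits on the "wrong" node, so
 * the number of off-node objects is bounded by the slab size. Only the
 * next slab, taken from the page allocator or the partial lists,
 * follows the current memory policy / cpuset. kmalloc_node passes a
 * specific node and the cpu slab is refused if it does not match.
 */
bool may_use_cpu_slab(struct toy_cpu_slab *c, int requested_node)
{
        if (c->free_objects == 0)
                return false;   /* refill, honouring the allowed nodes */

        if (requested_node == ANY_NODE)
                return true;    /* kmalloc: use what is hot, even off-node */

        return c->nid == requested_node;        /* kmalloc_node: strict */
}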
SLAB is very strict in that area and will not serve objects from the wrong
node even for plain kmalloc. Changing the policy will immediately change
the per-node queue that SLAB takes its objects from.