On Wed, 2007-05-16 at 13:44 -0700, Christoph Lameter wrote:
> On Wed, 16 May 2007, Peter Zijlstra wrote:
>
> > > How does all of this interact with
> > >
> > > 1. cpusets
> > >
> > > 2. dma allocations and highmem?
> > >
> > > 3. Containers?
> >
> > Much like the normal kmem_cache would do; I'm not changing any of the
> > page allocation semantics.
>
> So if we run out of memory on a cpuset then network I/O will still fail?
>
> I do not see any distinction between DMA and regular memory. If we need
> DMA memory to complete the transaction then this won't work?

If networking relies on slabs that are cpuset-constrained and the page
allocator reserves do not match that constraint, then yes, it goes bang.

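(Purely to illustrate that failure mode, not kernel code: a reserve only
helps if at least one node the cpuset allows actually holds reserve pages.
Every name below is made up for the example.)

	#include <stdbool.h>
	#include <stdio.h>

	/* Bit n set means node n is usable; the type is invented for the example. */
	typedef unsigned int nodemask;

	/* The reserve is only reachable if the allowed and reserve masks intersect. */
	static bool reserve_reachable(nodemask cpuset_allowed, nodemask reserve_nodes)
	{
		return (cpuset_allowed & reserve_nodes) != 0;
	}

	int main(void)
	{
		nodemask cpuset  = 0x1;		/* cpuset confined to node 0 */
		nodemask reserve = 0x2;		/* reserve pages only on node 1 */

		if (!reserve_reachable(cpuset, reserve))
			printf("reserve not reachable from this cpuset -> it goes bang\n");
		return 0;
	}
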
> > But it's wanted to try the normal cpu_slab path first to detect that the
> > situation has subsided and we can resume normal operation.
>
> Is there some indicator somewhere that indicates that we are in trouble? I
> just see the ranks.

Yes, and page->rank will only ever be 0 if the page was allocated with
ALLOC_NO_WATERMARKS, and that only ever happens if we're in dire
straits and entitled to it.

Otherwise it'll be ALLOC_WMARK_MIN or somesuch.
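
To spell the indicator out, here is a self-contained model of that rule (not
the patch itself; the constants and struct page are reduced stand-ins for the
real ones): rank 0 marks a page handed out from the reserves, anything else
means the normal path worked and we can drop back to normal operation.

	#include <assert.h>

	/* Stand-ins only: the real ALLOC_* ranks and struct page live in the patch. */
	enum alloc_rank {
		ALLOC_NO_WATERMARKS = 0,	/* page came out of the reserves */
		ALLOC_WMARK_MIN     = 1,	/* normal watermark-obeying allocation */
	};

	struct page {
		int rank;			/* rank recorded at allocation time */
	};

	/* rank == 0 is the "we are still in trouble" signal described above. */
	static int page_from_reserves(const struct page *page)
	{
		return page->rank == ALLOC_NO_WATERMARKS;
	}

	int main(void)
	{
		struct page emergency = { .rank = ALLOC_NO_WATERMARKS };
		struct page normal    = { .rank = ALLOC_WMARK_MIN };

		assert(page_from_reserves(&emergency));	/* reserves in use, stay careful */
		assert(!page_from_reserves(&normal));	/* watermarks met, back to normal */
		return 0;
	}
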