Re: Avoid allocating during interleave from almost full nodes

On Mon, 6 Nov 2006, Andrew Morton wrote:

> I'm referring to the metadata rather than to the pages themselves: the zone
> structure at least.  I bet there are a couple of cache misses in there.

Yes, particularly on large systems.

> > The number 
> > of pages to take will vary depending on the size of the shared data. For 
> > shared data areas that are just a couple of pages this won't work.
> 
> What is "shared data"?

Interleave is used for data accessed from many nodes; otherwise one would 
prefer to allocate from the local node. The shared data may be accessed 
very frequently from multiple nodes, and one would like different NUMA 
nodes to serve those accesses.
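
To make the interleave idea concrete, here is a minimal userspace sketch
(not the kernel's mempolicy code; NR_NODES and the per-task counter are
made up for illustration) of spreading successive shared pages round-robin
across nodes so no single node becomes a hotspot:

/*
 * Minimal sketch of interleaved allocation: each allocation of a
 * shared page is assigned to the next node in round-robin order,
 * so accesses from many nodes are served by memory spread across
 * all nodes instead of concentrating on one.
 */
#include <stdio.h>

#define NR_NODES 4

static int next_node;	/* hypothetical per-task interleave counter */

static int interleave_node(void)
{
	int node = next_node;

	next_node = (next_node + 1) % NR_NODES;
	return node;
}

int main(void)
{
	/* Show where eight successive shared pages would land. */
	for (int i = 0; i < 8; i++)
		printf("page %d -> node %d\n", i, interleave_node());
	return 0;
}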

> > > Umm, but that's exactly what the patch we're discussing will do.
> > Not if we have a set of remaining nodes.
> 
> Yes it is.  You're proposing taking an arbitrarily large number of
> successive pages from the same node rather than interleaving the allocations.
> That will create "hotspots or imbalances" (whatever they are).

No, I proposed going round robin over the remaining nodes. The special 
case of only one node remaining could be dealt with separately.
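
As a rough illustration of that proposal, here is a minimal userspace
sketch (again not kernel code; the node count and the almost-full flags
are invented for illustration) of continuing round robin over the
remaining nodes while skipping the almost-full ones:

/*
 * Sketch of the proposal being discussed: keep interleaving round
 * robin, but skip nodes considered almost full, so allocation
 * rotates over the remaining nodes rather than taking many
 * successive pages from one node.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_NODES 4

static bool node_almost_full[NR_NODES] = { false, true, false, true };
static int next_node;

static int interleave_node_skip_full(void)
{
	int node;
	int tries = 0;

	do {
		node = next_node;
		next_node = (next_node + 1) % NR_NODES;
		tries++;
	} while (node_almost_full[node] && tries < NR_NODES);

	/*
	 * If every node looked almost full, return the last candidate
	 * rather than failing; the single-remaining-node special case
	 * mentioned above would be handled here.
	 */
	return node;
}

int main(void)
{
	for (int i = 0; i < 8; i++)
		printf("page %d -> node %d\n", i, interleave_node_skip_full());
	return 0;
}

With nodes 1 and 3 marked almost full, the allocations alternate over the
remaining nodes 0 and 2 instead of piling onto a single node.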
