Re: [patch 3/3] cpusets: add memory_spread_user option

On Fri, 26 Oct 2007, Paul Jackson wrote:

> > Will it handle the case of MPOL_INTERLEAVE policy on a shm segment that
> > is mapped by tasks in different, possibly disjoint, cpusets.  Local
> > allocation does, and my patch does.  That was one of the primary
> > goals--to address an issue that Christoph has with shared policies.
> > cpusets really muck these up!
> 
> It probably won't handle that.  I don't get along too well with shmem.

IMHO shmem policy support is pretty much messed up (it seems that we 
introduced new races by trying to fix the refcounts). I tend to ignore that 
code unless it impacts regular shared memory or regular memory. Before we do 
any of this fancy stuff, let's at least get the refcount handling right.

> Can you explain to an anti-shmem bigot how MPOL_INTERLEAVE should work
> with shmem segments mapped in diverse ways by different tasks in different
> cpusets?  What would be the key attribute(s) of a proper solution?
> Maybe if we keep it simple enough, I can avoid mucking it up too much
> this time around.

With relative nodemasks we could have an MPOL_INTERLEAVE policy that works in 
multiple cpuset contexts. If nodes 0 and 1 are set in the nodemask, then 
allocation interleaves over the first two nodes of the current cpuset. Nodes 
that do not exist are ignored, so if there is no second node, MPOL_INTERLEAVE 
effectively becomes a no-op.
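
To make the remapping concrete, here is a minimal user-space sketch (not
actual kernel code; the function name, the bitmask representation and the
64-node limit are assumptions for illustration only) of how relative
interleave bits could be folded onto whatever nodes the current cpuset
allows:

/*
 * Hypothetical user-space sketch, NOT kernel code: remap a "relative"
 * interleave nodemask onto the nodes the current cpuset actually allows.
 * Relative bit N means "the Nth allowed node of my cpuset"; bits beyond
 * the cpuset's size are dropped, so a one-node cpuset degenerates to
 * local allocation.
 */
#include <stdio.h>

#define MAX_NODES 64

static unsigned long long remap_relative(unsigned long long relative_mask,
					 unsigned long long cpuset_mask)
{
	unsigned long long effective = 0;
	int rel = 0;			/* index into the relative mask */
	int node;

	for (node = 0; node < MAX_NODES; node++) {
		if (!(cpuset_mask & (1ULL << node)))
			continue;	/* node not in this cpuset */
		if (relative_mask & (1ULL << rel))
			effective |= 1ULL << node;
		rel++;
	}
	return effective;
}

int main(void)
{
	/* Relative nodes 0 and 1; cpuset allows physical nodes 4 and 7. */
	printf("%#llx\n", remap_relative(0x3, (1ULL << 4) | (1ULL << 7)));
	/* Same policy in a one-node cpuset: only node 2 remains. */
	printf("%#llx\n", remap_relative(0x3, 1ULL << 2));
	return 0;
}

The first call prints 0x90 (interleave over physical nodes 4 and 7); the
second prints 0x4, i.e. with only one node left the "interleave" is
effectively local allocation, matching the no-op behaviour described above.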
