On Wednesday 17 January 2007 15:36, Paul Jackson wrote:
> > With a per node dirty limit ...
>
> What would this mean?
>
> Let's say we have a simple machine with 4 nodes, cpusets disabled.
NUMA memory policy can be in use even without cpusets, for one thing.
> Let's say all tasks are allowed to use all nodes, no set_mempolicy
> either.
Ok.
> If a task happens to fill up 80% of one node with dirty pages, but
> we have no dirty pages yet on other nodes, and we have a dirty ratio
> of 40%, then do we throttle that task's writes?
Yes, we should. Every node should be able to supply memory
(barring extreme circumstances like mlock), and that much dirty
memory on one node makes that hard.
> I am surprised you are asking for this, Andi. I would have thought
> that on no-cpuset systems, the system wide throttling served your
> needs fine.
No. People are actually fairly unhappy when one node fills up with
file data and they no longer get local memory from it.
I get regular complaints about that on Opteron.
A per-node dirty limit wouldn't be a full solution, but it would be a good step.
> If not, then I can only guess that is because NUMA
> mempolicy constraints on allowed nodes are causing the same dirty page
> problems as cpuset constrained systems -- is that your concern?
That is another concern. I haven't checked recently, but it used
to be fairly easy to bring a system to its knees by oversubscribing
a single node with a strict memory policy. Fixing that would be good.
-Andi