Re: [Lhms-devel] [PATCH 0/7] Fragmentation Avoidance V19

* Dave Hansen <[email protected]> wrote:

> > can you always, under any circumstance hot unplug RAM with these patches 
> > applied? If not, do you have any expectation to reach 100%?
> 
> With these patches, no.  There are currently some very nice, 
> pathological workloads which will still cause fragmentation.  But, in 
> the interest of incremental feature introduction, I think they're a 
> fine first step.  We can effectively reach toward a more comprehensive 
> solution on top of these patches.
> 
> Reaching truly 100% will require some other changes such as being able 
> to virtually remap things like kernel text.

then we need to see that 100% solution first - at least in terms of 
conceptual steps. Not being able to hot-unplug RAM in a 100% way won't 
satisfy customers. Whatever solution we choose, it must work 100%. Just 
to give a comparison: would you be content with your computer failing to 
start up apps 1 time out of 100, on the grounds that 99% is good enough? 
Or would you call it what it is: buggy and unreliable?

to stress it: hot unplug is a _feature_ that must work 100%, _not_ an 
optimization where 99% is good enough. This is a feature people will be 
depending on if we promise it, and a 1% failure rate is not acceptable. 
Your 'pathological workload' might be customer X's daily workload. 
Unless there is a clear definition of what is possible and what is not 
(a definition users can rely on), a 99% solution is much worse than the 
current 0% solution!

worse than that, this is a known _hard_ problem to solve in a 100% way, 
and saying 'this patch is a good first step' just lures us (and 
customers) into believing that we are only 1% away from the desired 100% 
solution, while nothing could be further from the truth. They will 
demand the remaining 1%, but can we offer it? Unless you can provide a 
clear, agreed-upon path towards the 100% solution, we have nothing 
right now.

I have no problem with using higher-order pages for performance 
purposes [*], as long as 'failed' allocation (and freeing) actions are 
user-invisible. But the moment you make them user-visible, they _must_ 
work in a deterministic way!

	Ingo

[*] in which case any slowdown in the page allocator must be offset by
    the gains.
