Re: [PATCH 00/34] mm: Page Replacement Policy Framework

Linus Torvalds wrote:

> On Thu, 23 Mar 2006, Marcelo Tosatti wrote:
>> IMHO the page replacement framework intent is wider than fixing the
>> currently known performance problems.
>>
>> It allows easier implementation of new algorithms, which are being
>> invented/adapted over time as necessity appears.
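(For concreteness: "easier implementation" presumably means a policy
becomes a bundle of hooks the VM calls.  A minimal sketch of what such
an interface could look like - the struct and hook names below are made
up for illustration, not the actual API from the patch series:

    struct page;
    struct zone;

    struct replacement_policy {
        const char *name;
        /* a page enters the set this policy manages */
        void (*page_insert)(struct zone *zone, struct page *page);
        /* a page was referenced; update the policy's bookkeeping */
        void (*page_touch)(struct zone *zone, struct page *page);
        /* pick victims: reclaim up to nr_to_scan pages under pressure */
        int  (*shrink)(struct zone *zone, int nr_to_scan);
    };

    /* a new algorithm then only has to provide one of these */
    static struct replacement_policy clockpro_policy = {
        .name = "clockpro",
        /* .page_insert = clockpro_insert, and so on */
    };

Swapping algorithms becomes a matter of pointing the VM at a different
struct instead of patching the reclaim code itself.)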

> Yes and no.
>
> It smells wonderful from a pluggable page replacement standpoint, but
> here's a couple of observations/questions:
>
>  a) the current one actually seems to have beaten the newcomers
>     (except for loads that were actually made up to try to defeat LRU)
>
>  b) is page replacement actually a huge issue?
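The loads "made up to defeat LRU" in (a) are easy to picture: a cyclic
scan over just one page more than the cache holds makes strict LRU miss
on every single access.  A userspace toy (not kernel code) that shows
the effect:

    #include <stdio.h>
    #include <string.h>

    #define CACHE_SIZE  4
    #define WORKING_SET (CACHE_SIZE + 1)    /* one page too many */

    int main(void)
    {
        int cache[CACHE_SIZE];              /* slot 0 = least recent */
        int used = 0, hits = 0, misses = 0;

        for (int round = 0; round < 3; round++) {
            for (int page = 0; page < WORKING_SET; page++) {
                int i, found = -1;

                for (i = 0; i < used; i++)
                    if (cache[i] == page)
                        found = i;

                if (found >= 0) {
                    /* hit: move the page to the most-recent end */
                    int p = cache[found];
                    memmove(&cache[found], &cache[found + 1],
                            (used - found - 1) * sizeof(int));
                    cache[used - 1] = p;
                    hits++;
                } else {
                    /* miss: evict the least recently used slot */
                    if (used == CACHE_SIZE) {
                        memmove(&cache[0], &cache[1],
                                (CACHE_SIZE - 1) * sizeof(int));
                        used--;
                    }
                    cache[used++] = page;
                    misses++;
                }
            }
        }
        printf("hits=%d misses=%d\n", hits, misses);    /* hits=0 */
        return 0;
    }

LRU always evicts exactly the page the scan will want next, so the hit
count stays at zero - which is also why such loads say little about
real workloads.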

> Now, the reason I ask about (b) is that these days you buy a Mac Mini
> and it comes with half a gig of RAM, and some Apple users seem to worry
> that the UMA graphics carving 50MB or so out of that is a problem.
>
> IOW, just under half a _gigabyte_ of RAM is apparently considered to be
> low end, and this is when talking about low-end (modern) hardware!
>
> And don't tell me that the high-end people care, because both databases
> (high-end commercial) and video/graphics editing (high-end desktop) very
> much do _not_ care, since they tend to try to do their own memory
> management anyway.

>> One example (which I mentioned several times) is power saving:
>>
>> PB-LRU: A Self-Tuning Power Aware Storage Cache Replacement Algorithm
>> for Conserving Disk Energy.
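(My reading of that paper's idea - a sketch under that assumption, not
the authors' implementation: split one shared storage cache into
per-disk partitions, each managed by plain LRU, and periodically
re-pick the partition sizes so that total estimated disk energy is
minimized and lightly used disks can stay spun down longer.  A toy
version of the sizing step, where estimated_energy() stands in for the
paper's online miss-rate tracking plus disk power model:

    #include <stdio.h>

    #define NDISKS 3        /* hardcoded to match the loops below */
    #define NCAND  4        /* candidate partition sizes per disk */
    #define TOTAL  1024     /* total cache blocks to divide up */

    static const unsigned long cand[NCAND] = { 128, 256, 512, 768 };

    /* assumed stand-in: predicted energy for disk d given s blocks */
    static double estimated_energy(int d, unsigned long s)
    {
        /* toy model: more cache, fewer spin-ups; later disks
         * benefit more from extra cache than earlier ones */
        return (d + 1) * 1000.0 / (double)s;
    }

    int main(void)
    {
        int best[NDISKS] = { 0 }, idx[NDISKS];
        double best_e = -1.0;

        /* brute force over all candidate assignments; the paper
         * solves this step with a proper optimization instead */
        for (idx[0] = 0; idx[0] < NCAND; idx[0]++)
        for (idx[1] = 0; idx[1] < NCAND; idx[1]++)
        for (idx[2] = 0; idx[2] < NCAND; idx[2]++) {
            unsigned long sum = 0;
            double e = 0.0;

            for (int d = 0; d < NDISKS; d++) {
                sum += cand[idx[d]];
                e += estimated_energy(d, cand[idx[d]]);
            }
            if (sum <= TOTAL && (best_e < 0 || e < best_e)) {
                best_e = e;
                for (int d = 0; d < NDISKS; d++)
                    best[d] = idx[d];
            }
        }

        for (int d = 0; d < NDISKS; d++)
            printf("disk %d: %lu blocks\n", d, cand[best[d]]);
        return 0;
    }

The point being: an algorithm like that optimizes for something the
current reclaim code has no notion of at all, which is exactly where a
pluggable policy would earn its keep.)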

> Please name a load that really actually hits the page replacement today.
> Any load where things go into swap?
Sure - I have 512MB in my machines, and I still tend to get 20-50MB in
swap after a while.  And now and then I wait for stuff to swap in again.

I don't claim the current system is bad, and a more gradual approach
may very well be the better way.  But if someone can demonstrate an
improvement, then I'm for it.  Getting an improved selection for the
50MB in swap will be noticeable at times.
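For a sense of scale (back-of-envelope, assuming 4K pages and a
2006-ish disk that needs roughly 8ms per random read):

    #include <stdio.h>

    int main(void)
    {
        long pages = 50L * 1024 * 1024 / 4096;  /* 50MB of swap */

        /* worst case: every page faults in with its own seek */
        printf("%ld pages, ~%.0f seconds\n", pages, pages * 8e-3);
        return 0;   /* prints: 12800 pages, ~102 seconds */
    }

Which pages land in that 50MB, and in what order they come back,
decides whether that is an unnoticeable pause or a coffee break.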

Remember, people compensate for the bigger memory with bigger apps.
Linux should run those apps well. ;-)

> It smells like university research to me.
>
> And don't flame me: I'm perfectly happy to be shown to be wrong. I just
> get a very strong feeling that the people who care about tight memory
> conditions and perhaps about page replacement are the same people who
> think that our kernel is too big - the embedded people.
Well, how about desktop users?  Snappiness is a nice thing.

The "enough memory" argument can be turned around, too: if the power
users don't care because they have the memory, then they shouldn't
worry about someone changing the replacement algorithms. :-)

> What I'm trying to say is that page replacement hasn't been what seems
> to have worried people over the last year or two. We had some ugly
> problems in the early 2.4.x timeframe, and I'll claim that most (but
> not all) of those were related to highmem/zoning issues, which we have
> largely solved. That was about page replacement, but really a very
> specific issue within that area.
>
> So seriously, I suspect Andrew's "Holy cow" comes from the fact that
> he is more worried about VM maintainability and stability than page
> replacement. I certainly am.
Sure, the incremental approach is good, and this replaceable system may
be a thing for interested VM developers.  But there is definitely
interest if they can show an improvement.

Helge Hafting

