Re: large files unnecessary trashing filesystem cache?

On Oct 19, 2005, at 13:58:37, Guido Fiala wrote:
> The kernel could do its best to optimize default performance; applications that know their own optimal behaviour should handle it themselves, and all other files are kept under the default heuristic policy (an adaptable, configurable one).
>
> The heuristic can be based on access statistics:
>
> streaming/sequential access can be guessed from getting exactly a 100% cache hit rate (drop the pages behind immediately),

What about a grep through my kernel sources, or some other linear search through a large directory tree? That would get exactly a 100% cache hit rate, which would cause your method to drop the pages immediately, meaning that subsequent greps are equally slow. I have enough memory to hold a couple of kernel trees, and I want my grepping to push OO.org out of RAM for a bit while I do my kernel development.
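
For illustration only (not part of the original messages): a minimal sketch of the application-side approach the quoted post alludes to, where a program that knows it is streaming a large file once uses the standard posix_fadvise(2) hints instead of relying on the kernel to guess from hit rates. Only the POSIX calls shown are assumed; the surrounding file handling is just scaffolding.

#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(int argc, char **argv)
{
	if (argc < 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}

	int fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Tell the kernel we will read sequentially (can enlarge readahead). */
	posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);

	char buf[1 << 16];
	off_t done = 0;
	ssize_t n;
	while ((n = read(fd, buf, sizeof buf)) > 0) {
		/* ... consume buf ... */
		done += n;
		/* Drop the pages we have already read so they do not push
		 * other users' working sets out of the page cache. */
		posix_fadvise(fd, 0, done, POSIX_FADV_DONTNEED);
	}

	close(fd);
	return 0;
}

With explicit hints like this, a grep over a source tree that gives no hint keeps its cached pages, while a one-pass streaming reader voluntarily gives its pages back.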


Cheers,
Kyle Moffett

--
I lost interest in "blade servers" when I found they didn't throw knives at people who weren't supposed to be in your machine room.
  -- Anthony de Boer


