Re: large files unnecessarily trashing filesystem cache?

Bodo Eggert wrote:

> I guess the solution would be using random cache eviction rather than
> a FIFO. I never took a look at the cache mechanism, so I may very well
> be wrong here.
Instead of random cache eviction, you can make pages that were read in as part of a contiguous batch age faster than pages that were read in individually.

The motivation is that the cost of reading 64K is almost the same as reading 4K (most of the cost is the seek), while evicting 64K frees 16 times as much memory as evicting 4K. Over time, the kernel would come to favor keeping expensive random-access pages over cheap streaming pages.

In a way, this is already implemented for inodes, which are aged more slowly than data pages.


