Large files unnecessarily trashing the filesystem cache?

(Please note: I'm not subscribed to the list, so please CC me on replies.)

Story:
Once in a while we have a discussion on the vdr (video disk recorder) mailing
list about very large files trashing the filesystem's memory cache, leading to
unnecessary delays when accessing directory contents that are no longer cached.

With this program, and certainly with all applications that read very large
files (much larger than the usual amount of memory) only once, all other
cached blocks of the filesystem end up being evicted from memory solely to
keep as much of that one file as possible in memory, which seems to be a bad
strategy in most situations.

Of course one could always add posix_fadvise() calls to all affected
applications, but I would like to suggest a discussion about whether a
(configurable) maximum in-memory cache on a per-file basis should be
implemented in linux/mm, or wherever this belongs.
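For illustration, the per-application workaround would look roughly like this
minimal user-space sketch (the file name and the 8 MB chunk size are just
placeholders, not anything vdr actually does):

#include <fcntl.h>
#include <unistd.h>
#include <stdlib.h>

#define CHUNK (8 * 1024 * 1024)	/* drop cached pages every 8 MB */

int main(void)
{
	char *buf = malloc(CHUNK);
	off_t done = 0;
	ssize_t n;
	int fd = open("/video/recording.ts", O_RDONLY);	/* placeholder path */

	if (fd < 0 || !buf)
		return 1;

	while ((n = read(fd, buf, CHUNK)) > 0) {
		/* ... process the chunk ... */
		done += n;
		/* tell the kernel the pages read so far will not be needed again */
		posix_fadvise(fd, 0, done, POSIX_FADV_DONTNEED);
	}

	close(fd);
	free(buf);
	return 0;
}

This keeps the one-shot file from pushing everything else out of the page
cache, but it has to be repeated in every application, which is exactly why a
per-file limit in the kernel seems worth discussing.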

My guess was that it has something to do with mm/readahead.c; as a quick and
dirty test I limited the result of the function max_sane_readahead() to
8 MBytes, but that did not solve the issue (although I might have done
something wrong).
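The quick and dirty test was roughly along these lines (only a sketch; the
exact function body differs between kernel versions, and the original
free-/inactive-page based limit is elided here):

/* mm/readahead.c -- test: never allow more than 8 MB of readahead */
unsigned long max_sane_readahead(unsigned long nr)
{
	unsigned long cap = (8 * 1024 * 1024) / PAGE_SIZE;	/* 8 MB in pages */

	/* ... original limit based on free and inactive pages kept as-is ... */
	return min(nr, cap);
}

Since this only caps readahead and does not drop pages that have already been
read, it is plausible that the cache still fills up, which may be why the test
showed no effect.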

I've searched the archive but could not find a previous discussion - is this a 
new idea?

It would be interesting to discuss whether and when this proposed feature
would lead to better performance, and whether it has any unwanted side effects.

Thanks for any ideas on this issue.
