Re: large files unnecessarily trashing filesystem cache?

Quoting Andrew Morton <[email protected]>:

> An obvious approach would be an LD_PRELOAD thingy which modifies read() and
> write(), perhaps controlled via an environment variable.  AFAIK nobody has
> even attempted this.

Sounds interesting.
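
Something like this minimal sketch, maybe (glibc assumed; only read()
is wrapped here, and a write() wrapper would also want an fdatasync()
first, since POSIX_FADV_DONTNEED only drops clean pages):

/* Build:  gcc -shared -fPIC -o dropbehind.so dropbehind.c -ldl
 * Run:    LD_PRELOAD=./dropbehind.so my-fave-backup-program <args>
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <unistd.h>

static ssize_t (*real_read)(int, void *, size_t);

ssize_t read(int fd, void *buf, size_t count)
{
	ssize_t n;

	if (!real_read)
		real_read = (ssize_t (*)(int, void *, size_t))
				dlsym(RTLD_NEXT, "read");

	n = real_read(fd, buf, count);
	if (n > 0) {
		off_t pos = lseek(fd, 0, SEEK_CUR);

		/* Drop everything already consumed; fails harmlessly
		 * on pipes and sockets, where lseek() returns -1. */
		if (pos != (off_t)-1)
			posix_fadvise(fd, 0, pos, POSIX_FADV_DONTNEED);
	}
	return n;
}

An environment variable could then gate this per process, as you suggest.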

> A decent kernel implementation would be to add a max_resident_pages to
> struct file_struct and to use that to perform drop-behind within read() and
> write().  That's a bit of arithmetic and a call to
> invalidate_mapping_pages().  The userspace interface to that could be a
> linux-specific extension to posix_fadvise() or to fcntl().

I would still like a way to configure a "default file policy/heuristic"
system-wide, just as I can choose an I/O scheduler.
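
To illustrate the kernel side of what you describe, a rough sketch
(max_resident_pages is the proposed new field, not something struct
file carries today, and where exactly the hook lives is my guess):

#include <linux/fs.h>
#include <linux/pagemap.h>

/* Call at the end of read()/write(), once f_pos has advanced. */
static void drop_behind(struct file *file)
{
	pgoff_t cur = file->f_pos >> PAGE_CACHE_SHIFT;

	if (!file->max_resident_pages || cur <= file->max_resident_pages)
		return;

	/* Invalidate everything more than max_resident_pages behind
	 * the current position.  invalidate_mapping_pages() only
	 * drops clean, unmapped pages, so this stays best-effort. */
	invalidate_mapping_pages(file->f_mapping, 0,
				 cur - file->max_resident_pages - 1);
}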

> 
> But that still requires that all the applications be modified.
> 
> So I'd also suggest a new resource limit which, if set, is copied into the
> application's file_structs on open().  So you then write a little wrapper
> app which does setrlimit()+exec():
> 
> 	limit-cache-usage -s 1000 my-fave-backup-program <args>
> 
> Which will cause every file which my-fave-backup-program reads or writes to
> be limited to a maximum pagecache residency of 1000 kbytes.

Or make it another 'ulimit' parameter...
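
Either way the wrapper itself would be trivial; something like this,
where RLIMIT_PAGECACHE stands in for whatever number the new limit
would actually get (option parsing omitted):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/resource.h>

#ifndef RLIMIT_PAGECACHE
#define RLIMIT_PAGECACHE 16	/* placeholder: proposed new rlimit */
#endif

int main(int argc, char **argv)
{
	struct rlimit rl;

	if (argc < 3) {
		fprintf(stderr, "usage: %s <kbytes> <prog> [args...]\n",
			argv[0]);
		return 1;
	}

	/* The limit is inherited across exec(), so every file the
	 * child opens gets its pagecache residency capped. */
	rl.rlim_cur = rl.rlim_max = atol(argv[1]);
	if (setrlimit(RLIMIT_PAGECACHE, &rl) < 0)
		perror("setrlimit");	/* fails on today's kernels */

	execvp(argv[2], argv + 2);
	perror("execvp");
	return 127;
}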
