Re: large files unnecessary trashing filesystem cache?

Hi,

On Wednesday 19 October 2005 13:10, [email protected] wrote:
> Zitat von Andrew Morton <[email protected]>:
> > So I'd also suggest a new resource limit which, if set, is copied into the
> > application's file_structs on open().  So you then write a little wrapper
> > app which does setrlimit()+exec():
> > 
> > 	limit-cache-usage -s 1000 my-fave-backup-program <args>
> > 
> > Which will cause every file which my-fave-backup-program reads or writes to
> > be limited to a maximum pagecache residency of 1000 kbytes.
> 
> Or make it another 'ulimit' parameter...

Which already exists: there is a ulimit for "maximum RSS" (RLIMIT_RSS),
which is at least a superset of "maximum pagecache residency".

This is already settable and known to many admins. But AFAIR it is not
fully honoured by the kernel, right?
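
For illustration, a minimal sketch of the setrlimit()+exec() wrapper idea
on top of the existing RLIMIT_RSS could look like the following. This is
only a sketch: as said above, the kernel does not fully enforce RLIMIT_RSS
today, and the wrapper name and kbyte argument are made up here, not
Andrew's proposed new per-file limit.

	/* rss-limit-exec.c: set RLIMIT_RSS, then exec the real program.
	 * Purely illustrative; RLIMIT_RSS is not fully honoured today. */
	#include <stdio.h>
	#include <stdlib.h>
	#include <sys/resource.h>
	#include <unistd.h>

	int main(int argc, char *argv[])
	{
		struct rlimit rl;

		if (argc < 3) {
			fprintf(stderr, "usage: %s <kbytes> <program> [args...]\n",
				argv[0]);
			return 1;
		}

		/* limit is given in kbytes, like the proposed "-s 1000" above */
		rl.rlim_cur = rl.rlim_max = strtoul(argv[1], NULL, 10) * 1024;
		if (setrlimit(RLIMIT_RSS, &rl) < 0) {
			perror("setrlimit");
			return 1;
		}

		execvp(argv[2], &argv[2]);
		perror("execvp");	/* only reached if exec failed */
		return 1;
	}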

But a per-file limit is a much better choice, since it would allow
concurrent streaming. That is needed at least to implement timeshifting[1].

So either I am missing something, or this is not a proper solution yet.
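
For comparison, the per-file cache control that exists today is
posix_fadvise(): a streaming application can drop pages it has already
consumed with POSIX_FADV_DONTNEED. A rough sketch follows (it assumes the
application itself can be modified, which is exactly what the wrapper
approach tries to avoid):

	#define _XOPEN_SOURCE 600	/* for posix_fadvise() */
	#include <fcntl.h>
	#include <unistd.h>

	/* Read a file sequentially and drop already-consumed pages from
	 * the page cache roughly every megabyte.  Illustrative only. */
	static void stream_and_drop(int fd)
	{
		char buf[64 * 1024];
		off_t total = 0, dropped = 0;
		ssize_t n;

		while ((n = read(fd, buf, sizeof(buf))) > 0) {
			/* ... hand buf off to the consumer here ... */
			total += n;
			if (total - dropped >= (1 << 20)) {
				/* tell the kernel we are done with
				 * everything read so far */
				posix_fadvise(fd, 0, total,
					      POSIX_FADV_DONTNEED);
				dropped = total;
			}
		}
	}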


Regards

Ingo Oeser

[1] Which is obviously done by some kind of on-disk FIFO.
