Hi,

On Wednesday 19 October 2005 13:10, [email protected] wrote:
> Quoting Andrew Morton <[email protected]>:
> > So I'd also suggest a new resource limit which, if set, is copied into the
> > application's file_structs on open(). So you then write a little wrapper
> > app which does setrlimit()+exec():
> >
> > limit-cache-usage -s 1000 my-fave-backup-program <args>
> >
> > Which will cause every file which my-fave-backup-program reads or writes to
> > be limited to a maximum pagecache residency of 1000 kbytes.
>
> Or make it another 'ulimit' parameter...

Which is already there: there is a ulimit for "maximum RSS", which is at
least a superset of "maximum pagecache residency". It is already settable
and known to many admins. But AFAIR it is not fully honoured by the
kernel, right?

A per-file limit is a much better choice, though, since it would allow
concurrent streaming. That is needed to implement timeshifting, at
least[1].

So either I'm missing something, or this is not a proper solution yet.

Regards

Ingo Oeser

[1] Which is obviously done by some kind of on-disk FIFO.