Re: large files unnecessarily thrashing filesystem cache?

On Wednesday 19 October 2005 00:37, Andrew Morton wrote:
> Andrew James Wade <[email protected]> wrote:
> >
> > Sometimes you want a single file to take up most of the memory; databases
> >  spring to mind. Perhaps files/processes that take up a large proportion of
> >  memory should be penalized by preferentially reclaiming their pages, but
> >  limit the aggressiveness so that they can still take up most of the memory
> >  if sufficiently persistent (and the rest of the system isn't thrashing).
> 
> Yes.  Basically any smart heuristic we apply here will have failure modes. 
> For example, the person whose application does repeated linear reads of the
> first 100MB of a 4G file will get very upset.

As will any dumb heuristic, for that matter; we'd need precognition[1] to avoid
all of them. But we can hopefully make the failure modes rarer and more
predictable. I don't know how my proposal would fare, and as I do not have
the code to test the matter, I think I shall drop it.
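
Still, for concreteness, the idea was roughly the following. This is only a
userspace toy, not kernel code; the function name, the 25% threshold, and the
cap of 4 are all made up for illustration:

#include <stdio.h>

/* Returns an extra reclaim-pressure factor for a file that owns
 * `file_pages` of the `total_pages` currently in the page cache.
 * The penalty grows with the file's share of the cache but is capped,
 * so a sufficiently persistent reader can still end up owning most of
 * memory, as long as the rest of the system isn't thrashing. */
static unsigned int reclaim_penalty(unsigned long file_pages,
                                    unsigned long total_pages)
{
        unsigned long share;            /* per-mille share of the cache */
        unsigned int penalty;

        if (total_pages == 0)
                return 0;
        share = file_pages * 1000 / total_pages;
        if (share <= 250)               /* no penalty below a 25% share */
                return 0;
        penalty = (share - 250) / 150;  /* grows linearly above that */
        return penalty > 4 ? 4 : penalty;   /* capped aggressiveness */
}

int main(void)
{
        unsigned long total = 1000000;
        unsigned long sizes[] = { 100000, 300000, 600000, 950000 };
        int i;

        for (i = 0; i < 4; i++)
                printf("file holds %lu/%lu pages -> extra pressure %u\n",
                       sizes[i], total, reclaim_penalty(sizes[i], total));
        return 0;
}

The cap is the point of the exercise: it is what lets a single file still take
up most of memory if it is read persistently enough.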

[1] Which could, on occasion, be provided by hinting.
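
For example, an application that already knows its access pattern can say so
via posix_fadvise(2); the file name below is only illustrative:

#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        int fd = open("bigfile.dat", O_RDONLY);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        /* Tell the kernel we will read this file sequentially,
         * so readahead can be more aggressive. */
        posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);

        /* ... read the first 100 MB here ... */

        /* Once we are done with that region, say the cached pages may be
         * dropped instead of crowding out everything else. */
        posix_fadvise(fd, 0, 100UL << 20, POSIX_FADV_DONTNEED);

        close(fd);
        return 0;
}

POSIX_FADV_DONTNEED after a one-pass read is about as close to precognition
as an application can offer the VM today.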
