Re: RFT: updatedb "morning after" problem [was: Re: -mm merge plans for 2.6.23]

On Sat, 28 Jul 2007, Rene Herman wrote:

> On 07/27/2007 09:43 PM, [email protected] wrote:
>
> > On Fri, 27 Jul 2007, Rene Herman wrote:
> >
> > > On 07/27/2007 07:45 PM, Daniel Hazelton wrote:
> > >
> > > > Questions about it:
> > > >   Q) Does swap-prefetch help with this?
> > > >   A) [From all reports I've seen (*)]
> > > > Yes, it does.
> > >
> > > No it does not. If updatedb filled memory to the point of causing
> > > swapping (which noone is reproducing anyway) it HAS FILLED MEMORY and
> > > swap-prefetch hasn't any memory to prefetch into -- updatedb itself
> > > doesn't use any significant memory.
> >
> > however there are other programs which are known to take up significant
> > amounts of memory and will cause the issue being described (openoffice for
> > example)
> >
> > please don't get hung up on the text 'updatedb' and accept that there are
> > programs that do run intermittently and do use a significant amount of ram
> > and then free it.
>
> Different issue. One that's worth pursuing perhaps, but a different issue
> from the VFS caches issue that people have been trying to track down.

people are trying to track down the problem of their machine being slow until enough data is swapped back in to operate normally.

in some situations swap prefetch can help, because something that used memory has freed it, so there is free memory that could be filled with data (which is something Linux does aggressively in most other situations).

in some other situations swap prefetch cannot help, because useless data is getting cached at the expense of useful data.

nobody is arguing that swap prefetch helps in the second case.

what people are arguing is that there are situations where it helps in the first case. on some machines and versions of updatedb the nightly run of updatedb can cause both sets of problems, but the nightly updatedb run is not the only thing that can cause problems.


but let's talk about the concept here for a little bit

the design is to use CPU and I/O capacity that's otherwise idle to fill free memory with data from swap.
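to make that concrete, here is a rough userspace sketch of the same decision. this is NOT the actual swap-prefetch patch -- the thresholds, the 60 second poll and the madvise(MADV_WILLNEED) hint are just stand-ins I made up for the in-kernel logic:

/*
 * NOT the swap-prefetch patch -- just a userspace approximation of the
 * idea: when the box looks idle and there is free memory, ask the
 * kernel to fault a (possibly swapped-out) region back in.  All the
 * thresholds and the region itself are invented for illustration.
 */
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/sysinfo.h>

static int system_is_idle(void)
{
	double load[1];
	struct sysinfo si;

	if (getloadavg(load, 1) != 1 || sysinfo(&si) != 0)
		return 0;

	/* "idle enough": low load average and >64MB free (arbitrary numbers) */
	return load[0] < 0.5 &&
	       (si.freeram * (unsigned long)si.mem_unit) > (64UL << 20);
}

int main(void)
{
	/* stand-in for anonymous memory that was pushed out to swap earlier */
	size_t len = 32UL << 20;
	void *region = mmap(NULL, len, PROT_READ | PROT_WRITE,
			    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (region == MAP_FAILED)
		return 1;

	for (;;) {
		if (system_is_idle())
			/* hint: read these pages back in while nothing else wants the disk */
			madvise(region, len, MADV_WILLNEED);
		sleep(60);
	}
}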

pro:
  more ram has potentially useful data in it

con:
  it takes a little extra effort to give this memory to another app (the page must be removed from the list and zeroed at the time it's needed; I assume the data is left in swap so that it doesn't have to be written out again -- see the toy model after this list)

  it adds some complexity to the kernel (~500 lines IIRC from this thread)

  by undoing recent swapouts it can potentially mask problems with swapout
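the first con is cheaper than it might sound. a toy model of the hand-back (plain userspace C with invented names and structures, nothing from the actual patch, and built on the assumption above that the swap copy is kept) is just: unlink the page, zero it, touch no disk.

/* toy model only -- invented structures, not kernel code */
#include <stdlib.h>
#include <string.h>

#define TOY_PAGE_SIZE 4096

struct prefetched_page {
	struct prefetched_page *next;
	unsigned long swap_slot;           /* where the on-disk copy still lives */
	unsigned char data[TOY_PAGE_SIZE];
};

static struct prefetched_page *prefetch_list;

/* another app wants a page: unlink it, zero it, do no I/O at all */
static struct prefetched_page *take_prefetched_page(void)
{
	struct prefetched_page *page = prefetch_list;

	if (!page)
		return NULL;                       /* nothing prefetched, allocate normally */

	prefetch_list = page->next;                /* remove it from the list */
	memset(page->data, 0, sizeof(page->data)); /* zero it before handing it over */
	/* page->swap_slot is deliberately untouched: the copy in swap stays
	 * valid, so nothing has to be written out -- at worst it gets read
	 * back in again later. */
	return page;
}

int main(void)
{
	struct prefetched_page *p = calloc(1, sizeof(*p));

	if (!p)
		return 1;
	p->swap_slot = 42;                         /* pretend this page came from swap */
	prefetch_list = p;

	free(take_prefetched_page());              /* the "give it to another app" step */
	return 0;
}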

it looks to me like, unless the code is really bad (and after 23 months in -mm it doesn't sound like it is), the only significant con left is the potential to mask other problems.

however there are many legitimate cases where it is definitely doing the right thing (swapout was correct in pushing out the pages, but now the cause of that pressure is gone). the amount of benefit from this will vary from situation to situation, but it's not reasonable to claim that this provides no benefit (you have benchmark numbers that show it in synthetic benchmarks, and you have user reports that show it in the real world).

there are lots of things in the kernel whose job is to pre-fill memory with data that may (or may not) be useful in the future. this is just another method of filling the cache. it does so by saying "the user wanted these pages in the recent past, so it's a reasonable guess to say that the user will want them again in the future".
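file readahead is probably the most familiar of those existing mechanisms, and you can poke it by hand: posix_fadvise(POSIX_FADV_WILLNEED) asks the kernel to pull a file's pages into the page cache before anyone has read them, on exactly the same kind of guess. a trivial example (the program and how you'd invoke it are just for illustration):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	int fd, err;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}

	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* ask the kernel to read the whole file into the page cache now,
	 * on the guess that it will be wanted soon (offset 0, len 0 = whole file) */
	err = posix_fadvise(fd, 0, 0, POSIX_FADV_WILLNEED);
	if (err != 0)
		fprintf(stderr, "posix_fadvise: %s\n", strerror(err));

	close(fd);
	return 0;
}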

David Lang
