Aucoin <[email protected]> wrote:
[...]
> The definition of "perfectly good" here may be up for debate, or
> someone can explain it to me. This perfectly good data was
> cached during the tar run, yet hours after the tar completed the
> pages are still cached.
That means one of two things: either there is no other need for that memory
(so the pages stay around; why actively spend resources deleting data when it
would be a win to keep it around for the, admittedly remote, case that it is
needed again?), or the whole memory handling in Linux is very broken. I'd
vote for the former, i.e., your problems have nothing to do with memory
pressure and swapping. That would also explain why your maneuvers made no
difference...
In any case, how do you know it is the tar data that stays around, and not
just that the number of pages "in use" stays roughly constant?
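If you want to check that directly rather than infer it from the aggregate
numbers, mincore(2) reports which pages of a particular file are resident.
A rough sketch (same made-up path as above):

    #define _DEFAULT_SOURCE  /* for mincore() on glibc */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("bigfile.tar", O_RDONLY);  /* hypothetical path */
        struct stat st;
        if (fd < 0 || fstat(fd, &st) < 0 || st.st_size == 0) {
            perror("open/fstat");
            return 1;
        }
        long pagesz = sysconf(_SC_PAGESIZE);
        size_t npages = (st.st_size + pagesz - 1) / pagesz;

        /* Map the file read-only so mincore() can report which of its
           pages are currently resident in the page cache. */
        void *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (map == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        unsigned char *vec = malloc(npages);
        if (vec == NULL || mincore(map, st.st_size, vec) < 0) {
            perror("mincore");
            return 1;
        }
        size_t resident = 0;
        for (size_t i = 0; i < npages; i++)
            if (vec[i] & 1)      /* low bit set => page is resident */
                resident++;
        printf("%zu of %zu pages resident\n", resident, npages);
        return 0;
    }

Run that against one of the files tar read, hours later, and you will know
for a fact whether its pages are still cached, instead of deducing it.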
Please explain again:
- What you are doing, step by step
- What your exact requirements are
- In what exact way it is misbehaving. Please tell /in detail/ how you
  determine the real behaviour, not your deductions.
[Yes, I'm having my "dense" day today.]
--
Dr. Horst H. von Brand                    User #22616 counter.li.org
Departamento de Informatica               Fono:  +56 32 2654431
Universidad Tecnica Federico Santa Maria         +56 32 2654239
Casilla 110-V, Valparaiso, Chile          Fax:   +56 32 2797513