> > Every stackable file system caches the data at its own level and
> > copies it from/to the lower file system's cached pages when necessary.
> > ...
> > this effectively reduces the system's cache memory size by two or more
> > times.
>
> It should not be that bad with a decent cache replacement policy; I
> wonder if observing the problem (that you corrected in the various ways
> you've described), you got some insight as to what exactly was happening.
I agree that an appropriate replacement policy can partially eliminate
the double-caching problem for stackable file systems. In fact, that is
exactly what RAIF does: it forces the data pages of the lower file
systems to be evicted as soon as they have been written and are no
longer needed. This solves the problem for most write-intensive
workloads. Without this optimization the situation is much worse,
because Linux tries to protect the caches of different file systems
from each other. But, as you mentioned, any cache replacement policy is
optimized for some workloads and performs poorly for others. Also,
caching the data at multiple layers not only increases memory
consumption but also adds CPU time overhead because of the data copying
between pages. I believe that the real solution to the problem is the
ability to share data pages between file systems.
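
To make the eviction idea a bit more concrete, below is a minimal
sketch of what such a write path could look like in a stackable file
system. This is not the actual RAIF code: raif_writepage() and
get_lower_inode() are hypothetical names and the error handling is
simplified. The only point is that after the data has been copied down
and written out, invalidate_mapping_pages() (a stock kernel helper that
drops clean, unlocked pages from an address_space) can evict the lower
copy so the data is not cached twice:

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/highmem.h>
#include <linux/writeback.h>

/*
 * Hypothetical sketch, not the real RAIF implementation: copy an upper
 * page to the lower file system, write it out, then evict the lower
 * copy from the page cache so only one copy of the data stays cached.
 */
static int raif_writepage(struct page *page, struct writeback_control *wbc)
{
        struct inode *inode = page->mapping->host;
        /* get_lower_inode() is a hypothetical helper for this sketch */
        struct inode *lower_inode = get_lower_inode(inode);
        struct page *lower_page;
        int err;

        /* Find or create the matching page in the lower mapping (locked). */
        lower_page = grab_cache_page(lower_inode->i_mapping, page->index);
        if (!lower_page) {
                unlock_page(page);
                return -ENOMEM;
        }

        /* Copy the data down and let the lower file system write it out. */
        copy_highpage(lower_page, page);
        set_page_dirty(lower_page);
        err = write_one_page(lower_page, 1);    /* unlocks lower_page */

        /*
         * The lower copy is not needed anymore: drop it from the cache.
         * invalidate_mapping_pages() skips dirty or locked pages, so
         * this is safe even if the write above failed.
         */
        invalidate_mapping_pages(lower_inode->i_mapping, page->index,
                                 page->index);

        unlock_page(page);
        return err;
}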
Nikolai.