On Thu, Jul 12, 2007 at 01:14:36PM +0200, Andrea Arcangeli wrote:
> On Thu, Jul 12, 2007 at 10:12:56AM +1000, David Chinner wrote:
> > I need really large filesystems that contain both small and large files to
> > work more efficiently on small boxes where we can't throw endless amounts of
> > RAM and CPUs at the problem. Hence things like 64k page size are just not an
> > option because of the wastage that it entails.
>
> I didn't know you were allocating 4k pages for the small files and 64k
> for the large ones in the same fs. That sounds like quite a bit of
> overkill.
We already have rudimentary multi-block-size support via the
per-inode extent size hint, but we still cache based on the
filesystem block size (because we can't increase it beyond the
page size).
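
For anyone who hasn't used the hint: it's set per-inode from
userspace via the fsxattr ioctls (or xfs_io's extsize command).
A rough sketch, with the error handling trimmed and the size and
path purely illustrative:

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <xfs/xfs_fs.h>

/* Sketch: set an extent size hint on an XFS inode. */
static int set_extsize_hint(const char *path, unsigned int bytes)
{
	struct fsxattr fsx;
	int fd, ret;

	fd = open(path, O_RDONLY);
	if (fd < 0)
		return -1;
	ret = ioctl(fd, XFS_IOC_FSGETXATTR, &fsx);
	if (!ret) {
		fsx.fsx_xflags |= XFS_XFLAG_EXTSIZE;	/* honour the hint */
		fsx.fsx_extsize = bytes;		/* hint in bytes */
		ret = ioctl(fd, XFS_IOC_FSSETXATTR, &fsx);
	}
	close(fd);
	return ret;
}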
All I want is to be able to change the index granularity of the page
cache with minimal impact; everything in XFS then falls out almost
directly in a pretty optimal manner.
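
To be concrete about what I mean by index granularity: the page
cache currently indexes every mapping in fixed PAGE_CACHE_SHIFT
units, i.e. pos >> PAGE_CACHE_SHIFT. Something like the following
is the shape of what I'm after (the per-mapping order field below
is hypothetical, not existing code):

/* Hypothetical: index the page cache in per-mapping units. */
static inline pgoff_t page_cache_index(struct address_space *mapping,
				       loff_t pos)
{
	/* today this is a fixed pos >> PAGE_CACHE_SHIFT */
	return pos >> (PAGE_CACHE_SHIFT + mapping->order);
}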
> I still think you should run those systems with a 64k PAGE_SIZE even
> if it wastes more memory on the small files.
That's crap. Just because a machine has lots of memory does not
make it OK to waste it.
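
To put rough numbers on it: a million cached 1k files occupy about
4GB of page cache at a 4k page size, but about 64GB at 64k. That's
around 60GB of pure wastage to cache the same 1GB of data.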
Cheers,
Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group