On Thu, 26 Apr 2007, Nick Piggin wrote:
> But I maintain that the end result is better than the fragmentation
> based approach. A lot of people don't actually want a bigger page
> cache size, because they want efficient internal fragmentation as
> well, so your radix-tree based approach isn't really comparable.
Me? Radix-tree-based approach? That approach is already in the kernel. Do not
create a solution where there is no problem. If we do not want to
support large blocksizes then let's be honest and say so, instead of
redefining what a block is. The current approach is fine if one is
satisfied with scatter/gather and the VM overhead that comes with handling
these pages. I fail to see what anything you are proposing would add to
that.
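
To put a number on that per-page overhead, here is a minimal user-space
sketch (illustrative only, not kernel code; the toy array stands in for
the page-cache radix tree, and the 64KB blocksize and cache size are
assumed for the example). A 64KB I/O against 4KB pages costs one lookup
per page, where a single 64KB page would cost one:

/*
 * Illustrative user-space sketch, NOT kernel code: models why a
 * 64KB I/O against a 4KB page cache pays one lookup per page.
 * The "page cache" here is a toy array indexed by page number;
 * the real kernel keys its radix tree the same way.
 */
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE   4096UL          /* assumed 4KB base page */
#define BLOCK_SIZE  (64 * 1024UL)   /* assumed 64KB blocksize */
#define NR_PAGES    1024UL          /* toy cache: 4MB of file data */

static char *page_cache[NR_PAGES];  /* stand-in for the radix tree */

/* One lookup per 4KB page: the per-page cost of the current approach. */
static unsigned long read_block_4k_pages(unsigned long offset)
{
    unsigned long first = offset / PAGE_SIZE;
    unsigned long npages = BLOCK_SIZE / PAGE_SIZE;
    unsigned long lookups = 0;

    for (unsigned long i = 0; i < npages; i++) {
        char *page = page_cache[first + i];  /* one lookup per page */
        (void)page;
        lookups++;
    }
    return lookups;
}

int main(void)
{
    for (unsigned long i = 0; i < NR_PAGES; i++)
        page_cache[i] = malloc(PAGE_SIZE);

    printf("64KB block with 4KB pages: %lu lookups\n",
           read_block_4k_pages(0));
    printf("64KB block with a 64KB page: 1 lookup\n");
    return 0;
}

With these assumed constants the first path does 16 lookups for one
block; a larger page size collapses that to one, which is the whole
point of not fixing the page cache to a single size.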
Let's be clear here: a single bigger page cache size is not useful on its
own. A 4k page size is a good size for many files on the system, and
changing it would break the binary format. I just do not want it to be the
only one, because different usage scenarios may require different page
sizes for optimal application performance.