Christoph Lameter wrote:
> On Thu, 26 Apr 2007, Nick Piggin wrote:
>
>> No I don't want to add another fs layer.
>
> Well maybe you could explain what you want. Preferably without redefining
> the established terms?
Support for larger buffers than page cache pages.
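As a rough illustration of that idea (an editorial user-space sketch only; the
struct and function names below are hypothetical and are not an interface
proposed in this thread), a logical block larger than the page size can be
backed by several independently allocated page-sized chunks instead of one
physically contiguous higher-order allocation:

/* Hypothetical sketch: a buffer larger than PAGE_SIZE built from order-0 pieces. */
#include <stdlib.h>

#define PAGE_SIZE 4096u

struct big_buffer {
	size_t block_size;	/* e.g. 65536 for a 64k block */
	size_t nr_pages;	/* block_size / PAGE_SIZE */
	void **pages;		/* one separate PAGE_SIZE-sized allocation each */
};

static struct big_buffer *big_buffer_alloc(size_t block_size)
{
	struct big_buffer *b = calloc(1, sizeof(*b));
	size_t i;

	if (!b)
		return NULL;
	b->block_size = block_size;
	b->nr_pages = block_size / PAGE_SIZE;
	b->pages = calloc(b->nr_pages, sizeof(void *));
	if (!b->pages) {
		free(b);
		return NULL;
	}
	for (i = 0; i < b->nr_pages; i++) {
		/* Each chunk is an ordinary page-sized allocation, so the
		 * block never needs a contiguous 64k/128k region. */
		b->pages[i] = malloc(PAGE_SIZE);
		if (!b->pages[i]) {
			while (i--)
				free(b->pages[i]);
			free(b->pages);
			free(b);
			return NULL;
		}
	}
	return b;
}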
>> I still don't think anti-fragmentation or defragmentation are a good
>> approach when you consider the alternatives.
>
> I have not heard of any alternatives in this discussion here. Just the old
> line of let's tune the VM here and there and hope it lasts a while longer.
I didn't realise that one was even in the running. How can you "tune" the
VM to handle bigger block sizes?
>> OK, I would like to see them. And also discussions of things like why
>> we shouldn't increase PAGE_SIZE instead.
>
> Because 4k is a good page size that is bound to the binary format? Frankly,
> there is no point in having my text files in large page sizes. However,
> when I read a DVD I may want to transfer 64k chunks, or when I use my
> flash drive I may want to transfer 128k chunks. And yes, if a scientific
> application needs to do a data dump, then it should be able to use very
> large page sizes (megabytes, gigabytes) so that it can continue its work
> while the huge dump runs at full I/O speed...
So block size > page cache size... also, you should obviously be using
hardware that is tuned to work well with 4K pages, because surely there
is lots of that around.
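To put rough numbers on the chunk sizes mentioned above (editorial arithmetic
only, assuming 4k pages and that each such block would otherwise have to be a
single physically contiguous allocation), the required allocation order grows
quickly with block size, which is where the fragmentation question comes from:

/*
 * Editorial arithmetic sketch: if each of the block sizes mentioned above
 * had to be a single physically contiguous allocation on a machine with 4k
 * pages, this computes the allocation order (power-of-two number of pages)
 * that would be required.
 */
#include <stdio.h>

#define PAGE_SIZE 4096ULL

static unsigned int alloc_order(unsigned long long block_size)
{
	unsigned long long pages = block_size / PAGE_SIZE;
	unsigned int order = 0;

	while ((1ULL << order) < pages)
		order++;
	return order;
}

int main(void)
{
	/* 64k (DVD-style chunks), 128k (flash chunks), 2MB (large dumps) */
	unsigned long long sizes[] = { 64ULL << 10, 128ULL << 10, 2ULL << 20 };
	int i;

	for (i = 0; i < 3; i++)
		printf("%llu KiB -> %llu pages -> order %u\n",
		       sizes[i] >> 10, sizes[i] / PAGE_SIZE,
		       alloc_order(sizes[i]));
	return 0;
}

For the sizes mentioned, this gives order 4 for 64k, order 5 for 128k, and
order 9 for 2MB, i.e. 16, 32, and 512 contiguous 4k pages respectively.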
--
SUSE Labs, Novell Inc.