William Lee Irwin III <[email protected]> writes:
> On Thu, 26 Apr 2007, Nick Piggin wrote:
>>> OK, I would like to see them. And also discussions of things like why
>>> we shouldn't increase PAGE_SIZE instead.
>
> On Thu, Apr 26, 2007 at 12:34:50AM -0700, Christoph Lameter wrote:
>> Because 4k is a good page size that is bound to the binary format? Frankly
>> there is no point in having my text files in large page sizes. However,
>> when I read a DVD then I may want to transfer 64k chunks, or when I use my
>> flash drive I may want to transfer 128k chunks. And yes, if a scientific
>> application needs to do a data dump then it should be able to use very high
>> page sizes (megabytes, gigabytes) to be able to continue its work while
>> the huge dump runs at full I/O speed ...
>
> It's possible to divorce PAGE_SIZE from the binary formats, though I
> found it difficult to keep up with the update treadmill.
On x86_64 the size is actually 64K for executable binaries, if I
recall correctly. It certainly is not PAGE_SIZE, so we have some
flexibility there.
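
For what it's worth, that is easy to check from userspace. Here is a
small standalone sketch (not part of the original mail or the kernel)
that prints the p_align of each PT_LOAD segment, which is the alignment
an x86_64 binary actually asks for:

#include <elf.h>
#include <stdio.h>

/* Standalone sketch: print the PT_LOAD alignment (p_align) of a 64-bit
 * ELF binary passed as argv[1]. That alignment, not PAGE_SIZE, is what
 * bounds the page size the binary can tolerate.
 */
int main(int argc, char **argv)
{
	Elf64_Ehdr eh;
	Elf64_Phdr ph;
	FILE *f;
	int i;

	if (argc < 2 || !(f = fopen(argv[1], "rb")))
		return 1;
	if (fread(&eh, sizeof(eh), 1, f) != 1)
		return 1;
	for (i = 0; i < eh.e_phnum; i++) {
		fseek(f, eh.e_phoff + (long)i * eh.e_phentsize, SEEK_SET);
		if (fread(&ph, sizeof(ph), 1, f) != 1)
			return 1;
		if (ph.p_type == PT_LOAD)
			printf("PT_LOAD %d: p_align = 0x%lx\n",
			       i, (unsigned long)ph.p_align);
	}
	fclose(f);
	return 0;
}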
> Maybe it's
> like hch says and I just needed to find more and better API cleanups.
> I've only not tried to resurrect it because it's too much for me to do
> on my own. I essentially collapsed under the weight of it and my 2.5.x
> codebase ended up worse than Katrina as a disaster, which I don't want
> to repeat and think collaborators or a different project lead from
> myself are needed to avoid that happening again.
But we still have some issues with mmap. Since we could increase
PAGE_SIZE on x86_64, we would not even have to worry about
sub-PAGE_SIZE mmaps. What is being suggested is that if people really
need larger physical pages, they just increase PAGE_SIZE. Then
everything just works.
Thinking about it, changing PAGE_SIZE on x86_64 should be about as
hard as supporting the 3-level vs 2-level page table formats. We say
we have a different page table format that uses a larger PAGE_SIZE,
touch the arch code and the code paths that we expect to change,
and boom, all done.
It might be worth implementing just so people can play with different
PAGE_SIZE values for benchmarking.
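
To make that concrete, here is a minimal sketch (nothing that exists
today) of what such a knob could look like in asm-x86_64/page.h. The
CONFIG_X86_64_PAGE_SIZE_64KB option is hypothetical; it just follows
the same pattern ia64 and powerpc already use for their page size
Kconfig choices:

/* Hypothetical sketch only -- not current x86_64 code. */
#ifdef CONFIG_X86_64_PAGE_SIZE_64KB	/* hypothetical Kconfig option */
#define PAGE_SHIFT	16		/* 64K pages */
#else
#define PAGE_SHIFT	12		/* the usual 4K pages */
#endif
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PAGE_MASK	(~(PAGE_SIZE - 1))

Everything that honestly uses PAGE_SIZE/PAGE_SHIFT would then follow
along for free; the interesting work is flushing out the places that
still assume 4K.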
I don't think the larger physical page size is really the issue here
though.
> It's unclear how much the situation has changed since 32-bit workload
> feasibility issues have been relegated to ignorable or deliberate
> "f**k 32-bit" status. The effect is doubtless to make it easier, though
> to what degree I'm not sure.
Perhaps.
> Anyway, if that's being kicked around as an alternative, it could be
> said that I have some insight into the issues surrounding it.
Partially, yes, but they are also very much suggesting going down the
same path. Currently mmap doesn't work with order >0 pages because
they are not yet addressing these issues at all.
This looks like a more flexible version of the old PAGE_CACHE_SIZE >
PAGE_SIZE code, which makes me seriously question the whole idea.
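
(For reference, not quoted from anyone: the page cache macros in
include/linux/pagemap.h are still just aliases of the base page size,
and the old idea was to let the first definition diverge, roughly as
in the commented sketch below.)

/* include/linux/pagemap.h today: the page cache unit is an alias of
 * the base page size, but the indirection exists so it could differ.
 */
#define PAGE_CACHE_SHIFT	PAGE_SHIFT
#define PAGE_CACHE_SIZE		PAGE_SIZE
#define PAGE_CACHE_MASK		PAGE_MASK

/* Hypothetical PAGE_CACHE_SIZE > PAGE_SIZE configuration (sketch only,
 * never merged in this form): 16K page cache units on a 4K base page.
 *
 * #define PAGE_CACHE_SHIFT	(PAGE_SHIFT + 2)
 * #define PAGE_CACHE_SIZE	(1UL << PAGE_CACHE_SHIFT)
 * #define PAGE_CACHE_MASK	(~(PAGE_CACHE_SIZE - 1))
 */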
Eric