On Thu, 26 Apr 2007, Nick Piggin wrote:
> Christoph Lameter wrote:
> > On Thu, 26 Apr 2007, Nick Piggin wrote:
> >
> >
> > > But I maintain that the end result is better than the fragmentation
> > > based approach. A lot of people don't actually want a bigger page
> > > cache page size, because they want efficient handling of internal
> > > fragmentation as well, so your radix-tree based approach isn't really
> > > comparable.
> >
> >
> > Me? A radix-tree based approach? That approach is already in the kernel. Do not
> > create a solution where there is no problem. If we do not want to support large
> > block sizes then let's be honest and say so instead of redefining what a block
> > is. The current approach is fine if one is satisfied with scatter/gather and
> > the VM overhead that comes with handling these pages. I fail to see what
> > anything you are proposing would add to that.
>
> I'm not just making this up. Fragmentation. OK?

Yes, you are. If you want to avoid fragmentation by restricting the OS to
4k pages alone, then the radix tree is already sufficient to establish the
order of pages in a mapping. The only remaining problem is gathering an
array of pointers to a sequence of pages by walking the radix tree. I do
not know what else would be needed.
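
For illustration, a minimal sketch of that gather step against the 2.6-era
page cache API. It assumes find_get_pages_contig() and page_cache_release()
as they exist in current kernels; the function name, the 64k block size and
the error handling are hypothetical and only meant to show the idea, not a
real patch:

/*
 * Sketch: collect the 4k pages backing one "large block" of a file by
 * looking them up in the mapping's radix tree, and return them as a
 * scatter/gather-style page array.  Hypothetical example, not a patch.
 */
#include <linux/errno.h>
#include <linux/mm.h>
#include <linux/pagemap.h>

#define PAGES_PER_LARGE_BLOCK	16	/* assumed 64k block / 4k pages */

static int gather_block_pages(struct address_space *mapping, pgoff_t block,
			      struct page **pages)
{
	pgoff_t start = block * PAGES_PER_LARGE_BLOCK;
	unsigned int found, i;

	/* Radix tree lookup of consecutive pages; each comes back referenced. */
	found = find_get_pages_contig(mapping, start,
				      PAGES_PER_LARGE_BLOCK, pages);
	if (found == PAGES_PER_LARGE_BLOCK)
		return 0;

	/* A hole in the page cache: drop the references we did take. */
	for (i = 0; i < found; i++)
		page_cache_release(pages[i]);
	return -ENOENT;
}

The point is only that the lookup side already has what it needs; the pages
stay individual 4k entries in the radix tree.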