Re: [00/17] Large Blocksize Support V3

On Thu, 26 Apr 2007, Nick Piggin wrote:

> > > I am working now and again on some code to do this, it is a big job but
> > > I think it is the right way to do it. But it would take a long time to
> > > get stable and supported by filesystems...
> > Ummm... We already have a radix tree for this???? What more is needed? You
> > just need to go through all filesystems and make them use extents.
> 
> I'm talking about block size > page size in the buffer layer.

I fail to see the point of adding another layer when you already have a 
mapping through the radix tree. You just need to change the way the 
filesystem looks up pages.

What are the exact requirements you are trying to address?

You fundamentally cannot address the large blocksize requirements with 4k 
pages since you simply must have larger contiguous memory.

Large blocksize means that the device can do I/O on blocks of that size.

What can be done is to create some kind of fake linearity. At one level 
the radix tree and the address space already provide that. The radix tree 
allows you to find the next page etc. Another approach would be to create 
a virtual address space that fakes linearity even for the processor.

Then there are ways to avoid the issue again with an IOMMU.

However, you still have not addressed the underlying problem of the device 
not being able to do I/O to a larger block of memory.


