On Fri, May 04, 2007 at 07:33:54AM -0600, Eric W. Biederman wrote:
> >
> > So while the jury is out about how many other filesystems might use
> > it, I suspect it's more than you might think. At the very least,
> > there may be some IA64 users who might be trying to transition their
> > way to x86_64, and have existing filesystems with 8k or 16k
> > block sizes. :-)
>
> How much of a problem would it be if those blocks were not necessarily
> contiguous in RAM, but placed in normal 4K pages in the page cache?
If you need to treat the block as a contiguous range, then you need to
vmap() the discontiguous pages. That has substantial overhead if you
have to do it regularly.
We do this in xfs_buf.c for > page size blocks - the overhead that
caused when operating on inode clusters resulted in us doing some
pointer fiddling and directly addressing the contents of each page
to avoid the vmap overhead. See xfs_buf_offset() and friends....
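To illustrate the idea (this is a hypothetical userspace sketch, not the
real xfs_buf_offset() code or signature): instead of vmap()ing the pages
into one contiguous virtual range, you keep an array of per-page base
pointers and translate a buffer offset into (page index, offset within
page) with simple arithmetic:

```c
#include <assert.h>

#define PAGE_SIZE 4096u
#define PAGE_MASK (PAGE_SIZE - 1u)

/*
 * Hypothetical sketch of per-page addressing: given an array of page
 * base pointers backing a larger-than-page buffer, return the address
 * of a byte offset without mapping the pages contiguously. The name
 * and signature are illustrative only.
 */
static char *buf_offset(char **pages, unsigned int offset)
{
    /* Pick the backing page, then index into it. */
    return pages[offset / PAGE_SIZE] + (offset & PAGE_MASK);
}
```

The trade-off is that callers can only dereference within one page at a
time; anything that needs a single flat pointer across the whole block
still has to vmap().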
> I expect meta data operations would have to be modified but that otherwise
> you would not care.
I think you might need to modify the copy-in and copy-out operations
substantially (e.g. prepare_/commit_write()) as they assume a buffer doesn't
span multiple pages.....
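Roughly, the copy paths would have to loop over the backing pages
rather than assume one contiguous destination. A hypothetical userspace
sketch of that shape (not the real prepare_write()/commit_write() code):

```c
#include <string.h>

#define PAGE_SIZE 4096u

/*
 * Hypothetical page-spanning copy-in: walk the backing pages and copy
 * in per-page chunks, clamping each chunk at the page boundary.
 * Illustrative only - the real paths also handle partial-page dirty
 * state, locking, etc.
 */
static void copy_in(char **pages, unsigned int offset,
                    const char *src, unsigned int len)
{
    while (len) {
        unsigned int pg = offset / PAGE_SIZE;
        unsigned int off = offset % PAGE_SIZE;
        unsigned int chunk = PAGE_SIZE - off;   /* room left in page */

        if (chunk > len)
            chunk = len;
        memcpy(pages[pg] + off, src, chunk);
        src += chunk;
        offset += chunk;
        len -= chunk;
    }
}
```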
Cheers,
Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [email protected]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/