Re: [PATCH][RFC] splice support

On Thu, Mar 30 2006, Andrew Morton wrote:
> Jens Axboe <[email protected]> wrote:
> >
> > Actually it isn't so bad, how does this look?
> > 
> > ...
> >
> >  @@ -180,30 +181,48 @@ static int __generic_file_splice_read(st
> >   	i = find_get_pages(mapping, index, nr_pages, pages);
> >   
> >   	/*
> >  -	 * If not all pages were in the page-cache, we'll
> >  -	 * just assume that the rest haven't been read in,
> >  -	 * so we'll get the rest locked and start IO on
> >  -	 * them if we can..
> >  +	 * common case - we found all pages, kick it off
> >   	 */
> >  -	while (i < nr_pages) {
> >  -		struct page *page;
> >  -		int error;
> >  -
> >  -		page = find_or_create_page(mapping, index + i, GFP_USER);
> >  -		if (!page)
> >  -			break;
> >  +	if (i == nr_pages)
> >  +		goto splice_them;
> 
> The return value from find_get_pages() is "how many pages did I find" - it
> doesn't tell us whether they were contiguous.

Oh right, so there's still a hole there. The above logic foolishly
assumes that if i == nr_pages, the pages must be contiguous, but that
need not be the case. Additionally, I need to initialize pages[] from i
onwards, since the loop below may be looking at those entries.
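
Roughly this, as an untested sketch (the memset is only illustrative,
names as in the patch above):

	/*
	 * Clear the tail of pages[] so the slow-path loop below can
	 * tell which slots still need a find_or_create_page().
	 */
	if (i < nr_pages)
		memset(&pages[i], 0, (nr_pages - i) * sizeof(struct page *));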

> How about
> 
> 	if (i && (pages[i - 1]->index == index + i - 1))
> 
> <thinks>
> 
> So if we asked for N pages starting at index=10 and got
> 
> 	[11, 13]
> 
> i == 2
> pages[i-1]->index == 13
> index + i - 1 == 11.
> 
> So I think it's OK.  Yeah, it has to be - any gap at all in the returned
> page array will make pages[i-1]->index too big.

Agreed, that should be sound. I'll adjust the code.
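
Roughly along these lines, as an untested sketch on top of the patch
above:

	/*
	 * Fast path only if we found every page and there is no hole:
	 * any gap makes the last page's index larger than expected.
	 */
	if (i == nr_pages && pages[i - 1]->index == index + i - 1)
		goto splice_them;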

> The one-at-a-time logic looks OK from a quick scan.  Do we have logic in
> there to check that we're not overrunning i_size?  (See the pain
> do_generic_mapping_read() goes through).

do_splice_to() checks that; should I move that check further down, in
case the file gets truncated in the meantime?
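
Something like the do_generic_mapping_read() logic, sketched here
against the names used in the patch (untested, and the partial-last-page
length trim is left out):

	loff_t isize;
	pgoff_t end_index;

	isize = i_size_read(mapping->host);
	if (unlikely(!isize))
		return 0;

	/*
	 * Don't start beyond EOF, and clamp nr_pages so we never
	 * splice pages past the end of the file.
	 */
	end_index = (isize - 1) >> PAGE_CACHE_SHIFT;
	if (unlikely(index > end_index))
		return 0;
	if (index + nr_pages > end_index + 1)
		nr_pages = end_index + 1 - index;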

> argh, readahead.  Really we should be kicking the readahead engine in there
> as well.  That's fairly straightforward - see do_generic_mapping_read().
> 
> Also, the code here _might_ be able to use do_page_cache_readahead() just
> to prepopulate the pages which you know you'll be needing.  There are no
> guarantees that the pages will still be there when you want them of course,
> but it's a decent way of putting a block of pages into a single BIO and
> speeding up the common case.  But if the code is calling
> page_cache_readahead() it won't need to do that.

I'll look into readahead.
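
Probably something like this before the find_get_pages() call, assuming
the struct file (called 'in' here) and its f_ra state are available at
that point (rough, untested sketch):

	/*
	 * Kick the readahead engine for the range we are about to
	 * splice; the pages may still get reclaimed before we look
	 * them up, but the common case gets nice big IOs.
	 */
	page_cache_readahead(mapping, &in->f_ra, in, index, nr_pages);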

-- 
Jens Axboe

