On Fri, Dec 22 2006, saeed bishara wrote:
> On 12/22/06, Jens Axboe <[email protected]> wrote:
> >On Fri, Dec 22 2006, saeed bishara wrote:
> >> On 12/22/06, Jens Axboe <[email protected]> wrote:
> >> >On Thu, Dec 21 2006, saeed bishara wrote:
> >> >> Hi,
> >> >> I'm trying to use the splice/vmsplice system calls to improve the
> >> >> Samba server write throughput, but before touching smbd, I started
> >> >> by modifying the ttcp tool since it is simple and has the same flow. I'm
> >> >> expecting to avoid the "copy_from_user" path when using those
> >> >> syscalls.
> >> >> So far, I couldn't make any improvement; actually, the throughput got
> >> >> worse. The new receive flow looks like this (code also attached):
> >> >> 1. read a TCP packet (64 pages) into a page-aligned buffer.
> >> >> 2. vmsplice the buffer into a pipe with SPLICE_F_MOVE.
> >> >> 3. splice the pipe to the file, also with SPLICE_F_MOVE.
> >> >>
> >> >> strace shows that the splice takes a lot of time. Also, when
> >> >> profiling the kernel, I found that memcpy() is called too often!
> >> >
> >> >(didn't see this until now, [email protected] doesn't work anymore)
> >> >
> >> >I'm assuming that you mean you vmsplice with SPLICE_F_GIFT, to hand
> >> >ownership of the pages to the kernel (in which case SPLICE_F_MOVE will
> >> >work, otherwise you get a copy)? If not, that'll surely cost you a data
> >> >copy.
> >> I'll try vmsplice with SPLICE_F_GIFT and splice with MOVE. BTW,
> >> I noticed that the splice system call takes the bulk of the time;
> >> does that mean anything?
> >
> >Hard to say without seeing some numbers :-)
> I'm out of the office; I'll send it later. BTW, my test bed (the
> receiver side) is an ARM9. Does it matter?
vmsplice is basically VM intensive, so it could matter.
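
For reference, a minimal sketch of the flow described above (read into a
page-aligned buffer, vmsplice it into a pipe with SPLICE_F_GIFT, then splice
the pipe to the file with SPLICE_F_MOVE). The descriptors, the 64KiB buffer
size and the error handling are assumptions for illustration, not the
attached ttcp code:

/* Sketch of the read + vmsplice(SPLICE_F_GIFT) + splice(SPLICE_F_MOVE)
 * receive path.  sock_fd/file_fd setup is assumed; a final read that is
 * not a whole number of pages would need a plain write() fallback, and
 * spent mappings are not reclaimed here, both omitted for brevity. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/uio.h>
#include <unistd.h>

#define BUF_SIZE (64 * 1024)    /* 16 pages: fits the default pipe size */

static int gift_socket_to_file(int sock_fd, int file_fd)
{
        int pfd[2];

        if (pipe(pfd) < 0)
                return -1;

        for (;;) {
                struct iovec iov;
                ssize_t rcvd, left;
                void *buf;

                /* fresh pages each round: gifted pages must not be reused */
                buf = mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                if (buf == MAP_FAILED)
                        return -1;

                rcvd = read(sock_fd, buf, BUF_SIZE);    /* the one copy */
                if (rcvd == 0)
                        return 0;
                if (rcvd < 0)
                        return -1;

                iov.iov_base = buf;
                iov.iov_len = rcvd;

                /* gift the pages so SPLICE_F_MOVE can actually move them */
                if (vmsplice(pfd[1], &iov, 1, SPLICE_F_GIFT) != rcvd)
                        return -1;

                for (left = rcvd; left > 0; ) {
                        ssize_t n = splice(pfd[0], NULL, file_fd, NULL,
                                           left, SPLICE_F_MOVE);
                        if (n <= 0)
                                return -1;
                        left -= n;
                }
        }
}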
> >> >This sounds remarkably like a recent thread on lkml, you may want to
> >> >read up on that. Basically using splice for network receive is a bit of
> >> >a work-around now, since you do need the one copy and then vmsplice that
> >> >into a pipe. To realize the full potential of splice, we first need
> >> >socket receive support so you can skip that step (splice from socket to
> >> >pipe, splice pipe to file).
> >> Ashwini Kulkarni posted patches that implement that, see
> >> http://lkml.org/lkml/2006/9/20/272 . Is that right?
> >> >
> >> >There was no test code attached, btw.
> >> Sorry, here it is.
> >> Can you please add a sample application to your test tools (splice, fio,
> >> ...) that demonstrates my flow: socket to file using read & vmsplice?
> >
> >I didn't add such an example, since I had hoped that we would have
> >splice from socket support sooner rather than later. But I can do so, of
> >course.
> Do you have any preliminary patches? I can start playing with them.
I don't; Intel posted a set of patches a few months ago, though. I didn't
have time to look at them at the time, but you should be able to find
them in the archives.
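
For comparison, once splice from a socket receive queue is supported (which
is what the patches referenced above add), the intended zero-copy path would
look roughly like the sketch below. This is an illustration under that
assumption, not code from those patches:

/* Sketch of the intended receive path: splice from the socket into a
 * pipe, then splice the pipe to the file.  No copy to user space.
 * Assumes splice() accepts the socket as fd_in for receive. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

#define CHUNK (64 * 1024)

static int splice_socket_to_file(int sock_fd, int file_fd)
{
        int pfd[2];

        if (pipe(pfd) < 0)
                return -1;

        for (;;) {
                /* socket -> pipe */
                ssize_t in = splice(sock_fd, NULL, pfd[1], NULL, CHUNK,
                                    SPLICE_F_MOVE | SPLICE_F_MORE);
                if (in < 0)
                        return -1;
                if (in == 0)
                        return 0;       /* peer closed the connection */

                /* pipe -> file */
                while (in > 0) {
                        ssize_t out = splice(pfd[0], NULL, file_fd, NULL,
                                             in, SPLICE_F_MOVE);
                        if (out <= 0)
                                return -1;
                        in -= out;
                }
        }
}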
> >I'll try your test. One thing that sticks out initially is that you
> >should be using full pages; the splice pipe will not merge page
> >segments. So don't use a buflen less than the page size.
>
> Yes, actually I run ttcp with -l65536 (64KB), and the buffer is
> always page aligned. Also, splice/vmsplice with MOVE or GIFT will
> fail if the buffer is not whole pages. Am I right?
Yes.
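
To be safe about that, the buffer handed to vmsplice() with SPLICE_F_GIFT
should be page aligned and a whole number of pages long. A small
illustrative helper (an assumption about how to allocate it, not taken from
the test code):

/* Allocate a buffer suitable for vmsplice() with SPLICE_F_GIFT:
 * page aligned and rounded up to whole pages. */
#include <stdlib.h>
#include <unistd.h>

static void *alloc_gift_buffer(size_t len)
{
        long page = sysconf(_SC_PAGESIZE);
        void *buf;

        if (len % page)                 /* round up to whole pages */
                len += page - (len % page);
        if (posix_memalign(&buf, page, len))
                return NULL;
        return buf;
}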
I added a simple splice-fromnet example in the splice git repo; see if
you can repeat your results with that. Doing:
# ./splice-fromnet -g 2001 | ./splice-out -m /dev/null
and
# cat /dev/zero | netcat localhost 2001
gets me about 490MiB/sec, while a recv/write loop is around 413MiB/sec.
Not migrating pages gets me around 422MiB/sec.
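
The recv/write comparison above is just the classic two-copy path, something
along these lines (a sketch, not the exact test code):

/* Plain recv/write loop: one copy to user space on receive, another on
 * the write to the file. */
#include <sys/socket.h>
#include <unistd.h>

static int recv_write_loop(int sock_fd, int file_fd, void *buf, size_t len)
{
        for (;;) {
                ssize_t rcvd = recv(sock_fd, buf, len, 0);
                if (rcvd < 0)
                        return -1;
                if (rcvd == 0)
                        return 0;
                if (write(file_fd, buf, rcvd) != rcvd)
                        return -1;
        }
}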
--
Jens Axboe