Andi Kleen wrote:
> TSO is beneficial for the software again. The Linux code currently
> takes several locks and does quite a few function calls for each
> packet, and using larger packets lowers this overhead. At least with
> 10GbE, saving CPU cycles is still quite important.
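To put rough, purely illustrative numbers on that (not taken from the measurements below): with TSO the per-packet locking and function-call path is traversed once for a 64 KB super-packet, where the same amount of data at a typical 1448-byte MSS takes ~45 trips down the stack:

:~# echo $((65536 / 1448))
45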
Here are some quick netperf TCP_RR tests between a pair of dual-core rx6600s running
2.6.23-rc3. The NICs are dual-port e1000s connected back-to-back, with the
interrupt throttle disabled. I like using TCP_RR to tickle path-length
questions because it rarely runs into bandwidth limitations regardless of the
link type.
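For anyone wanting to reproduce this: one common way to disable the throttle on e1000 is the InterruptThrottleRate module parameter (0 = off, one value per port), for example:

:~# modprobe e1000 InterruptThrottleRate=0,0

(ethtool -C eth2 rx-usecs 0 may also work, depending on the driver version.)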
First, with TSO enabled on both sides, then with it disabled, and with netperf/netserver
bound to the same CPU that takes interrupts, which is the "best" place to be for a
TCP_RR test (although not always for a TCP_STREAM test...):
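Finding the CPU that services the NIC's interrupts, and optionally pinning the IRQ there, is the usual /proc exercise (the IRQ number below is only an example, not from these systems):

:~# grep eth2 /proc/interrupts
:~# echo 2 > /proc/irq/50/smp_affinity

With the interrupts landing on CPU 1, the -T 1 option in the runs below binds netperf (and netserver on the remote side) to that same CPU.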
:~# netperf -T 1 -t TCP_RR -H 192.168.2.105 -I 99,1 -c -C
TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.2.105
(192.168.2.105) port 0 AF_INET : +/-0.5% @ 99% conf. : first burst 0 : cpu bind
!!! WARNING
!!! Desired confidence was not achieved within the specified iterations.
!!! This implies that there was variability in the test environment that
!!! must be investigated before going further.
!!! Confidence intervals: Throughput : 0.3%
!!! Local CPU util : 39.3%
!!! Remote CPU util : 40.6%
Local /Remote
Socket Size   Request  Resp.   Elapsed  Trans.    CPU    CPU     S.dem    S.dem
Send   Recv   Size     Size    Time     Rate      local  remote  local    remote
bytes  bytes  bytes    bytes   secs.    per sec   % S    % S     us/Tr    us/Tr

16384  87380  1        1       10.01    18611.32  20.96  22.35   22.522   24.017
16384  87380
:~# ethtool -K eth2 tso off
e1000: eth2: e1000_set_tso: TSO is Disabled
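(Lowercase -k, as opposed to -K, queries the current offload settings rather than changing them, which is a quick way to double-check the state before re-running:)

:~# ethtool -k eth2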
:~# netperf -T 1 -t TCP_RR -H 192.168.2.105 -I 99,1 -c -C
TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.2.105
(192.168.2.105) port 0 AF_INET : +/-0.5% @ 99% conf. : first burst 0 : cpu bind
!!! WARNING
!!! Desired confidence was not achieved within the specified iterations.
!!! This implies that there was variability in the test environment that
!!! must be investigated before going further.
!!! Confidence intervals: Throughput : 0.4%
!!! Local CPU util : 21.0%
!!! Remote CPU util : 25.2%
Local /Remote
Socket Size   Request  Resp.   Elapsed  Trans.    CPU    CPU     S.dem    S.dem
Send   Recv   Size     Size    Time     Rate      local  remote  local    remote
bytes  bytes  bytes    bytes   secs.    per sec   % S    % S     us/Tr    us/Tr

16384  87380  1        1       10.01    19812.51  17.81  17.19   17.983   17.358
16384  87380
While the confidence intervals for CPU utilization weren't hit, I suspect the
differences in service demand are still real. On throughput we are talking
about +/- 0.2%; for CPU utilization we are talking about roughly +/- 20% (percent, not
percentage points) in the first test and +/- 12.5% in the second.
So, in broad handwaving terms, TSO increased the per-transaction service demand
by something along the lines of (23.27 - 17.67)/17.67, or ~30% (averaging the
local and remote service demands from each run), while the transaction rate decreased by ~6%.
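For completeness, the arithmetic behind those two figures, recomputed straight from the result tables above (the first two values are the local/remote service-demand averages with and without TSO; the last two are the relative changes in service demand and transaction rate):

:~# echo 'scale=3; (22.522 + 24.017)/2' | bc
23.269
:~# echo 'scale=3; (17.983 + 17.358)/2' | bc
17.670
:~# echo 'scale=3; (23.27 - 17.67)/17.67' | bc
.316
:~# echo 'scale=3; (19812.51 - 18611.32)/19812.51' | bc
.060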
rick jones
bitrate blindness is a constant concern