I've been working with tcp_recvmsg(), trying to implement a new feature. While
implementing and benchmarking it, I found something rather odd. I wrote a
benchmark that connects to a server 100,000 times, receiving two 10,000-byte
messages per connection, and used the time program to measure how much system
time the receives consume. The first run, using the normal receive path, gave:
time ./client 10000 10000 100000 1 500000000 recv
real 1m27.301s
user 0m1.568s
sys 0m13.464s
I then disabled the prequeue mechanism by changing line 1347 of
net/ipv4/tcp.c in 2.6.11 from:
if (tp->ucopy.task == user_recv) {
to
if (0 && tp->ucopy.task == user_recv) {
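For anyone not following the prequeue code: as far as I understand it, the
test above is where the reader registers itself (tp->ucopy.task) so that the
softirq side can hand segments over to it. A simplified sketch of that
softirq-side hand-off, paraphrased from my reading of 2.6.11 and not the
actual source, looks roughly like this:

/*
 * Simplified sketch (paraphrased, not the real 2.6.11 code).  In softirq
 * context tcp_v4_rcv() asks tcp_prequeue() whether a reader is waiting;
 * if so, protocol processing is deferred to that reader's context.
 */
static inline int tcp_prequeue_sketch(struct sock *sk, struct sk_buff *skb)
{
	struct tcp_sock *tp = tcp_sk(sk);

	if (tp->ucopy.task) {
		/* A task is blocked in tcp_recvmsg(): park the segment on
		 * the prequeue and wake the reader, which will do the TCP
		 * processing and the copy to userspace in process context. */
		__skb_queue_tail(&tp->ucopy.prequeue, skb);
		tp->ucopy.memory += skb->truesize;
		wake_up_interruptible(sk->sk_sleep);
		return 1;	/* consumed here, skip immediate processing */
	}
	return 0;		/* no reader registered: process in softirq */
}

With the test forced to 0, tp->ucopy.task is never set, so this path is never
taken: every segment is processed in softirq context (or via the socket
backlog) and the data lands on sk->sk_receive_queue, where tcp_recvmsg()
just copies it out.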
The same benchmark then yielded:
time ./client 10000 10000 100000 1 500000000 recv
real 1m21.928s
user 0m1.579s
sys 0m8.330s
Note the decrease in both system and real time. These numbers are fairly
stable across 10 consecutive runs of each configuration. Changing the message
sizes and the number of connections narrows or widens the gap, but the
non-prequeue path usually beats the prequeue path in both system and real time.
It might be that I've just found an instance where the prequeue is slower
than the "slow" path. I'm not quite sure why this would be. Does anyone have
any thoughts on this?
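In case the exact workload matters, the client's inner loop is essentially
the following. This is a reconstruction for illustration: the real client
takes the sizes and counts from the command line and has error handling,
which I've trimmed, and the address/port below are placeholders.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

#define MSG_SIZE    10000          /* two messages of this size per connection */
#define N_CONNS     100000
#define SERVER_ADDR "127.0.0.1"    /* placeholder; the real server is remote */
#define SERVER_PORT 5000           /* placeholder */

/* Loop on recv() until a full message has arrived or the peer closes. */
static void recv_full(int fd, char *buf, size_t len)
{
	size_t got = 0;

	while (got < len) {
		ssize_t n = recv(fd, buf + got, len - got, 0);
		if (n <= 0)
			break;
		got += n;
	}
}

int main(void)
{
	char buf[MSG_SIZE];
	struct sockaddr_in sa;
	int i;

	memset(&sa, 0, sizeof(sa));
	sa.sin_family = AF_INET;
	sa.sin_port = htons(SERVER_PORT);
	inet_pton(AF_INET, SERVER_ADDR, &sa.sin_addr);

	for (i = 0; i < N_CONNS; i++) {
		int fd = socket(AF_INET, SOCK_STREAM, 0);

		connect(fd, (struct sockaddr *)&sa, sizeof(sa));
		recv_full(fd, buf, MSG_SIZE);	/* first 10,000-byte message */
		recv_full(fd, buf, MSG_SIZE);	/* second 10,000-byte message */
		close(fd);
	}
	return 0;
}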
Thanks