Hi,

How can I verify that a 1 Gbit/s network is indeed operating at its optimal speed? I tried this:

[master]$ ping -s 65507 node
65515 bytes from node: icmp_seq=0 ttl=64 time=1.97 ms
65515 bytes from node: icmp_seq=1 ttl=64 time=1.95 ms
65515 bytes from node: icmp_seq=2 ttl=64 time=1.94 ms
65515 bytes from node: icmp_seq=3 ttl=64 time=1.97 ms

(I tried many times, over a long period, to get these typical values.)

From this I conclude that it takes about 1.95 ms for 65515 x 8 bits to travel from master to node and back. Ideally, on a 1 Gbit/s network, the time should be:

65515 x 8 x 2 / (1024^3) = 0.98 ms

(x 2 for the round trip there and back; 1024^3 is the 1G of the network.)

May I now conclude that the real time is about twice the ideal time? Does this indicate a problem with the network? And is this a proper test of a Gbit/s network?

Thanks,
Rob.

PS: I verified my calculation method on two computers here on a 100 Mbit/s network, which gives:

time with ping:        12.4 ms
ideal calculated time: 10 ms

which is an acceptable difference.
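
For reference, here is the arithmetic above as a small Python sketch. It simply follows the 1024-based convention for "1G" and "100M" used in the calculations and ignores any ICMP/IP/Ethernet header overhead, so treat it as a rough estimate rather than an authoritative model:

def ideal_rtt_ms(payload_bytes, link_bits_per_s):
    # Time for the payload to cross the link twice (out and back), in ms.
    bits = payload_bytes * 8 * 2              # x2 for the round trip
    return bits / float(link_bits_per_s) * 1e3

payload = 65515                               # bytes reported by ping -s 65507
for label, speed in [("1 Gbit/s", 1024**3), ("100 Mbit/s", 100 * 1024**2)]:
    print("%s: ideal RTT = %.2f ms" % (label, ideal_rtt_ms(payload, speed)))
# Prints roughly 0.98 ms and 10.00 ms, matching the figures above.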