Rick and others,

I have installed netperf from the top of trunk and built it with the --enable-demo option, as suggested previously.

This is the output for a test performed on two VMs located at the same hypervisor:


# netperf -t TCP_STREAM -H 10.0.0.6 -D 1.0 -l 30
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.0.6 () port 0 AF_INET : demo
Interim result: 2241.61 10^6bits/s over 1.016 seconds ending at 1418767226.713
Interim result: 1930.81 10^6bits/s over 1.161 seconds ending at 1418767227.874
Interim result: 2382.14 10^6bits/s over 1.001 seconds ending at 1418767228.875
Interim result: 2349.73 10^6bits/s over 1.014 seconds ending at 1418767229.889
Interim result: 2434.23 10^6bits/s over 1.002 seconds ending at 1418767230.891
Interim result: 2346.76 10^6bits/s over 1.037 seconds ending at 1418767231.928
Interim result: 2255.50 10^6bits/s over 1.040 seconds ending at 1418767232.969
Interim result: 2315.00 10^6bits/s over 1.038 seconds ending at 1418767234.006
Interim result: 2263.04 10^6bits/s over 1.023 seconds ending at 1418767235.029
Interim result: 2126.80 10^6bits/s over 1.064 seconds ending at 1418767236.093
Interim result: 2203.87 10^6bits/s over 1.001 seconds ending at 1418767237.095
Interim result: 2321.52 10^6bits/s over 1.000 seconds ending at 1418767238.095
Interim result: 1846.11 10^6bits/s over 1.258 seconds ending at 1418767239.353
Interim result: 1948.17 10^6bits/s over 1.000 seconds ending at 1418767240.353
Interim result: 2429.19 10^6bits/s over 1.000 seconds ending at 1418767241.353
Interim result: 2503.26 10^6bits/s over 1.002 seconds ending at 1418767242.355
Interim result: 2503.27 10^6bits/s over 1.000 seconds ending at 1418767243.355
Interim result: 2099.87 10^6bits/s over 1.192 seconds ending at 1418767244.547
Interim result: 1591.18 10^6bits/s over 1.320 seconds ending at 1418767245.867
Interim result: 2410.69 10^6bits/s over 1.002 seconds ending at 1418767246.868
Interim result: 2478.45 10^6bits/s over 1.000 seconds ending at 1418767247.869
Interim result: 2468.05 10^6bits/s over 1.004 seconds ending at 1418767248.873
Interim result: 2058.11 10^6bits/s over 1.199 seconds ending at 1418767250.072
Interim result: 2318.82 10^6bits/s over 1.002 seconds ending at 1418767251.074
Interim result: 2452.13 10^6bits/s over 1.001 seconds ending at 1418767252.075
Interim result: 2389.36 10^6bits/s over 1.026 seconds ending at 1418767253.102
Interim result: 2474.32 10^6bits/s over 1.000 seconds ending at 1418767254.102
Interim result: 2298.98 10^6bits/s over 1.076 seconds ending at 1418767255.178
Interim result: 2149.12 10^6bits/s over 0.519 seconds ending at 1418767255.697
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    30.00    2248.81

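In case it is useful to anyone following along: the interim lines that -D emits are easy to post-process. Below is a small sketch of my own (not part of netperf) that pulls the interim throughput values out of the demo output so you can look at min/mean/max rather than eyeballing the stream:

```python
# Hypothetical helper (my own, not shipped with netperf): summarize the
# "Interim result" lines printed when netperf is built with --enable-demo
# and run with -D.
import re
import statistics

def interim_throughputs(text):
    """Return the interim throughput values (in 10^6 bits/s) found in text."""
    return [float(v) for v in re.findall(
        r"Interim result:\s+([\d.]+)\s+10\^6bits/s", text)]

# Two interim results taken verbatim from the first test above.
sample = ("Interim result: 2241.61 10^6bits/s over 1.016 seconds "
          "ending at 1418767226.713 "
          "Interim result: 1930.81 10^6bits/s over 1.161 seconds "
          "ending at 1418767227.874")

rates = interim_throughputs(sample)
print("min=%.2f max=%.2f mean=%.2f" %
      (min(rates), max(rates), statistics.mean(rates)))
```

Feeding it the full output of either run gives a quick picture of how much the per-second throughput varies.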

The test between VMs on different nodes produces the following:

# netperf -t TCP_STREAM -H 10.0.0.7 -D 1.0 -l 30
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.0.7 () port 0 AF_INET : demo
Interim result: 701.81 10^6bits/s over 1.001 seconds ending at 1418767743.121
Interim result: 741.16 10^6bits/s over 1.001 seconds ending at 1418767744.122
Interim result: 677.68 10^6bits/s over 1.094 seconds ending at 1418767745.216
Interim result: 687.66 10^6bits/s over 1.004 seconds ending at 1418767746.220
Interim result: 680.56 10^6bits/s over 1.011 seconds ending at 1418767747.231
Interim result: 676.75 10^6bits/s over 1.006 seconds ending at 1418767748.236
Interim result: 726.85 10^6bits/s over 1.007 seconds ending at 1418767749.243
Interim result: 654.37 10^6bits/s over 1.111 seconds ending at 1418767750.354
Interim result: 683.19 10^6bits/s over 1.005 seconds ending at 1418767751.359
Interim result: 712.31 10^6bits/s over 1.003 seconds ending at 1418767752.362
Interim result: 687.51 10^6bits/s over 1.036 seconds ending at 1418767753.398
Interim result: 719.50 10^6bits/s over 1.004 seconds ending at 1418767754.402
Interim result: 699.09 10^6bits/s over 1.029 seconds ending at 1418767755.432
Interim result: 699.32 10^6bits/s over 1.004 seconds ending at 1418767756.436
Interim result: 709.26 10^6bits/s over 1.005 seconds ending at 1418767757.441
Interim result: 720.16 10^6bits/s over 1.003 seconds ending at 1418767758.444
Interim result: 719.09 10^6bits/s over 1.003 seconds ending at 1418767759.447
Interim result: 731.69 10^6bits/s over 1.008 seconds ending at 1418767760.456
Interim result: 731.18 10^6bits/s over 1.001 seconds ending at 1418767761.456
Interim result: 726.45 10^6bits/s over 1.007 seconds ending at 1418767762.463
Interim result: 721.67 10^6bits/s over 1.007 seconds ending at 1418767763.469
Interim result: 673.96 10^6bits/s over 1.071 seconds ending at 1418767764.540
Interim result: 671.50 10^6bits/s over 1.006 seconds ending at 1418767765.546
Interim result: 703.04 10^6bits/s over 1.001 seconds ending at 1418767766.547
Interim result: 707.66 10^6bits/s over 1.007 seconds ending at 1418767767.554
Interim result: 686.38 10^6bits/s over 1.031 seconds ending at 1418767768.585
Interim result: 697.72 10^6bits/s over 1.005 seconds ending at 1418767769.590
Interim result: 709.24 10^6bits/s over 1.000 seconds ending at 1418767770.590
Interim result: 720.89 10^6bits/s over 1.000 seconds ending at 1418767771.590
Interim result: 727.09 10^6bits/s over 0.530 seconds ending at 1418767772.120
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    30.02     702.39


Both results show quite high bandwidth, but I can't reach those speeds with actual file transfers.
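One quick way I'd suggest (my own sketch, not from this thread) to separate the network from the rest of the file-transfer path is to time a transfer of zeros over ssh, which removes the source disk from the picture; ssh's encryption can still cap the rate well below what TCP_STREAM shows:

```shell
# Rough comparison point (hypothetical command, adjust host/size as needed):
# stream 1 GiB of zeros to the remote VM, discarding it there, and compare
# the reported rate against netperf's TCP_STREAM result. Disk I/O and ssh
# cipher overhead are the usual suspects when this comes in lower.
dd if=/dev/zero bs=1M count=1024 | ssh 10.0.0.6 'cat > /dev/null'
```

If this matches netperf but scp of a real file does not, the bottleneck is likely disk rather than the network.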


Regards,


George



Read the names carefully again :)
I was suggesting what I used to do in the past when I had this problem on
a fresh OpenStack install.

Indeed, I got the names crossed.  Anyway, running netperf is
worthwhile even in a Neutron environment. Run it early. Run it often
:)

rick


_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


