On Sat, Jul 02, 2016 at 09:00:44AM -0600, Amru OM wrote:
> I have installed the latest OVS 2.5 on each physical server of my testbed.
> The physical servers run CentOS Linux release 7.2.1511 as the host
> operating system. Each host also runs one VM via KVM; the VM in each
> server is connected to OVS through a vNIC (e.g., tap0). Here is the
> command line I used to configure and start the VMs:
>
>     qemu-kvm -m 2048 \
>       -netdev tap,id=hostnet0,script=/etc/ovs-ifup,downscript=/etc/ovs-ifdown,vhost=on \
>       -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:af:87:57,bus=pci.0,addr=0x3 \
>       -drive file=/var/lib/libvirt/images/fedora.img \
>       -cpu host -smp cores=1,threads=2
>
> OVS (on each server) and the host NIC are configured as follows:
>
>     ovs-vsctl add-br br0
>     ovs-vsctl add-port br0 eth0
>     ifconfig eth0 0 up
>     ifconfig br0 10.0.0.x netmask 255.0.0.0 up
>
> The eth0 mentioned above is the physical NIC in the host; its link speed
> is 56 Gbit/s.
>
> After configuring and running OVS and the VM on each host, I tested the
> bandwidth between two VMs located on different hosts with iperf. When I
> test UDP bandwidth with packets carrying a large payload (>= 10 KB), the
> results show a high packet loss rate. For example, when I run iperf as
> follows:
>
>     iperf -u -c 10.0.0.x -l 10K -b 56G
>
> the measured UDP bandwidth is 2.88 Gbit/s, with a packet loss rate of
> ~72%. This poor performance occurs only with UDP, not with TCP. I just
> wonder if I'm missing something here.
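One thing worth keeping in mind with this workload: a 10 KB UDP datagram is far larger than a standard 1500-byte Ethernet MTU, so it is split into IP fragments, and losing any single fragment drops the whole datagram. A back-of-the-envelope sketch of that arithmetic follows; the 1500-byte MTU and the 17% per-fragment drop rate are illustrative assumptions, not numbers from the thread:

```shell
# Fragmentation math for the iperf run quoted above.
# Assumptions (not stated in the thread): 1500-byte MTU, IPv4 with no
# options, and a hypothetical 17% per-fragment drop rate.

payload=$((10 * 1024))                 # iperf -l 10K
total=$((payload + 8))                 # plus the 8-byte UDP header
per_frag=$((1500 - 20))                # MTU minus the 20-byte IPv4 header
frags=$(( (total + per_frag - 1) / per_frag ))   # ceiling division
echo "fragments per datagram: $frags"

# Reassembly fails if any fragment is lost, so with independent
# per-fragment drop probability p: datagram loss = 1 - (1 - p)^frags
loss=$(awk -v n="$frags" -v p=0.17 \
  'BEGIN { printf "%.2f", 1 - (1 - p) ^ n }')
echo "datagram loss at 17% per-fragment loss: $loss"
```

With seven fragments per datagram, even a modest per-fragment drop rate compounds into heavy datagram loss, which is one way a UDP-only symptom like the ~72% above can arise while TCP (which segments to the MSS and never fragments) stays healthy.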
What's the loss rate for this setting when OVS isn't involved?
_______________________________________________
dev mailing list
dev@openvswitch.org
http://openvswitch.org/mailman/listinfo/dev