I have been able to profile ovs-dpdk performance on both multi-node and 
single-node servers.  However, when I scale up the number of VMs on the 
compute nodes, I see a corresponding drop in per-VM performance.

I launched ovs-dpdk with 1 GB of memory on the first NUMA node (the 
instructions seem to indicate that memory should only be allocated to the 
first NUMA node, though I am not sure why).  When I tried increasing the 
number of RX queues (ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=2), 
performance got significantly worse.  What is the proper way to scale up 
ovs-dpdk performance as the number of VMs grows, when each VM is using 
significant network bandwidth?
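For reference, this is roughly how I set things up.  The core masks, memory 
values, and socket path below are specific to my host (and to the 
command-line style of passing DPDK arguments; newer OVS versions use 
other_config keys instead), so treat them as placeholders:

```shell
# Launch ovs-vswitchd with DPDK enabled, giving 1 GB of hugepage memory
# to NUMA node 0 only (the second --socket-mem value is NUMA node 1):
ovs-vswitchd --dpdk -c 0x1 -n 4 --socket-mem 1024,0 \
    -- unix:/usr/local/var/run/openvswitch/db.sock --pidfile --detach

# Pin the PMD threads to specific cores (mask is host-specific):
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6

# The RX-queue change that made performance significantly worse for me:
ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=2
```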

On a related note, with the default ovs-dpdk configuration, throughput seemed 
to be capped at around 6 Gb/s (with 1500-byte packets).  This looked like an 
artificial limit: the DPDK stats showed the majority of time being spent in 
the idle/poll loop rather than in the packet-processing loop.  I thought the 
VMs themselves might be the bottleneck, so I split the traffic across two 
pairs of instances, and again the aggregate bandwidth was capped at 6 Gb/s, 
with each pair of VMs doing 3 Gb/s.  Is this an artificial limit, or is it 
something I should be able to address with performance tuning?
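In case it matters, this is roughly how I gathered the stats showing most 
cycles going to the idle/poll loop (the measurement interval is arbitrary; I 
am assuming pmd-stats-show is the right command for this):

```shell
# Reset the per-PMD counters, let traffic run for a while, then dump the
# cycle breakdown (idle vs. processing) for each PMD thread:
ovs-appctl dpif-netdev/pmd-stats-clear
sleep 30
ovs-appctl dpif-netdev/pmd-stats-show
```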

Thanks for any recommendations!
Gabriel Black


_______________________________________________
discuss mailing list
discuss@openvswitch.org
http://openvswitch.org/mailman/listinfo/discuss
