> -----Original Message-----
> From: Traynor, Kevin [mailto:kevin.tray...@intel.com]
> 
> > -----Original Message-----
> > From: discuss [mailto:discuss-boun...@openvswitch.org] On Behalf Of
> > Gabe Black


> > I launched ovs-dpdk with a single GB of memory on the first NUMA node
> > (the instructions seem to indicate memory should only be allocated to
> > the first NUMA node... not sure why).  When I tried increasing the
> > number of RX queues (ovs-vsctl set Open_vSwitch .
> > other_config:n-dpdk-rxqs=2), performance got significantly worse.  What
> > is the proper way to scale up the performance of ovs-dpdk when scaling
> > up the number of VMs that each use significant network bandwidth?
> 
> Probably the best way to scale is to add more cores with pmd-cpu-mask.
> You won't get much by adding multiple rxqs unless you do this, as the same
> pmd (core) will be polling all the rxqs.
> 

Thanks for the recommendation.  I probably should have mentioned that in 
addition to having more rxqs, I also set the CPU mask so that OVS only ran on 
threads (that were not hyperthreaded siblings) on the same socket/NUMA node.  
That still resulted in worse performance.  However, that could have been 
because the host is a dual quad-core system, and keeping everything on a 
single NUMA node exhausted resources that would normally have been available 
to the VMs.
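
In case it helps, the commands I ran were along these lines (the mask value is 
just an example; the real one depends on the core layout of the host):

  # two rx queues per DPDK interface
  ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=2
  # run the PMD threads on cores 1-3 of socket 0 (0x0E = cores 1,2,3);
  # the actual mask will differ on other topologies
  ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x0E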

Presumably, though, one would have VMs using memory and compute resources from 
multiple NUMA nodes.  With that in mind, should I be launching ovs-vswitchd 
with (for example) --socket-mem 1024,1024 (and also ensure hugepage memory is 
available on both NUMA nodes)?  Should I have the CPU mask also pick cores 
from both sockets?  The documentation wasn't clear on whether this is the 
route to take for multi-NUMA setups, and it had conflicting messages.  It 
claimed that a pmd would be created for each NUMA node by default, but I found 
that only a single thread was created, and it ran on NUMA node 0 (and CPU 0).  
The docs also seem to indicate that memory should be allocated from NUMA node 
0 and that ovs-vswitchd should be launched with that specification...  Is that 
supposed to be the case?
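
To make the question concrete, the kind of invocation I have in mind is 
roughly the following (the EAL core mask, memory split, and PMD mask are only 
an illustration for my dual-socket box, not something the docs prescribe):

  # give DPDK 1 GB of hugepage memory on each NUMA node
  ovs-vswitchd --dpdk -c 0x1 -n 4 --socket-mem 1024,1024 -- \
      unix:/usr/local/var/run/openvswitch/db.sock --pidfile --detach
  # and a PMD mask covering one core per socket, e.g. core 1 on socket 0
  # and core 9 on socket 1 (assuming that is how the cores are numbered here)
  ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x202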

> I submitted a patch with additional information about performance tuning,
> hopefully that will help explain.
> http://openvswitch.org/pipermail/dev/2015-September/059806.html
 
Thank you for the additional tips.  One item I think could be added, and which 
I haven't been able to find, is how to determine where the occasional packet 
drops are happening.  In my testing there is often a small amount of packet 
loss, but no application takes credit (blame?) for it.  The DPDK stats show no 
drops (ovs-appctl dpif-netdev/pmd-stats-show), the VM interface shows no 
drops, and the application claims no drops (using zsend/zcount).  ovs-ofctl 
dump-ports doesn't claim any either...  ethtool on the physical NIC won't work 
since the DPDK UIO driver owns the NIC, so I can't see those stats...  And 
that about exhausts my methods for figuring out where to look for drops...
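
For reference, the exact commands I have been checking are along these lines 
(br0 and eth0 below stand in for my actual bridge name and guest interface):

  # per-PMD statistics from the userspace datapath
  ovs-appctl dpif-netdev/pmd-stats-show
  # per-port rx/tx/error/drop counters as seen by OVS
  ovs-ofctl dump-ports br0
  # inside the guest, the counters on the virtio/vhost-user interface
  ip -s link show eth0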
