> -----Original Message-----
> From: Andrey Korolyov [mailto:andrey at xdel.ru]
> Sent: Monday, February 2, 2015 10:53 AM
> To: dev at dpdk.org
> Cc: discuss at openvswitch.org; Traynor, Kevin
> Subject: Re: Packet drops during non-exhaustive flood with OVS and 1.8.0
>
> On Thu, Jan 22, 2015 at 8:11 PM, Andrey Korolyov <andrey at xdel.ru> wrote:
> > On Wed, Jan 21, 2015 at 8:02 PM, Andrey Korolyov <andrey at xdel.ru> wrote:
> >> Hello,
> >>
> >> I observed that the latest OVS with dpdk-1.8.0 and igb_uio starts to
> >> drop packets earlier than a regular Linux ixgbe 10G interface does;
> >> the setup is as follows:
> >>
> >> receiver/forwarder:
> >> - 8-core/2-socket system with E5-2603v2, cores 1-3 given to OVS
> >>   exclusively
> >> - n-dpdk-rxqs=6, rx scattering is not enabled
> >> - X520-DA
> >> - 3.10/3.18 host kernel
> >> - during 'legacy mode' testing, queue interrupts are scattered across
> >>   all cores
> >>
> >> sender:
> >> - 16-core E5-2630, netmap framework for packet generation
> >> - pkt-gen -f tx -i eth2 -s 10.6.9.0-10.6.9.255 -d
> >>   10.6.10.0-10.6.10.255 -S 90:e2:ba:84:19:a0 -D 90:e2:ba:85:06:07 -R
> >>   11000000, which produces a constant 11 Mpps flood of 60-byte packets
> >>   for the duration of the test.
> >>
> >> OVS contains only a single drop rule at the moment:
> >> ovs-ofctl add-flow br0 in_port=1,actions=DROP
> >>
> >> The packet generator was run for tens of seconds against both the
> >> Linux stack and the OVS+DPDK setup. The former showed a zero
> >> drop/error count on the interface, and the pktgen and host interface
> >> counters matched (i.e., none of the generated packets went
> >> unaccounted for).
> >>
> >> I selected a rate of about 11M because OVS starts to drop packets
> >> around this value; after the same short test the interface stats show
> >> the following:
> >>
> >> statistics : {collisions=0, rx_bytes=22003928768,
> >> rx_crc_err=0, rx_dropped=0, rx_errors=10694693, rx_frame_err=0,
> >> rx_over_err=0, rx_packets=343811387, tx_bytes=0, tx_dropped=0,
> >> tx_errors=0, tx_packets=0}
> >>
> >> pktgen side:
> >> Sent 354506080 packets, 60 bytes each, in 32.23 seconds.
> >> Speed: 11.00 Mpps Bandwidth: 5.28 Gbps (raw 7.39 Gbps)
> >>
> >> If the rate is increased to 13-14 Mpps, the error/total ratio rises
> >> to about one third. OVS on DPDK has otherwise shown excellent
> >> results, and I do not want to reject this solution because of
> >> behavior like the one described, so I am open to any suggestions for
> >> improving the situation (except using the 1.7 branch :) ).
> >
> > At a glance it looks like there is a problem with the pmd threads:
> > they start to consume about five thousandths of sys% on their
> > dedicated cores during the flood, but in theory they should not. Any
> > ideas for debugging/improving this situation are very welcome!
>
> Since my last message I have tried a couple of different
> configurations, but packet loss starts to happen as early as 7-8 Mpps.
> It looks like the bulk processing that was present in the OVS-DPDK
> distribution is missing from this series of patches
> (http://openvswitch.org/pipermail/dev/2014-December/049722.html,
> http://openvswitch.org/pipermail/dev/2014-December/049723.html).
> Before implementing this, I would like to know whether there are any
> obvious (though not to me, unfortunately) clues about this performance
> issue.
These patches only enable DPDK 1.8; they don't change packet handling. What 'bulk processing' are you referring to? By default there is a batch size of 192 in netdev-dpdk for rx from the NIC - the linked patches don't change this, just the DPDK version (a sketch of the kind of rx loop involved follows below).

The main things to consider are to use isolcpus, pin the pmd thread, and keep everything on one NUMA socket; a minimal checklist also follows below. At 11 Mpps without packet loss on that processor, I suspect you are doing those things already.

> > Thanks!
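
To illustrate the batch-size point, here is a minimal sketch of such an rx polling loop, assuming only the standard DPDK rte_eth_rx_burst()/rte_pktmbuf_free() APIs; BATCH_SIZE, poll_and_drop() and the port/queue ids are illustrative names, not the actual netdev-dpdk code:

    #include <stdint.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BATCH_SIZE 192  /* illustrative; mirrors the default rx batch above */

    /* Poll one rx queue and free everything received, mimicking the
     * in_port=1,actions=DROP flow used in the test. */
    static void
    poll_and_drop(uint8_t port_id, uint16_t queue_id)
    {
        struct rte_mbuf *pkts[BATCH_SIZE];

        for (;;) {
            /* Returns up to BATCH_SIZE packets per poll; larger batches
             * amortize the fixed per-poll cost of the PMD. */
            uint16_t nb = rte_eth_rx_burst(port_id, queue_id, pkts,
                                           BATCH_SIZE);

            for (uint16_t i = 0; i < nb; i++) {
                rte_pktmbuf_free(pkts[i]);
            }
        }
    }

The point of the large batch is that the fixed cost of each poll is spread over many packets, which is what keeps a single pmd thread ahead of an 11 Mpps arrival rate.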
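
And the checklist itself. This assumes your OVS build exposes the pmd-cpu-mask knob (option names vary between versions), and the PCI address is only a placeholder:

    # kernel cmdline: keep the host scheduler off the OVS cores 1-3
    isolcpus=1-3

    # pin the pmd threads to cores 1-3 (bitmask 0xE = binary 1110)
    ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0xE

    # confirm the NIC sits on the same NUMA node as the pmd cores
    # and the hugepage memory (0000:04:00.0 is an example BDF)
    cat /sys/bus/pci/devices/0000:04:00.0/numa_node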