Hello, All.

I've run into a really bad situation with packet drops at small
packet rates (~45 Kpps) while using an XL710 NIC with the i40e DPDK driver.

The issue was found while testing the PHY-VM-PHY scenario with OVS and
confirmed in the PHY-PHY scenario with testpmd.

DPDK version 16.07 was used in all cases.
XL710 firmware-version: f5.0.40043 a1.5 n5.04 e2505

Test description (PHY-PHY):

        * The following cmdline was used:

            # n_desc=2048
            # ./testpmd -c 0xf -n 2 --socket-mem=8192,0 -w 0000:05:00.0 -v \
                        -- --burst=32 --txd=${n_desc} --rxd=${n_desc} \
                        --rxq=1 --txq=1 --nb-cores=1 \
                        --eth-peer=0,a0:00:00:00:00:00 --forward-mode=mac

        * The DPDK-Pktgen application was used as a traffic generator.
          A single flow was generated.

Results:

        * Packet size: 128B, rate: 90% of 10Gbps (~7.5 Mpps):

          On the generator's side:

          Total counts:
                Tx    :      759034368 packets
                Rx    :      759033239 packets
                Lost  :           1129 packets

          Average rates:
                Tx    :        7590344 pps
                Rx    :        7590332 pps
                Lost  :             11 pps

          All of these dropped packets are counted as RX-dropped on testpmd's side:

          +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
          RX-packets: 759033239      RX-dropped: 1129          RX-total: 759034368
          TX-packets: 759033239      TX-dropped: 0             TX-total: 759033239
          ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

          At the same time, a 10G NIC with the IXGBE driver works perfectly,
          without any packet drops, in the same scenario.

The situation is much worse in the PHY-VM-PHY scenario with OVS:

        * The testpmd application is used inside the guest to forward incoming
          packets (almost the same cmdline as for PHY-PHY).

        * Packet size: 256B, rate: 1% of 10Gbps (~45 Kpps):

          Total counts:
                Tx    :        1358112 packets
                Rx    :        1357990 packets
                Lost  :            122 packets

          Average rates:
                Tx    :          45270 pps
                Rx    :          45266 pps
                Lost  :              4 pps

          All 122 dropped packets can be found in the rx_dropped counter:

            # ovs-vsctl get interface dpdk0 statistics:rx_dropped
            122

          And again, there are no issues with IXGBE in the exact same scenario.


Results of my investigation:

        * I found that all of these packets are counted as 'imissed'. This means
          that the RX descriptor ring was overflowed.

        * I've modified the i40e driver to check the real number of free
          descriptors that have not yet been filled by the NIC and found that
          the HW fills RX descriptors at an uneven rate. It looks like it fills
          them in huge batches. (A simplified application-side version of this
          check is sketched right after this list.)

        * So, the root cause of the packet drops with the XL710 is the somehow
          uneven rate at which the NIC fills the HW RX descriptors. This leads to
          exhaustion of RX descriptors and packet drops by the hardware. The 10G
          IXGBE NIC works more smoothly, and its driver is able to refill the HW
          ring with RX descriptors in time.

        * The issue becomes worse with OVS because of the much bigger latencies
          between 'rte_eth_rx_burst()' calls.
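
For reference, below is a minimal sketch (not my actual i40e modification) of how
the same information can be observed from the application side with the generic
ethdev API. The port/queue numbers and the ring size are assumptions matching the
testpmd cmdline above, and 'rte_eth_rx_queue_count()' only works if the PMD
implements the rx_queue_count callback:

    #include <stdio.h>
    #include <inttypes.h>
    #include <rte_ethdev.h>

    #define PORT_ID  0      /* assumption: the XL710 port under test */
    #define QUEUE_ID 0      /* assumption: single RX queue, as above */
    #define N_DESC   2048   /* must match the configured --rxd value */

    /* Print how full the HW RX ring is and how many packets the NIC has
     * already dropped for lack of a free descriptor. */
    static void
    check_rx_pressure(void)
    {
        struct rte_eth_stats stats;
        int used;

        /* 'imissed' counts packets dropped by the HW because no free RX
         * descriptor was available, i.e. the ring overflowed. */
        rte_eth_stats_get(PORT_ID, &stats);
        printf("imissed: %" PRIu64 "\n", stats.imissed);

        /* Descriptors already filled by the NIC and not yet consumed by
         * rte_eth_rx_burst(). Negative if the PMD doesn't implement the
         * rx_queue_count callback. */
        used = rte_eth_rx_queue_count(PORT_ID, QUEUE_ID);
        if (used >= 0)
            printf("rx ring: %d used, %d free\n", used, N_DESC - used);
    }

Calling such a check between 'rte_eth_rx_burst()' invocations is how the uneven,
batched descriptor fill shows up.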

The easiest solution to this problem is to increase the number of RX descriptors.
Increasing it up to 4096 eliminates packet drops but decreases the performance a lot:

        For the OVS PHY-VM-PHY scenario: by 10%
        For the OVS PHY-PHY scenario: by 20%
        For the testpmd PHY-PHY scenario: by 17% (22.1 Mpps --> 18.2 Mpps for 64B packets)

As a result, we have a trade-off between a zero drop rate at small packet rates
and higher maximum performance, which is very sad.
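
For completeness, in a DPDK application the RX ring size is simply the nb_rx_desc
argument passed to 'rte_eth_rx_queue_setup()' (testpmd forwards the --rxd value
there). A minimal sketch of bumping it to 4096; the port id and mempool are
placeholders:

    #include <rte_ethdev.h>
    #include <rte_mempool.h>

    /* Sketch only: the relevant change is the larger nb_rx_desc value. */
    static int
    setup_rx_queue(uint8_t port_id, struct rte_mempool *mp)
    {
        const uint16_t nb_rx_desc = 4096;   /* was 2048 */

        return rte_eth_rx_queue_setup(port_id, 0 /* queue id */, nb_rx_desc,
                                      rte_eth_dev_socket_id(port_id),
                                      NULL /* default rx_conf */, mp);
    }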

Using 16B descriptors (CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC) doesn't really help
with performance.
Upgrading the firmware from version 4.4 to 5.04 didn't help with the drops.

Any thoughts? Can anyone reproduce this?

Best regards, Ilya Maximets.
