Hi Yong,

We didn't run unit tests with the testpmd tool; the validation tests were done
using our DPDK application in the following topology:


+------------------------------+
| +-----------+  +-----------+ |
| | Linux VM1 |  | Linux VM2 | |
| +------+----+  +----+------+ |
|       VMware DvSwitch        |
|     +--+------------+--+     |
|     |  +---OVSbr0---+  |     |
|     |                  |     |
|     |  6WIND DPDK app  |     |
|     +------------------+     |
|      VMware ESXi 6.0/6.5     |
+------------------------------+



All available offloads are enabled in Linux VM1 and VM2.
Iperf TCP traffic is started from Linux VM1 to Linux VM2.


With ESXi 6.0 (vHW 11), we got the following numbers using 2 cores for our DPDK app:
- with LRO enabled on the DPDK app ports: 21 Gbps
- with LRO disabled on the DPDK app ports: 9 Gbps


With ESXi 6.5 (vHW 13), we got the following numbers using 2 cores for our DPDK app:
- with LRO enabled on the DPDK app ports: 40 Gbps
- with LRO disabled on the DPDK app ports: 20 Gbps
(see the sketch below for how LRO is requested on a port)
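
For reference, LRO is toggled per port in our application. Below is a minimal
sketch of how a port can be configured with or without the TCP LRO Rx offload;
this is not the actual 6WIND application code, the function name and queue
counts are placeholders, and it uses the rte_ethdev offload flag names of this
DPDK era:

#include <string.h>

#include <rte_ethdev.h>

static int
configure_port(uint16_t port_id, uint16_t nb_queues, int enable_lro)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_conf conf;

	memset(&conf, 0, sizeof(conf));
	rte_eth_dev_info_get(port_id, &dev_info);

	/* Request TCP LRO only if the PMD (vmxnet3 here) reports support. */
	if (enable_lro && (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TCP_LRO))
		conf.rxmode.offloads |= DEV_RX_OFFLOAD_TCP_LRO;

	/* One Rx/Tx queue per polling core; 2 in the runs above. */
	return rte_eth_dev_configure(port_id, nb_queues, nb_queues, &conf);
}

With enable_lro set to 0, the port is configured without DEV_RX_OFFLOAD_TCP_LRO,
which corresponds to the "LRO disabled" numbers above.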


Didier

On 04/13/2018 06:44 AM, Yong Wang wrote:
On 3/28/18, 8:44 AM, "dev on behalf of Didier Pallard" <dev-boun...@dpdk.org on behalf of didier.pall...@6wind.com> wrote:

     This patchset fixes several issues found in the vmxnet3 driver
     when enabling LRO offload support:
     - Rx offload information is not correctly gathered for
       multi-segmented packets, leading to inconsistent
       packet type and Rx offload bits in the resulting mbuf.
     - MSS recovery from offload information is not done,
       thus LRO mbufs do not contain a correct tso_segsz value.
     - The MSS value is not propagated by the host on some
       hypervisor versions (6.0 for example).
     - If two small TCP segments are aggregated in a single
       mbuf, an empty segment that only contains offload
       information is appended to this segment and is
       propagated as is to the application. But if the application
       sends an mbuf with an empty segment back to the
       hypervisor, this mbuf is dropped by the hypervisor
       (see the illustrative sketch after this list).
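
     To make the last point concrete, here is a minimal sketch of the kind
     of cleanup involved: stripping zero-length segments from an mbuf chain
     before it is handed back to the hypervisor. This is an illustration
     only, not the code from these patches, and the helper name is
     hypothetical:

#include <rte_mbuf.h>

static void
strip_empty_segments(struct rte_mbuf *head)
{
	struct rte_mbuf *seg = head;

	while (seg->next != NULL) {
		if (seg->next->data_len == 0) {
			struct rte_mbuf *empty = seg->next;

			/* Unlink the zero-length segment and free it.
			 * pkt_len is unchanged since it carried no data. */
			seg->next = empty->next;
			head->nb_segs--;
			empty->next = NULL;
			rte_pktmbuf_free_seg(empty);
		} else {
			seg = seg->next;
		}
	}
}
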
Didier Pallard (8):
       net: export IPv6 header extensions skip function
       net/vmxnet3: return unknown IPv4 extension len ptype
       net/vmxnet3: gather offload data on first and last segment
       net/vmxnet3: fix Rx offload information in multiseg packets
       net/vmxnet3: complete Rx offloads support
       net/vmxnet3: guess mss if not provided in LRO mode
       net/vmxnet3: ignore empty segments in reception
       net/vmxnet3: skip empty segments in transmission
      drivers/net/vmxnet3/Makefile            |   1 +
      drivers/net/vmxnet3/base/vmxnet3_defs.h |  27 ++++-
      drivers/net/vmxnet3/vmxnet3_ethdev.c    |   2 +
      drivers/net/vmxnet3/vmxnet3_ethdev.h    |   1 +
      drivers/net/vmxnet3/vmxnet3_rxtx.c      | 200 ++++++++++++++++++++++++++------
      lib/librte_net/Makefile                 |   1 +
      lib/librte_net/rte_net.c                |  21 ++--
      lib/librte_net/rte_net.h                |  27 +++++
      lib/librte_net/rte_net_version.map      |   1 +
      9 files changed, 238 insertions(+), 43 deletions(-)
--
     2.11.0
Didier, the changes look good overall. Can you describe how you tested this patch set, as well as how you made sure there is no regression in the non-LRO case?

