This is Mellanox's roadmap for DPDK 20.02, which we are currently working on.



Enable zero-copy to/from GPUs, storage devices, etc.

* Enable a pinned external buffer with a pktmbuf pool:

* Introduce a new flag in rte_pktmbuf_pool_private to specify that this 
mempool is for mbufs with pinned external buffers.

* This will enable a GPU or a storage device to do zero-copy for the received 
frames.
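
   A minimal sketch of how an application might create such a pool. The 
helper rte_pktmbuf_pool_create_extbuf() and the rte_pktmbuf_extmem 
descriptor below are the shape we are proposing; names may still change 
during the API discussion:

      #include <rte_mbuf.h>
      #include <rte_mempool.h>
      #include <rte_lcore.h>

      /* Create a pktmbuf pool whose data buffers live in a pinned
       * external region (e.g. GPU or storage-device memory). The
       * region is assumed to be already allocated and DMA-mapped
       * by the application. */
      static struct rte_mempool *
      create_pinned_pool(void *va, rte_iova_t iova, size_t len)
      {
          struct rte_pktmbuf_extmem ext_mem = {
              .buf_ptr  = va,   /* virtual address of the region */
              .buf_iova = iova, /* IOVA as seen by the NIC */
              .buf_len  = len,  /* total size of the region */
              .elt_size = 2048 + RTE_PKTMBUF_HEADROOM,
          };

          /* Every mbuf from this pool carries a pinned external
           * buffer; the new flag in rte_pktmbuf_pool_private marks
           * the pool, so the PMD can rely on the buffers never
           * being detached. */
          return rte_pktmbuf_pool_create_extbuf("pinned_rx_pool",
                  8192, 256, 0, ext_mem.elt_size,
                  rte_socket_id(), &ext_mem, 1);
      }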



Preserve the DSCP field on IPv4/IPv6 decapsulation

* Introduce a new rte_flow API to set the DSCP field of IPv4 and IPv6 
headers during decapsulation

   In an overlay network, the DSCP field of the inner header exposed by 
decapsulation may need to be updated to preserve the IP precedence.
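
   As a sketch, assuming the new action lands as 
RTE_FLOW_ACTION_TYPE_SET_IPV4_DSCP with a one-byte configuration (the 
final names are subject to the API discussion), restoring the DSCP right 
after a VXLAN decap could look like:

      #include <rte_flow.h>

      /* Decapsulate VXLAN, then rewrite the DSCP of the exposed
       * inner IPv4 header so the original IP precedence is kept.
       * 0x30 is an illustrative DSCP value. */
      static const struct rte_flow_action_set_dscp set_dscp = {
          .dscp = 0x30,
      };
      static const struct rte_flow_action actions[] = {
          { .type = RTE_FLOW_ACTION_TYPE_VXLAN_DECAP },
          { .type = RTE_FLOW_ACTION_TYPE_SET_IPV4_DSCP,
            .conf = &set_dscp },
          { .type = RTE_FLOW_ACTION_TYPE_END },
      };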



Additions to the mlx5 PMD (ConnectX-5 SmartNIC, BlueField IPU and above):

* Support multiple header modifications in a single flow rule

   With this, a single flow rule can include several IPv6 header 
modification actions.
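
   For instance, one rule could rewrite both IPv6 addresses. A sketch 
using the existing set-address actions (addresses are illustrative):

      #include <rte_flow.h>

      static const struct rte_flow_action_set_ipv6 new_src = {
          .ipv6_addr = { 0x20, 0x01, 0x0d, 0xb8, 0, 0, 0, 0,
                         0, 0, 0, 0, 0, 0, 0, 0x01 },
      };
      static const struct rte_flow_action_set_ipv6 new_dst = {
          .ipv6_addr = { 0x20, 0x01, 0x0d, 0xb8, 0, 0, 0, 0,
                         0, 0, 0, 0, 0, 0, 0, 0x02 },
      };

      /* Two header-modification actions in one action list,
       * i.e. a single flow rule. */
      static const struct rte_flow_action actions[] = {
          { .type = RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC,
            .conf = &new_src },
          { .type = RTE_FLOW_ACTION_TYPE_SET_IPV6_DST,
            .conf = &new_dst },
          { .type = RTE_FLOW_ACTION_TYPE_END },
      };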

* HW offload for finer-grained RSS: hashing only on the source or only on 
the destination fields, for both L3 and L4

   For example, a gateway application where both directions of a flow are 
handled by the same core.
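
   A sketch of the intended use through the existing rte_flow RSS action 
(queue list is illustrative); the peer port would use ETH_RSS_L3_DST_ONLY 
so both directions of a connection hash identically:

      #include <rte_ethdev.h>
      #include <rte_flow.h>

      /* Hash only on the IPv4 source address. */
      static const uint16_t queues[] = { 0, 1, 2, 3 };
      static const struct rte_flow_action_rss rss_conf = {
          .types = ETH_RSS_IPV4 | ETH_RSS_L3_SRC_ONLY,
          .queue_num = 4,
          .queue = queues,
      };
      static const struct rte_flow_action actions[] = {
          { .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss_conf },
          { .type = RTE_FLOW_ACTION_TYPE_END },
      };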

* HW offload for matching on the GTP-U header, specifically on the 
msg_type and teid fields

   With this, traffic can be classified per 4G/5G bearer.
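
   A sketch of such a match, using the existing rte_flow GTP item (the 
TEID value is illustrative; msg_type 0xff is G-PDU):

      #include <rte_flow.h>
      #include <rte_byteorder.h>

      /* Match GTP-U G-PDU packets of one tunnel endpoint (TEID). */
      static const struct rte_flow_item_gtp gtp_spec = {
          .msg_type = 0xff,
          .teid = RTE_BE32(0x1234),
      };
      static const struct rte_flow_item_gtp gtp_mask = {
          .msg_type = 0xff,
          .teid = RTE_BE32(0xffffffff),
      };
      static const struct rte_flow_item pattern[] = {
          { .type = RTE_FLOW_ITEM_TYPE_ETH },
          { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
          { .type = RTE_FLOW_ITEM_TYPE_UDP },
          { .type = RTE_FLOW_ITEM_TYPE_GTP,
            .spec = &gtp_spec, .mask = &gtp_mask },
          { .type = RTE_FLOW_ITEM_TYPE_END },
      };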

* Support a PMD hint not to inline packet data

   This is to support a mixed traffic pattern, where some buffers are in 
local host memory and others in the memory of other devices.
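
   One plausible shape for the hint, built on the existing mbuf 
dynamic-flag mechanism; the flag name "mlx5_noinline" below is 
hypothetical, pending the actual PMD definition:

      #include <rte_mbuf.h>
      #include <rte_mbuf_dyn.h>

      static uint64_t noinline_flag;

      /* Look up a PMD-registered dynamic flag that tells the driver
       * not to inline this packet's data into the descriptor (the
       * buffer may live in device memory the CPU should not read). */
      static int
      init_noinline_flag(void)
      {
          int bit = rte_mbuf_dynflag_lookup("mlx5_noinline", NULL);

          if (bit < 0)
              return -1; /* PMD did not register the flag */
          noinline_flag = 1ULL << bit;
          return 0;
      }

      /* Before tx, mark mbufs whose buffers come from another device. */
      static inline void
      mark_no_inline(struct rte_mbuf *m)
      {
          m->ol_flags |= noinline_flag;
      }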



Reduce memory consumption in the mlx5 PMD

* Change the implementation of rte_eth_dev_stop()/rte_eth_dev_start(), 
which currently caches flow rules, to a non-cached implementation that 
frees all software and hardware resources of the created flows on stop.
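
   The visible consequence for applications: with the non-cached 
implementation, rules do not survive a stop/start cycle and have to be 
re-created after the port is started again. A sketch, where 
create_app_flows() stands in for the application's own rule setup:

      #include <rte_ethdev.h>

      static int create_app_flows(uint16_t port_id); /* hypothetical */

      static int
      restart_port(uint16_t port_id)
      {
          rte_eth_dev_stop(port_id); /* flow resources are freed */
          if (rte_eth_dev_start(port_id) < 0)
              return -1;
          return create_app_flows(port_id);
      }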



Support the full ConnectX-5 feature set, including full HW offload 
functionality and performance, on ConnectX-6 Dx



Behavior change on rte_flow encap/decap actions

* The ECN field will always be copied from the inner header to the outer 
header on encap, and vice versa on decap

   This is important for easily supporting congestion control algorithms 
that inspect the ECN bits.

   One example is RoCE congestion control.



Introducing a new mlx5 PMD for vDPA (ConnectX-6 Dx, BlueField IPU and above):

* Add a new mlx5 PMD, mlx5_vdpa, to support vhost Data Path Acceleration 
(vDPA)

* mlx5_vdpa can run on top of PCI devices - VFs or a PF

* Based on the PCI device devargs specified by the user, the driver's 
probe function will choose either mlx5 or mlx5_vdpa
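
   For example (the "class" devargs key is an assumption about the final 
syntax), the same VF could be probed by either driver:

      -w 0000:03:00.2,class=vdpa   # probe with mlx5_vdpa
      -w 0000:03:00.2,class=net    # probe with the mlx5 netdev PMD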

