Hi Yuan,

> -----Original Message-----
> From: Wang, YuanX <yuanx.w...@intel.com>
> Sent: Monday, October 24, 2022 11:15 PM
> To: Maxime Coquelin <maxime.coque...@redhat.com>; Xia, Chenbo
> <chenbo....@intel.com>
> Cc: dev@dpdk.org; Hu, Jiayu <jiayu...@intel.com>; Jiang, Cheng1
> <cheng1.ji...@intel.com>; Ma, WenwuX <wenwux...@intel.com>; He, Xingguang
> <xingguang...@intel.com>; Wang, YuanX <yuanx.w...@intel.com>
> Subject: [PATCH v5] net/vhost: support asynchronous data path
> 
> Vhost asynchronous data-path offloads packet copy from the CPU
> to the DMA engine. As a result, large packet copy can be accelerated
> by the DMA engine, and vhost can free CPU cycles for higher level
> functions.
> 
> In this patch, we enable the asynchronous data path for vhost PMD.
> The asynchronous data path is enabled per tx/rx queue, and users need
> to specify the DMA device used by a tx/rx queue. Each tx/rx queue
> can use only one DMA device, but one DMA device can be shared
> among multiple tx/rx queues of different vhost PMD ports.
> 
> Two PMD parameters are added:
> - dmas:       specify the used DMA device for a tx/rx queue.
>       (Default: no queues enable asynchronous data path)
> - dma-ring-size: DMA ring size.
>       (Default: 4096).
> 
> Here is an example:
> --vdev
> 'eth_vhost0,iface=./s0,dmas=[txq0@0000:00.01.0;rxq0@0000:00.01.1],dma-
> ring-size=4096'
> 
> Signed-off-by: Jiayu Hu <jiayu...@intel.com>
> Signed-off-by: Yuan Wang <yuanx.w...@intel.com>
> Signed-off-by: Wenwu Ma <wenwux...@intel.com>
> 

Sorry, I just realized that we need to update the release notes, since this
is a new feature for the vhost PMD. Please mention the async support and the
new driver API you added.
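As an aside, for anyone following this thread: the 'dmas' devarg in your
example is a ';'-separated list of queue@DMA-address entries. A tiny parser
sketch below illustrates the format only; it is my illustration, not the
patch's actual devarg parsing code, and the helper name is hypothetical.

```python
import re

def parse_dmas(value):
    """Illustrative only: turn a dmas devarg string such as
    "[txq0@0000:00.01.0;rxq0@0000:00.01.1]" into a
    {(direction, queue_id): dma_address} mapping."""
    if not (value.startswith("[") and value.endswith("]")):
        raise ValueError("dmas value must be bracketed, e.g. [txq0@addr]")
    mapping = {}
    for entry in value[1:-1].split(";"):
        # Each entry is (tx|rx)q<N>@<DMA device address>.
        m = re.fullmatch(r"(tx|rx)q(\d+)@(\S+)", entry)
        if m is None:
            raise ValueError(f"bad dmas entry: {entry!r}")
        mapping[(m.group(1), int(m.group(2)))] = m.group(3)
    return mapping

print(parse_dmas("[txq0@0000:00.01.0;rxq0@0000:00.01.1]"))
```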

Thanks,
Chenbo