Hi,

> -----Original Message-----
> From: Viacheslav Ovsiienko <viachesl...@nvidia.com>
> Sent: Thursday, February 4, 2021 2:04 PM
> To: dev@dpdk.org
> Cc: Raslan Darawsheh <rasl...@nvidia.com>; Matan Azrad
> <ma...@nvidia.com>; Ori Kam <or...@nvidia.com>; NBU-Contact-Thomas
> Monjalon <tho...@monjalon.net>; sta...@dpdk.org
> Subject: [PATCH] net/mlx5: fix Tx queue size created with DevX
> 
> The number of descriptors specified for queue creation
> implies that the queue should be able to hold the specified
> number of packets being sent. Typically, one packet takes
> one queue descriptor (WQE) to be handled. If the inline data
> option is enabled, one packet might require more WQEs to
> carry the inline data, and the overall queue size (the
> number of queue descriptors) should be adjusted accordingly.
> 
> In the mlx5 PMD, queues can be created either via Verbs, using
> the rdma-core library, or via DevX as a direct kernel/firmware call.
> rdma-core adjusts the queue size internally, depending on the
> TSO and inline settings, but the DevX approach missed this point.
> This caused a queue size discrepancy and performance variations.
> 
> The patch adjusts the Tx queue size for the DevX approach
> in the same way as it is done in the rdma-core implementation.
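
[Editorial note: for readers unfamiliar with the adjustment being described, below is an illustrative sketch, not the actual patch hunk. The constants, segment sizes and helper name are hypothetical; only RTE_ALIGN() and rte_align32pow2() are real DPDK helpers. The real code lives in the mlx5 DevX Tx queue setup and uses the PMD's own constants.]

#include <stdint.h>
#include <rte_common.h>

/* Hypothetical sizes -- the real mlx5 PMD uses its own constants. */
#define WQEBB_SIZE   64u  /* basic building block of a WQE, bytes */
#define WQE_SEG_SIZE 16u  /* control/Ethernet/data segment, bytes  */

/*
 * Return the number of WQE basic blocks the Tx ring must hold so that
 * 'nb_desc' packets still fit when each packet may inline up to
 * 'inline_len' bytes of data and therefore span several basic blocks.
 */
static inline uint32_t
txq_calc_wqebb_cnt(uint16_t nb_desc, uint16_t inline_len)
{
	uint32_t wqe_size;

	/* One packet: control + Ethernet + data segments ... */
	wqe_size = 3 * WQE_SEG_SIZE;
	/* ... plus any data pushed inline into the WQE itself. */
	wqe_size += inline_len;
	/* Round a single packet's WQE up to whole basic blocks. */
	wqe_size = RTE_ALIGN(wqe_size, WQEBB_SIZE);
	/* Size the ring for nb_desc such packets, rounded to a power of two. */
	return rte_align32pow2(nb_desc * (wqe_size / WQEBB_SIZE));
}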
> 
> Fixes: 86d259cec852 ("net/mlx5: separate Tx queue object creations")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Viacheslav Ovsiienko <viachesl...@nvidia.com>
Patch applied to next-net-mlx,

Kindest regards,
Raslan Darawsheh
