The inline data size alignments should also be taken into account, in order to conform to the rdma-core implementation of the send queue size calculation.
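For illustration only (not part of the patch): a minimal standalone sketch of the per-descriptor WQE count math before and after the change, modeling only the inline-data branch (the TSO branch is omitted). The segment and alignment sizes below mirror the mlx5 PRM definitions (cseg/eseg/dseg of 16 bytes each, 16-byte WSEG, 64-byte WQE) but are hardcoded here as assumptions for the sake of the example.

#include <stdio.h>
#include <stdint.h>

#define WSEG_SIZE 16u	/* assumed MLX5_WSEG_SIZE: WQE segment granularity */
#define WQE_SIZE  64u	/* assumed MLX5_WQE_SIZE: basic WQE unit */
#define CSEG_SIZE 16u	/* assumed sizeof(struct mlx5_wqe_cseg) */
#define ESEG_SIZE 16u	/* assumed sizeof(struct mlx5_wqe_eseg) */
#define DSEG_SIZE 16u	/* assumed sizeof(struct mlx5_wqe_dseg) */

/* Round 'v' up to a multiple of 'a' (a power of two), like RTE_ALIGN. */
static unsigned int align_up(unsigned int v, unsigned int a)
{
	return (v + a - 1) & ~(a - 1);
}

/* WQEs per descriptor as computed before the fix: the inline data
 * size is added without any padding to the segment boundary. */
static unsigned int wqes_old(unsigned int inlen_send)
{
	unsigned int wqe_size = CSEG_SIZE + ESEG_SIZE + DSEG_SIZE;
	unsigned int inl = inlen_send + CSEG_SIZE + ESEG_SIZE;

	if (inlen_send && inl > wqe_size)
		wqe_size = inl;
	return align_up(wqe_size, WQE_SIZE) / WQE_SIZE;
}

/* WQEs per descriptor after the fix: the inline data plus its 4-byte
 * inline length field is padded up to the WSEG boundary, matching
 * what rdma-core does when it sizes the send queue. */
static unsigned int wqes_new(unsigned int inlen_send)
{
	unsigned int wqe_size = CSEG_SIZE + ESEG_SIZE + DSEG_SIZE;
	unsigned int inl;

	if (inlen_send) {
		inl = CSEG_SIZE + ESEG_SIZE +
		      align_up(inlen_send + (unsigned int)sizeof(uint32_t),
			       WSEG_SIZE);
		if (inl > wqe_size)
			wqe_size = inl;
	}
	return align_up(wqe_size, WQE_SIZE) / WQE_SIZE;
}

int main(void)
{
	/* With inlen_send == 96 the unaligned math undersizes the SQ:
	 *   old: max(48, 96 + 32) = 128            -> 2 WQEs
	 *   new: max(48, 32 + align(100, 16)) = 144 -> 3 WQEs */
	printf("old: %u WQEs, new: %u WQEs\n", wqes_old(96), wqes_new(96));
	return 0;
}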
Fixes: 7e14d144f2ea ("net/mlx5: fix Tx queue size created with DevX")
Signed-off-by: Viacheslav Ovsiienko <viachesl...@nvidia.com>
---
 drivers/net/mlx5/mlx5_devx.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 1b1a72dd07..e4acab90c8 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -1077,15 +1077,18 @@ mlx5_txq_devx_obj_new(struct rte_eth_dev *dev, uint16_t idx)
 	 * internally in the mlx5_calc_sq_size(), we do the same
 	 * for the queue being created with DevX at this point.
 	 */
-	wqe_size = txq_data->tso_en ? txq_ctrl->max_tso_header : 0;
+	wqe_size = txq_data->tso_en ?
+		   RTE_ALIGN(txq_ctrl->max_tso_header, MLX5_WSEG_SIZE) : 0;
 	wqe_size += sizeof(struct mlx5_wqe_cseg) +
 		    sizeof(struct mlx5_wqe_eseg) +
 		    sizeof(struct mlx5_wqe_dseg);
 	if (txq_data->inlen_send)
-		wqe_size = RTE_MAX(wqe_size, txq_data->inlen_send +
-				   sizeof(struct mlx5_wqe_cseg) +
-				   sizeof(struct mlx5_wqe_eseg));
-	wqe_size = RTE_ALIGN_CEIL(wqe_size, MLX5_WQE_SIZE) / MLX5_WQE_SIZE;
+		wqe_size = RTE_MAX(wqe_size, sizeof(struct mlx5_wqe_cseg) +
+					     sizeof(struct mlx5_wqe_eseg) +
+					     RTE_ALIGN(txq_data->inlen_send +
+						       sizeof(uint32_t),
+						       MLX5_WSEG_SIZE));
+	wqe_size = RTE_ALIGN(wqe_size, MLX5_WQE_SIZE) / MLX5_WQE_SIZE;
 	/* Create Send Queue object with DevX. */
 	wqe_n = RTE_MIN((1UL << txq_data->elts_n) * wqe_size,
 			(uint32_t)priv->sh->device_attr.max_qp_wr);
--
2.18.1