From: "Patryk Ochal (Redge Technologies)" <patrykoc...@gmail.com>

If the vectorized Rx burst function runs short on available mbufs,
CQ processing may write past the end of the Rx software ring.

This happens because rxq_cq_process_v() populates the software ring
and accesses mbufs before validating the associated CQEs. If too few
mbufs are available, this can result in out-of-bounds access.

This patch adds a limit to ensure CQ processing does not exceed the
number of mbufs that have actually been replenished and posted.

Fixes: 03e0868b4cd7 ("net/mlx5: fix deadlock due to buffered slots in Rx SW ring")
Cc: ys...@mellanox.com
Cc: sta...@dpdk.org

Signed-off-by: Patryk Ochal (Redge Technologies) <patrykoc...@gmail.com>
---
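Note for reviewers: below is a minimal, self-contained sketch of the clamping
order after this change, not the driver code itself. The helper clamp_burst()
is hypothetical, MLX5_VPMD_DESCS_PER_LOOP is defined locally as a placeholder
for the driver's value, and the field widths are simplified to uint16_t. It
assumes the semantics described above: rq_ci counts posted mbufs, rq_pi counts
consumed ones, so (rq_ci - rq_pi) is the number of replenished slots that are
safe to process.

    #include <stdint.h>
    #include <rte_common.h>   /* RTE_MIN, RTE_ALIGN_FLOOR */

    /* Placeholder; the real value comes from the mlx5 driver headers. */
    #define MLX5_VPMD_DESCS_PER_LOOP 4

    /* Hypothetical helper: how the burst size is bounded before the
     * vectorized CQ processing loop touches the software ring. */
    static inline uint16_t
    clamp_burst(uint16_t pkts_n, uint16_t rcvd_pkt,
                uint16_t rq_ci, uint16_t rq_pi,
                uint16_t q_n, uint16_t elts_idx, uint16_t cq_idx)
    {
            uint16_t n = pkts_n - rcvd_pkt;

            /* Never process more CQEs than mbufs actually replenished/posted. */
            n = RTE_MIN(n, (uint16_t)(rq_ci - rq_pi));
            /* Keep the count a multiple of the vector batch size. */
            n = RTE_ALIGN_FLOOR(n, MLX5_VPMD_DESCS_PER_LOOP);
            /* Do not cross the element ring or completion queue end. */
            n = RTE_MIN(n, (uint16_t)(q_n - elts_idx));
            n = RTE_MIN(n, (uint16_t)(q_n - cq_idx));
            return n;
    }
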
 drivers/net/mlx5/mlx5_rxtx_vec.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.c b/drivers/net/mlx5/mlx5_rxtx_vec.c
index 2363d7ed27..67a1e168d8 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.c
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.c
@@ -320,8 +320,10 @@ rxq_burst_v(struct mlx5_rxq_data *rxq, struct rte_mbuf **pkts,
        }
        elts_idx = rxq->rq_pi & e_mask;
        elts = &(*rxq->elts)[elts_idx];
+       /* Not to move past the allocated mbufs. */
+       pkts_n = RTE_MIN(pkts_n - rcvd_pkt, rxq->rq_ci - rxq->rq_pi);
        /* Not to overflow pkts array. */
-       pkts_n = RTE_ALIGN_FLOOR(pkts_n - rcvd_pkt, MLX5_VPMD_DESCS_PER_LOOP);
+       pkts_n = RTE_ALIGN_FLOOR(pkts_n, MLX5_VPMD_DESCS_PER_LOOP);
        /* Not to cross queue end. */
        pkts_n = RTE_MIN(pkts_n, q_n - elts_idx);
        pkts_n = RTE_MIN(pkts_n, q_n - cq_idx);
-- 
2.30.2
