> When receiving packets, reserving the fill queue can fail because the
> buffer ring is shared between Tx and Rx and may be temporarily
> unavailable. Eventually both the fill queue and the Rx queue are empty.
> 
> The kernel side is then unable to receive packets because the fill
> queue is empty, and DPDK is unable to replenish the fill queue because
> it has no packets to receive, so the two sides deadlock.
> 
> Fix this by moving the fill queue reservation before
> xsk_ring_cons__peek.
> 
> Signed-off-by: Li RongQing <lirongq...@baidu.com>

Thanks for the fix. I tested and saw no significant performance drop.

Minor: the first line of the commit should read "net/af_xdp: ...."

Acked-by: Ciara Loftus <ciara.lof...@intel.com>

CC-ing stable as I think this fix should be considered for inclusion.

Thanks,
Ciara

> ---
>  drivers/net/af_xdp/rte_eth_af_xdp.c | 7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
> index 7ce4ad04a..2dc9cab27 100644
> --- a/drivers/net/af_xdp/rte_eth_af_xdp.c
> +++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
> @@ -304,6 +304,10 @@ af_xdp_rx_cp(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
>       uint32_t free_thresh = fq->size >> 1;
>       struct rte_mbuf *mbufs[ETH_AF_XDP_RX_BATCH_SIZE];
> 
> +     if (xsk_prod_nb_free(fq, free_thresh) >= free_thresh)
> +             (void)reserve_fill_queue(umem, ETH_AF_XDP_RX_BATCH_SIZE, NULL);
> +
> +
>       if (unlikely(rte_pktmbuf_alloc_bulk(rxq->mb_pool, mbufs, nb_pkts) != 0))
>               return 0;
> 
> @@ -317,9 +321,6 @@ af_xdp_rx_cp(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
>               goto out;
>       }
> 
> -     if (xsk_prod_nb_free(fq, free_thresh) >= free_thresh)
> -             (void)reserve_fill_queue(umem, ETH_AF_XDP_RX_BATCH_SIZE, NULL);
> -
>       for (i = 0; i < rcvd; i++) {
>               const struct xdp_desc *desc;
>               uint64_t addr;
> --
> 2.16.2
