Refill all consumed descriptors instead of refilling a fixed number of descriptors every time. Also, lower rx-free-thresh to refill more aggressively. This results in fewer packet drops for the DQO queue format.
Signed-off-by: Rushil Gupta <rush...@google.com>
Reviewed-by: Joshua Washington <joshw...@google.com>
---
 drivers/net/gve/gve_ethdev.h | 2 +-
 drivers/net/gve/gve_rx_dqo.c | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h
index c9bcfa553c..fe7095ed76 100644
--- a/drivers/net/gve/gve_ethdev.h
+++ b/drivers/net/gve/gve_ethdev.h
@@ -27,7 +27,7 @@
 #define PCI_MSIX_FLAGS 2 /* Message Control */
 #define PCI_MSIX_FLAGS_QSIZE 0x07FF /* Table size */
 
-#define GVE_DEFAULT_RX_FREE_THRESH 512
+#define GVE_DEFAULT_RX_FREE_THRESH 64
 #define GVE_DEFAULT_TX_FREE_THRESH 32
 #define GVE_DEFAULT_TX_RS_THRESH 32
 #define GVE_TX_MAX_FREE_SZ 512
diff --git a/drivers/net/gve/gve_rx_dqo.c b/drivers/net/gve/gve_rx_dqo.c
index 236aefd2a8..7e7ddac48e 100644
--- a/drivers/net/gve/gve_rx_dqo.c
+++ b/drivers/net/gve/gve_rx_dqo.c
@@ -12,8 +12,8 @@ gve_rx_refill_dqo(struct gve_rx_queue *rxq)
 {
 	volatile struct gve_rx_desc_dqo *rx_buf_ring;
 	volatile struct gve_rx_desc_dqo *rx_buf_desc;
-	struct rte_mbuf *nmb[rxq->free_thresh];
-	uint16_t nb_refill = rxq->free_thresh;
+	struct rte_mbuf *nmb[rxq->nb_rx_hold];
+	uint16_t nb_refill = rxq->nb_rx_hold;
 	uint16_t nb_desc = rxq->nb_rx_desc;
 	uint16_t next_avail = rxq->bufq_tail;
 	struct rte_eth_dev *dev;
-- 
2.42.0.459.ge4e396fd5e-goog