The requirements for rte_eth_tx_burst(), which in the case of ixgbe calls a driver-specific function, are these two:

"It is the responsibility of the rte_eth_tx_burst() function to transparently free the memory buffers of packets previously sent. This feature is driven by the *tx_free_thresh* value supplied to the rte_eth_dev_configure() function at device configuration time. When the number of previously sent packets reached the "minimum transmit packets to free" threshold, the rte_eth_tx_burst() function must [attempt to] free the *rte_mbuf* buffers of those packets whose transmission was effectively completed."

Also, rte_eth_tx_queue_setup() uses the same description for tx_free_thresh:

"The *tx_free_thresh* value indicates the [minimum] number of network buffers that must be pending in the transmit ring to trigger their [implicit] freeing by the driver transmit function."

All the other poll mode drivers use this formula as well. I've also described a possible hang situation in the commit message; the sketch below illustrates it.
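A minimal, stand-alone sketch of that scenario; the ring size, threshold and mempool size are made-up example values, chosen only so that tx_free_thresh + pool size < nb_tx_desc:

/* Illustration only, not part of the patch. All three values below are
 * invented examples satisfying tx_free_thresh + pool_size < nb_tx_desc. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	const uint16_t nb_tx_desc = 512;      /* TX ring size */
	const uint16_t tx_free_thresh = 32;
	const uint16_t pool_size = 256;       /* 32 + 256 < 512 */

	/* Worst case: every mbuf of the pool is in flight in the TX ring,
	 * so the application cannot allocate any more packets. */
	uint16_t nb_tx_free = nb_tx_desc - pool_size;    /* 256 */

	/* Check as currently coded in ixgbe: 256 < 32 is false, so
	 * ixgbe_tx_free_bufs() is never called and nothing gets freed. */
	printf("old check fires: %d\n", nb_tx_free < tx_free_thresh);

	/* Check following the rte_eth_tx_burst() description ("number of
	 * previously sent packets reached the threshold"): 256 > 32 is
	 * true, so the completed buffers would be released. */
	printf("new check fires: %d\n",
	       (nb_tx_desc - nb_tx_free) > tx_free_thresh);

	return 0;
}

With the old condition the free path is never taken, the pool stays empty, and the application cannot make progress; with the condition matching the documentation the completed buffers are returned to the pool.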
"It is the responsibility of the rte_eth_tx_burst() function to transparently free the memory buffers of packets previously sent. This feature is driven by the *tx_free_thresh* value supplied to the rte_eth_dev_configure() function at device configuration time. When the number of previously sent packets reached the "minimum transmit packets to free" threshold, the rte_eth_tx_burst() function must [attempt to] free the *rte_mbuf* buffers of those packets whose transmission was effectively completed." Also rte_eth_tx_queue_setup() uses the same description for tx_free_thresh: "The *tx_free_thresh* value indicates the [minimum] number of network buffers that must be pending in the transmit ring to trigger their [implicit] freeing by the driver transmit function." And all the other poll mode drivers are using this formula. Plus I've described a possible hang situation in the commit message. On 28/05/15 11:50, Venkatesan, Venky wrote: > NAK. This causes more (unsuccessful) cleanup attempts on the descriptor ring. > What is motivating this change? > > Regards, > Venky > > >> On May 28, 2015, at 1:42 AM, Zoltan Kiss <zoltan.kiss at linaro.org> wrote: >> >> This check doesn't do what's required by rte_eth_tx_burst: >> "When the number of previously sent packets reached the "minimum transmit >> packets to free" threshold" >> >> This can cause problems when txq->tx_free_thresh + [number of elements in the >> pool] < txq->nb_tx_desc. >> >> Signed-off-by: Zoltan Kiss <zoltan.kiss at linaro.org> >> --- >> drivers/net/ixgbe/ixgbe_rxtx.c | 4 ++-- >> drivers/net/ixgbe/ixgbe_rxtx_vec.c | 2 +- >> 2 files changed, 3 insertions(+), 3 deletions(-) >> >> diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c >> index 4f9ab22..b70ed8c 100644 >> --- a/drivers/net/ixgbe/ixgbe_rxtx.c >> +++ b/drivers/net/ixgbe/ixgbe_rxtx.c >> @@ -250,10 +250,10 @@ tx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, >> >> /* >> * Begin scanning the H/W ring for done descriptors when the >> - * number of available descriptors drops below tx_free_thresh. For >> + * number of in flight descriptors reaches tx_free_thresh. For >> * each done descriptor, free the associated buffer. >> */ >> - if (txq->nb_tx_free < txq->tx_free_thresh) >> + if ((txq->nb_tx_desc - txq->nb_tx_free) > txq->tx_free_thresh) >> ixgbe_tx_free_bufs(txq); >> >> /* Only use descriptors that are available */ >> diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec.c >> b/drivers/net/ixgbe/ixgbe_rxtx_vec.c >> index abd10f6..f91c698 100644 >> --- a/drivers/net/ixgbe/ixgbe_rxtx_vec.c >> +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec.c >> @@ -598,7 +598,7 @@ ixgbe_xmit_pkts_vec(void *tx_queue, struct rte_mbuf >> **tx_pkts, >> if (unlikely(nb_pkts > RTE_IXGBE_VPMD_TX_BURST)) >> nb_pkts = RTE_IXGBE_VPMD_TX_BURST; >> >> - if (txq->nb_tx_free < txq->tx_free_thresh) >> + if ((txq->nb_tx_desc - txq->nb_tx_free) > txq->tx_free_thresh) >> ixgbe_tx_free_bufs(txq); >> >> nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts); >> -- >> 1.9.1 >>