On 12/8/2023 2:38 AM, Liangxing Wang wrote:
> The refcnt update of stored mbufs in memif driver is redundant since
> those mbufs are only freed in eth_memif_tx_zc(). No other place
> can free those stored mbufs quietly. So remove the redundant mbuf
> refcnt update in dpdk memif driver to avoid extra heavy cost.
> Performance of dpdk memif zero copy tx is improved with this change.
>
As mentioned above, since free is called only from 'eth_memif_tx_zc()',
this change looks good to me.

Did you measure the performance improvement? If so, can you please share it?

And in addition to being an optimization, this may be a required fix; can you
please check the following case:
- When 'memif_tx_one_zc()' is called, it gets the number of free slots as a
  parameter
- If the mbuf is a chained mbuf, only the first mbuf's reference count is
  increased
- If the number of segments in the mbuf chain is bigger than the number of
  free slots, the function returns 0
- In this error case 'eth_memif_tx_zc()' breaks the sending loop and returns
- In this scenario it is the application's decision to either free the mbuf
  or re-send it. But in this case the application can't free the mbuf because
  of the increased reference count, which may cause a memory leak
- If the application decides to re-send, the reference count is increased
  again; I guess eventually 'memif_free_stored_mbufs()' will decrease the
  refcount to be able to free it

Assuming the above is not done intentionally to make sure all mbufs are sent,
this refcount prevents the application from dropping packets at its
discretion, so your change is required to fix this. Can you please double
check if I am missing anything? (A small application-side sketch of the drop
case is at the end of this mail, after the quoted diff.)

> Signed-off-by: Liangxing Wang <liangxing.w...@arm.com>
> Reviewed-by: Ruifeng Wang <ruifeng.w...@arm.com>
> ---
>  drivers/net/memif/rte_eth_memif.c | 6 ------
>  1 file changed, 6 deletions(-)
>
> diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c
> index 7cc8c0da91..962d390b90 100644
> --- a/drivers/net/memif/rte_eth_memif.c
> +++ b/drivers/net/memif/rte_eth_memif.c
> @@ -265,8 +265,6 @@ memif_free_stored_mbufs(struct pmd_process_private *proc_private, struct memif_q
>  	cur_tail = __atomic_load_n(&ring->tail, __ATOMIC_ACQUIRE);
>  	while (mq->last_tail != cur_tail) {
>  		RTE_MBUF_PREFETCH_TO_FREE(mq->buffers[(mq->last_tail + 1) & mask]);
> -		/* Decrement refcnt and free mbuf. (current segment) */
> -		rte_mbuf_refcnt_update(mq->buffers[mq->last_tail & mask], -1);
>  		rte_pktmbuf_free_seg(mq->buffers[mq->last_tail & mask]);
>  		mq->last_tail++;
>  	}
> @@ -825,10 +823,6 @@ memif_tx_one_zc(struct pmd_process_private *proc_private, struct memif_queue *mq
>  next_in_chain:
>  	/* store pointer to mbuf to free it later */
>  	mq->buffers[slot & mask] = mbuf;
> -	/* Increment refcnt to make sure the buffer is not freed before server
> -	 * receives it. (current segment)
> -	 */
> -	rte_mbuf_refcnt_update(mbuf, 1);
>  	/* populate descriptor */
>  	d0 = &ring->desc[slot & mask];
>  	d0->length = rte_pktmbuf_data_len(mbuf);
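
To make the drop case concrete, here is a rough, hypothetical application-side
sketch (not driver code; 'app_send_or_drop', 'port_id', 'queue_id', 'pkts' and
'nb_pkts' are placeholder names I made up for illustration):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Hypothetical application TX path, sketch only. */
static void
app_send_or_drop(uint16_t port_id, uint16_t queue_id,
		 struct rte_mbuf **pkts, uint16_t nb_pkts)
{
	uint16_t sent = rte_eth_tx_burst(port_id, queue_id, pkts, nb_pkts);

	/* The application decides to drop whatever the driver did not accept. */
	for (uint16_t i = sent; i < nb_pkts; i++) {
		/*
		 * With the extra reference taken on the stored mbuf by
		 * memif_tx_one_zc(), this free only decrements the refcount
		 * and the buffer is not returned to the mempool, which is the
		 * leak described above. With the refcnt update removed, the
		 * mbuf is released here as the application expects.
		 */
		rte_pktmbuf_free(pkts[i]);
	}
}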