On 6/28/2024 10:01 PM, Mihai Brodschi wrote:
> rte_pktmbuf_alloc_bulk is called by the zero-copy receiver to allocate
> new mbufs to be provided to the sender. The allocated mbuf pointers
> are stored in a ring, but the alloc function doesn't implement index
> wrap-around, so it writes past the end of the array. This results in
> memory corruption and duplicate mbufs being received.
>
Hi Mihai,

I am not sure writing past the ring actually occurs. As far as I can see,
the intention is to keep the ring as full as possible: initially, when
'head' and 'tail' are both 0, the whole ring is filled. Later, 'tail'
moves and the emptied space is filled again, so 'head' (in modulo) is
always just behind 'tail' after a refill. On the next run, the refill
only covers the part that 'tail' has moved past, and that is what
'n_slots' is calculated from. As this is only the size of the gap,
starting from 'head' (with modulo) shouldn't pass the ring length.

Do you observe this issue in practice? If so, can you please provide a
backtrace and the numbers showing how to reproduce the issue?

> Allocate 2x the space for the mbuf ring, so that the alloc function
> has a contiguous array to write to, then copy the excess entries
> to the start of the array.
>

Even if the issue is valid, I am not sure about the solution of doubling
the buffer memory, but let's confirm the issue first before discussing
the solution.

> Fixes: 43b815d88188 ("net/memif: support zero-copy slave")
> Cc: sta...@dpdk.org
> Signed-off-by: Mihai Brodschi <mihai.brods...@broadcom.com>
> ---
> v2:
>  - fix email formatting
>
> ---
>  drivers/net/memif/rte_eth_memif.c | 10 +++++++++-
>  1 file changed, 9 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c
> index 16da22b5c6..3491c53cf1 100644
> --- a/drivers/net/memif/rte_eth_memif.c
> +++ b/drivers/net/memif/rte_eth_memif.c
> @@ -600,6 +600,10 @@ eth_memif_rx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
>  	ret = rte_pktmbuf_alloc_bulk(mq->mempool, &mq->buffers[head & mask],
>  			n_slots);
>  	if (unlikely(ret < 0))
>  		goto no_free_mbufs;
> +	if (unlikely(n_slots > ring_size - (head & mask))) {
> +		rte_memcpy(mq->buffers, &mq->buffers[ring_size],
> +			(n_slots + (head & mask) - ring_size) * sizeof(struct rte_mbuf *));
> +	}
>  
>  	while (n_slots--) {
>  		s0 = head++ & mask;
> @@ -1245,8 +1249,12 @@ memif_init_queues(struct rte_eth_dev *dev)
>  	}
>  	mq->buffers = NULL;
>  	if (pmd->flags & ETH_MEMIF_FLAG_ZERO_COPY) {
> +		/*
> +		 * Allocate 2x ring_size to reserve a contiguous array for
> +		 * rte_pktmbuf_alloc_bulk (to store allocated mbufs).
> +		 */
>  		mq->buffers = rte_zmalloc("bufs", sizeof(struct rte_mbuf *) *
> -				(1 << mq->log2_ring_size), 0);
> +				(1 << (mq->log2_ring_size + 1)), 0);
>  		if (mq->buffers == NULL)
>  			return -ENOMEM;
>  	}
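To help confirm or rule this out, the refill index arithmetic can be modelled in isolation. The sketch below is hypothetical Python modelling only the head/tail math (not the driver code); the ring size and the consumption pattern are arbitrary values I made up for illustration:

```python
# Hypothetical model of the refill index arithmetic in eth_memif_rx_zc.
# Names mirror the driver (head, tail, n_slots, mask); values are arbitrary.

RING_SIZE = 8          # i.e. 1 << log2_ring_size
MASK = RING_SIZE - 1

head = tail = 0        # free-running indices, as in the driver

def refill():
    """Top the ring up to full; return (start, n_slots) describing the
    contiguous write rte_pktmbuf_alloc_bulk would perform."""
    global head
    n_slots = RING_SIZE - (head - tail)   # size of the gap to refill
    start = head & MASK                   # write offset into mq->buffers
    head += n_slots
    return start, n_slots

writes = [refill()]            # initial fill: starts at 0, fills the ring
for consumed in (5, 5):        # sender consumes 5 slots per cycle
    tail += consumed
    writes.append(refill())

for start, n_slots in writes:
    excess = max(0, start + n_slots - RING_SIZE)
    print(f"start={start} n_slots={n_slots} excess={excess}")
```

Each (start, n_slots) pair is the contiguous region the bulk allocation would write into mq->buffers; a non-zero excess for any refill would mean the write passes the end of the array, so running a model like this against the consumption pattern you observe would settle the question.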