On Thu, Jun 16, 2022 at 4:38 PM David Marchand
<david.march...@redhat.com> wrote:
>
> On Mon, May 16, 2022 at 1:16 PM <xuan.d...@intel.com> wrote:
> > +static __rte_always_inline uint16_t
> > +virtio_dev_tx_async_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
> > +               struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count,
> > +               int16_t dma_id, uint16_t vchan_id, bool legacy_ol_flags)
> > +{
> > +       static bool allocerr_warned;
> > +       bool dropped = false;
> > +       uint16_t free_entries;
> > +       uint16_t pkt_idx, slot_idx = 0;
> > +       uint16_t nr_done_pkts = 0;
> > +       uint16_t pkt_err = 0;
> > +       uint16_t n_xfer;
> > +       struct vhost_async *async = vq->async;
> > +       struct async_inflight_info *pkts_info = async->pkts_info;
> > +       struct rte_mbuf *pkts_prealloc[MAX_PKT_BURST];
>
> Why do we need this array?
> Plus, see below.
>
> > +       uint16_t pkts_size = count;
> > +
> > +       /**
> > +        * The ordering between avail index and
> > +        * desc reads needs to be enforced.
> > +        */
> > +       free_entries = __atomic_load_n(&vq->avail->idx, __ATOMIC_ACQUIRE) -
> > +                       vq->last_avail_idx;
> > +       if (free_entries == 0)
> > +               goto out;
> > +
> > +       rte_prefetch0(&vq->avail->ring[vq->last_avail_idx & (vq->size - 1)]);
> > +
> > +       async_iter_reset(async);
> > +
> > +       count = RTE_MIN(count, MAX_PKT_BURST);
^^^
Ok, my point below about the overflow does not stand: count is clamped to
MAX_PKT_BURST here, before the bulk allocation.
The pkts_prealloc array still looks unnecessary to me, though.
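Something along these lines is what I had in mind. This is only a rough,
untested sketch with a hypothetical name, and it assumes the mbufs do not
have to be tracked in a separate array while the DMA transfers are in
flight (MAX_PKT_BURST is redefined here just to keep the sketch
self-contained):

#include <rte_common.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

#define MAX_PKT_BURST 32	/* mirrors the burst bound used by the vhost library */

/* Allocate straight into the caller-supplied pkts[] array, with no
 * intermediate pkts_prealloc[] copy on the stack. */
static uint16_t
tx_async_alloc_sketch(struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts,
		uint16_t count, uint16_t free_entries)
{
	/* Clamp the user-supplied count before it is used anywhere else. */
	count = RTE_MIN(count, MAX_PKT_BURST);
	count = RTE_MIN(count, free_entries);

	/* rte_pktmbuf_alloc_bulk() returns 0 on success. */
	if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts, count))
		return 0;

	return count;
}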

> > +       count = RTE_MIN(count, free_entries);
> > +       VHOST_LOG_DATA(DEBUG, "(%s) about to dequeue %u buffers\n",
> > +                       dev->ifname, count);
> > +
> > +       if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts_prealloc, count))
>
> 'count' is provided by the user of the vhost async dequeue public API.
> There is no check that it is not bigger than MAX_PKT_BURST.
>
> Calling rte_pktmbuf_alloc_bulk on a fixed-size array pkts_prealloc,
> allocated on the stack, it may cause a stack overflow.
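For the record, here is a minimal sketch of the hazard described above,
with hypothetical helper names (reusing the includes and the
MAX_PKT_BURST define from the sketch further up). The
RTE_MIN(count, MAX_PKT_BURST) line quoted above is what rules it out in
this patch:

/* Unsafe pattern: a caller-controlled count fed to a bulk allocation
 * over a fixed-size stack array. If count > MAX_PKT_BURST, mbuf
 * pointers are written past the end of pkts_prealloc[]. */
static void
alloc_unclamped(struct rte_mempool *mbuf_pool, uint16_t count)
{
	struct rte_mbuf *pkts_prealloc[MAX_PKT_BURST];

	if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts_prealloc, count) == 0)
		rte_pktmbuf_free_bulk(pkts_prealloc, count);
}

/* Safe pattern: clamp count to the array size before the allocation,
 * which is exactly what the quoted RTE_MIN() does. */
static void
alloc_clamped(struct rte_mempool *mbuf_pool, uint16_t count)
{
	struct rte_mbuf *pkts_prealloc[MAX_PKT_BURST];

	count = RTE_MIN(count, MAX_PKT_BURST);
	if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts_prealloc, count) == 0)
		rte_pktmbuf_free_bulk(pkts_prealloc, count);
}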

The rest still stands for me.
vvv

>
>
>
> This code is mostly copy/pasted from the "sync" code.
> I see a fix for the stats has already been sent.
> Here I am pointing out another bug.
> There are probably more...
>
> <grmbl>
> I don't like how the async code has been added to the vhost library by Intel.
>
> Maxime did a cleanup on the enqueue patch
> https://patchwork.dpdk.org/project/dpdk/list/?series=20020&state=%2A&archive=both.
> I see that the recent dequeue path additions follow the same pattern of
> copying/pasting code and adding branches in a non-systematic way.
> Please clean up this code and stop copy/pasting without a valid reason.
> </grmbl>


-- 
David Marchand
