Hi Yuan,

> -----Original Message-----
> From: Wang, YuanX <yuanx.w...@intel.com>
> Sent: Wednesday, September 22, 2021 4:56 PM
> To: dev@dpdk.org
> Cc: maxime.coque...@redhat.com; Xia, Chenbo <chenbo....@intel.com>; Hu,
> Jiayu <jiayu...@intel.com>; Ding, Xuan <xuan.d...@intel.com>; Jiang,
> Cheng1 <cheng1.ji...@intel.com>; Ma, WenwuX <wenwux...@intel.com>;
> Yang, YvonneX <yvonnex.y...@intel.com>; Pai G, Sunil
> <sunil.pa...@intel.com>
> Subject: [PATCH v3 2/2] vhost: add thread-safe API for clearing in-flight
> packets in async vhost
>
> This patch adds thread safe version for
> clearing in-flight packets function.
Maybe the commit log can be refined to be more accurate, e.g. when is
clearing in-flight packets needed, and what is the difference between the
thread-safe and unsafe versions (whether a lock is taken, I think)?
A usage sketch is appended at the end of this mail for reference.

>
> Signed-off-by: Yuan Wang <yuanx.w...@intel.com>
> ---
>  doc/guides/prog_guide/vhost_lib.rst |  8 ++++-
>  lib/vhost/rte_vhost_async.h         | 21 +++++++++++++
>  lib/vhost/version.map               |  1 +
>  lib/vhost/virtio_net.c              | 49 +++++++++++++++++++++++++++++
>  4 files changed, 78 insertions(+), 1 deletion(-)

Remember to add an explanation of the new API in the 21.11 release notes.

Thanks,
Xuan

>
> diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst
> index 9ed544db7a..bc21c879f3 100644
> --- a/doc/guides/prog_guide/vhost_lib.rst
> +++ b/doc/guides/prog_guide/vhost_lib.rst
> @@ -300,7 +300,13 @@ The following is an overview of some key Vhost API functions:
>
>  * ``rte_vhost_clear_queue_thread_unsafe(vid, queue_id, **pkts, count)``
>
> -  Clear inflight packets which are submitted to DMA engine in vhost async data
> +  Clear in-flight packets which are submitted to async channel in vhost
> +  async data path without performing any locking. Completed packets are
> +  returned to applications through ``pkts``.
> +
> +* ``rte_vhost_clear_queue(vid, queue_id, **pkts, count)``
> +
> +  Clear in-flight packets which are submitted to async channel in vhost async data
>    path. Completed packets are returned to applications through ``pkts``.
>
>  * ``rte_vhost_async_try_dequeue_burst(vid, queue_id, mbuf_pool, pkts, count, nr_inflight)``
> diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h
> index 5e2429ab70..a418e0a03d 100644
> --- a/lib/vhost/rte_vhost_async.h
> +++ b/lib/vhost/rte_vhost_async.h
> @@ -261,6 +261,27 @@ int rte_vhost_async_get_inflight(int vid, uint16_t queue_id);
>  __rte_experimental
>  uint16_t rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
>  		struct rte_mbuf **pkts, uint16_t count);
> +
> +/**
> + * This function checks async completion status and clear packets for
> + * a specific vhost device queue. Packets which are inflight will be
> + * returned in an array.
> + *
> + * @param vid
> + *  ID of vhost device to clear data
> + * @param queue_id
> + *  Queue id to clear data
> + * @param pkts
> + *  Blank array to get return packet pointer
> + * @param count
> + *  Size of the packet array
> + * @return
> + *  Number of packets returned
> + */
> +__rte_experimental
> +uint16_t rte_vhost_clear_queue(int vid, uint16_t queue_id,
> +		struct rte_mbuf **pkts, uint16_t count);
> +
>  /**
>   * This function tries to receive packets from the guest with offloading
>   * copies to the async channel. The packets that are transfer completed
> diff --git a/lib/vhost/version.map b/lib/vhost/version.map
> index 8eb7e92c32..b87d5906b8 100644
> --- a/lib/vhost/version.map
> +++ b/lib/vhost/version.map
> @@ -88,4 +88,5 @@ EXPERIMENTAL {
>
>  	# added in 21.11
>  	rte_vhost_async_try_dequeue_burst;
> +	rte_vhost_clear_queue;
>  };
> diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
> index 21afcd1854..2bf8a511d5 100644
> --- a/lib/vhost/virtio_net.c
> +++ b/lib/vhost/virtio_net.c
> @@ -2154,6 +2154,55 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
>  	return n_pkts_cpl;
>  }
>
> +uint16_t
> +rte_vhost_clear_queue(int vid, uint16_t queue_id, struct rte_mbuf **pkts, uint16_t count)
> +{
> +	struct virtio_net *dev = get_device(vid);
> +	struct vhost_virtqueue *vq;
> +	uint16_t n_pkts_cpl = 0;
> +
> +	if (!dev)
> +		return 0;
> +
> +	VHOST_LOG_DATA(DEBUG, "(%d) %s\n", dev->vid, __func__);
> +	if (unlikely(queue_id >= dev->nr_vring)) {
> +		VHOST_LOG_DATA(ERR, "(%d) %s: invalid virtqueue idx %d.\n",
> +			dev->vid, __func__, queue_id);
> +		return 0;
> +	}
> +
> +	vq = dev->virtqueue[queue_id];
> +
> +	if (unlikely(!vq->async_registered)) {
> +		VHOST_LOG_DATA(ERR, "(%d) %s: async not registered for queue id %d.\n",
> +			dev->vid, __func__, queue_id);
> +		return 0;
> +	}
> +
> +	if (!rte_spinlock_trylock(&vq->access_lock)) {
> +		VHOST_LOG_DATA(ERR,
> +			"(%d) %s: failed to clear async queue id %d, virtqueue busy.\n",
> +			dev->vid, __func__, queue_id);
> +		return 0;
> +	}
> +
> +	if (queue_id % 2 == 0)
> +		n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count);
> +	else {
> +		if (unlikely(vq_is_packed(dev)))
> +			VHOST_LOG_DATA(ERR,
> +				"(%d) %s: async dequeue does not support packed ring.\n",
> +				dev->vid, __func__);
> +		else
> +			n_pkts_cpl = async_poll_dequeue_completed_split(dev, vq, queue_id, pkts,
> +				count, dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS);
> +	}
> +
> +	rte_spinlock_unlock(&vq->access_lock);
> +
> +	return n_pkts_cpl;
> +}
> +
>  static __rte_always_inline uint32_t
>  virtio_dev_rx_async_submit(struct virtio_net *dev, uint16_t queue_id,
>  	struct rte_mbuf **pkts, uint32_t count)
> --
> 2.25.1
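
For reference, here is a minimal usage sketch of the new thread-safe API.
It is my own illustration, not part of the patch: the helper name,
MAX_PKT_BURST and the retry bound are arbitrary assumptions. It drains the
remaining in-flight packets of a vring, e.g. before unregistering the async
channel, and frees the returned mbufs:

#include <rte_mbuf.h>
#include <rte_vhost_async.h>

#define MAX_PKT_BURST 32	/* illustrative burst size */

static void
drain_async_inflight(int vid, uint16_t queue_id)
{
	struct rte_mbuf *pkts[MAX_PKT_BURST];
	uint16_t n;
	int retries = 1000;	/* arbitrary bound, avoids spinning if the
				 * internal trylock keeps failing */

	while (rte_vhost_async_get_inflight(vid, queue_id) > 0 && retries--) {
		/*
		 * Thread-safe variant: it takes the virtqueue access lock
		 * internally, so unlike rte_vhost_clear_queue_thread_unsafe()
		 * it may be called from a thread other than the one doing
		 * enqueue/dequeue on this vring.
		 */
		n = rte_vhost_clear_queue(vid, queue_id, pkts, MAX_PKT_BURST);

		/* Completed packets are handed back to the application. */
		rte_pktmbuf_free_bulk(pkts, n);
	}
}

Something along these lines in the commit log or programmer's guide would
make the intended use and the thread-safety difference clearer.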