Hi Maxime & Yuan,

> -----Original Message-----
> From: Wang, YuanX <yuanx.w...@intel.com>
> Sent: Wednesday, September 15, 2021 5:09 PM
> To: Xia, Chenbo <chenbo....@intel.com>; Ma, WenwuX <wenwux...@intel.com>;
> dev@dpdk.org
> Cc: maxime.coque...@redhat.com; Jiang, Cheng1 <cheng1.ji...@intel.com>; Hu,
> Jiayu <jiayu...@intel.com>; Pai G, Sunil <sunil.pa...@intel.com>; Yang,
> YvonneX <yvonnex.y...@intel.com>; Wang, Yinan <yinan.w...@intel.com>
> Subject: RE: [PATCH 1/4] vhost: support async dequeue for split ring
> 
> Hi Chenbo,
> 
> > -----Original Message-----
> > From: Xia, Chenbo <chenbo....@intel.com>
> > Sent: Wednesday, September 15, 2021 10:52 AM
> > To: Ma, WenwuX <wenwux...@intel.com>; dev@dpdk.org
> > Cc: maxime.coque...@redhat.com; Jiang, Cheng1 <cheng1.ji...@intel.com>;
> > Hu, Jiayu <jiayu...@intel.com>; Pai G, Sunil <sunil.pa...@intel.com>; Yang,
> > YvonneX <yvonnex.y...@intel.com>; Wang, YuanX
> > <yuanx.w...@intel.com>; Wang, Yinan <yinan.w...@intel.com>
> > Subject: RE: [PATCH 1/4] vhost: support async dequeue for split ring
> >
> > Hi,
> >
> > > -----Original Message-----
> > > From: Ma, WenwuX <wenwux...@intel.com>
> > > Sent: Tuesday, September 7, 2021 4:49 AM
> > > To: dev@dpdk.org
> > > Cc: maxime.coque...@redhat.com; Xia, Chenbo <chenbo....@intel.com>;
> > > Jiang,
> > > Cheng1 <cheng1.ji...@intel.com>; Hu, Jiayu <jiayu...@intel.com>; Pai
> > > G, Sunil <sunil.pa...@intel.com>; Yang, YvonneX
> > > <yvonnex.y...@intel.com>; Wang, YuanX <yuanx.w...@intel.com>; Ma,
> > > WenwuX <wenwux...@intel.com>; Wang, Yinan <yinan.w...@intel.com>
> > > Subject: [PATCH 1/4] vhost: support async dequeue for split ring
> > >
> > > From: Yuan Wang <yuanx.w...@intel.com>
> > >
> > > This patch implements asynchronous dequeue data path for split ring.
> > > A new asynchronous dequeue function is introduced. With this function,
> > > the application can try to receive packets from the guest with
> > > offloading copies to the async channel, thus saving precious CPU
> > > cycles.
> > >
> > > Signed-off-by: Yuan Wang <yuanx.w...@intel.com>
> > > Signed-off-by: Jiayu Hu <jiayu...@intel.com>
> > > Signed-off-by: Wenwu Ma <wenwux...@intel.com>
> > > Tested-by: Yinan Wang <yinan.w...@intel.com>
> > > ---
> > >  doc/guides/prog_guide/vhost_lib.rst |   9 +
> > >  lib/vhost/rte_vhost_async.h         |  36 +-
> > >  lib/vhost/version.map               |   3 +
> > >  lib/vhost/vhost.h                   |   3 +-
> > >  lib/vhost/virtio_net.c              | 531 ++++++++++++++++++++++++++++
> > >  5 files changed, 579 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst
> > > index 171e0096f6..9ed544db7a 100644
> > > --- a/doc/guides/prog_guide/vhost_lib.rst
> > > +++ b/doc/guides/prog_guide/vhost_lib.rst
> > > @@ -303,6 +303,15 @@ The following is an overview of some key Vhost API functions:
> > >    Clear inflight packets which are submitted to DMA engine in vhost async
> > >    data path. Completed packets are returned to applications through ``pkts``.
> > >
> > > +* ``rte_vhost_async_try_dequeue_burst(vid, queue_id, mbuf_pool, pkts, count, nr_inflight)``
> > > +
> > > +  This function tries to receive packets from the guest with offloading
> > > +  copies to the async channel. Packets whose copies have completed are
> > > +  returned in ``pkts``. Packets whose copies have been submitted to the
> > > +  async channel but not yet completed are called "in-flight packets".
> > > +  This function does not return in-flight packets until their copies are
> > > +  completed by the async channel.
> > > +
> > >  Vhost-user Implementations
> > >  --------------------------
> > >
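Not a blocker: the doc section above could also show the polling pattern an
app is expected to use. A minimal sketch of how I read the semantics
(MAX_PKT_BURST and process_pkts() are app-side placeholders, not part of the
API):

	struct rte_mbuf *pkts[MAX_PKT_BURST];
	int nr_inflight = 0;
	uint16_t nr_rx;

	do {
		/* only packets whose async copies have completed are returned */
		nr_rx = rte_vhost_async_try_dequeue_burst(vid, queue_id,
				mbuf_pool, pkts, MAX_PKT_BURST, &nr_inflight);
		process_pkts(pkts, nr_rx);	/* app-defined consumer */
	} while (nr_inflight > 0);
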
> > > diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h
> > > index ad71555a7f..5e2429ab70 100644
> > > --- a/lib/vhost/rte_vhost_async.h
> > > +++ b/lib/vhost/rte_vhost_async.h
> > > @@ -83,12 +83,18 @@ struct rte_vhost_async_channel_ops {
> > >           uint16_t max_packets);
> > >  };
> > >
> > > +struct async_nethdr {
> > > + struct virtio_net_hdr hdr;
> > > + bool valid;
> > > +};
> > > +
> >
> > As a struct exposed in public headers, it's better to prefix it with rte_.
> > In this case I would prefer rte_async_net_hdr.
> >
> > >  /**
> > > - * inflight async packet information
> > > + * in-flight async packet information
> > >   */
> > >  struct async_inflight_info {
> >
> > Could you help to rename it too? Like rte_async_inflight_info.
> 
> You are right, these two structs are for internal use and not suitable for
> exposure in the public header. But since they are used by the async channel,
> I don't think other existing headers are a good fit either.
> Could you give some advice on which file to put them in?

@Maxime, what do you think of this? Changing, renaming or moving the struct
would each be an ABI breakage, but since it's never used by any app, I guess
that's not a big problem. So what should we do with it? I would vote for
moving it temporarily to a header like vhost.h. At some point we can create
a new internal async header for structs like this. Or should we create it now?
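For illustration only, such an internal header could look like this (the file
name and include set are my assumptions; nothing outside lib/vhost would see it):

	/* lib/vhost/vhost_async.h -- hypothetical internal header */
	#ifndef _VHOST_ASYNC_H_
	#define _VHOST_ASYNC_H_

	#include <stdbool.h>
	#include <stdint.h>
	#include <linux/virtio_net.h>

	struct rte_mbuf;

	/* internal-only, so no rte_ prefix is required anymore */
	struct async_nethdr {
		struct virtio_net_hdr hdr;
		bool valid;
	};

	struct async_inflight_info {
		struct rte_mbuf *mbuf;
		struct async_nethdr nethdr;
		uint16_t descs;      /* num of descs in-flight */
		uint16_t nr_buffers; /* num of buffers in-flight for packed ring */
	};

	#endif /* _VHOST_ASYNC_H_ */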

@Yuan, thinking again about struct async_nethdr: do we really need to define
it? As of now, the header can only be invalid when
virtio_net_with_host_offload(dev) is false, right? So why not use that check
directly to know whether the hdr is valid, wherever you need to know?
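I.e., roughly like this in the async dequeue path (a sketch only; pkts_info,
slot_idx and the indices are how I read the v1 code, and nethdr would become
a plain struct virtio_net_hdr):

	/* submission side: stash the header only when offloads are negotiated */
	if (virtio_net_with_host_offload(dev))
		pkts_info[slot_idx].nethdr = *hdr;

	/* completion side: the same feature check tells whether to apply it,
	 * so no per-packet valid flag is needed
	 */
	if (virtio_net_with_host_offload(dev))
		vhost_dequeue_offload(&pkts_info[from].nethdr, pkts[i]);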

Thanks,
Chenbo

> 
> >
> > >   struct rte_mbuf *mbuf;
> > > - uint16_t descs; /* num of descs inflight */
> > > + struct async_nethdr nethdr;
> > > + uint16_t descs; /* num of descs in-flight */
> > >   uint16_t nr_buffers; /* num of buffers inflight for packed ring */
> > > };
> > >
> > > @@ -255,5 +261,31 @@ int rte_vhost_async_get_inflight(int vid, uint16_t queue_id);
> > >  __rte_experimental
> > >  uint16_t rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
> > >           struct rte_mbuf **pkts, uint16_t count);
> > > +/**
> > > + * This function tries to receive packets from the guest with offloading
> > > + * copies to the async channel. Packets whose copies have completed are
> > > + * returned in "pkts". Packets whose copies have been submitted to the
> > > + * async channel but not yet completed are called "in-flight packets".
> > > + * This function does not return in-flight packets until their copies are
> > > + * completed by the async channel.
> > > + *
> > > + * @param vid
> > > + *  id of vhost device to dequeue data
> > > + * @param queue_id
> > > + *  queue id to dequeue data
> >
> > Param mbuf_pool is missing.
> 
> Thanks, will fix it in next version.
> 
> Regards,
> Yuan
> 
> >
> > > + * @param pkts
> > > + *  blank array to keep successfully dequeued packets
> > > + * @param count
> > > + *  size of the packet array
> > > + * @param nr_inflight
> > > + *  the amount of in-flight packets. If an error occurred, its value is
> > > + *  set to -1.
> > > + * @return
> > > + *  num of successfully dequeued packets
> > > + */
> > > +__rte_experimental
> > > +uint16_t rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
> > > +	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count,
> > > +	int *nr_inflight);
> > >
> > >  #endif /* _RTE_VHOST_ASYNC_H_ */
> > > diff --git a/lib/vhost/version.map b/lib/vhost/version.map
> > > index c92a9d4962..1e033ad8e2 100644
> > > --- a/lib/vhost/version.map
> > > +++ b/lib/vhost/version.map
> > > @@ -85,4 +85,7 @@ EXPERIMENTAL {
> > >   rte_vhost_async_channel_register_thread_unsafe;
> > >   rte_vhost_async_channel_unregister_thread_unsafe;
> > >   rte_vhost_clear_queue_thread_unsafe;
> > > +
> > > + # added in 21.11
> > > + rte_vhost_async_try_dequeue_burst;
> > >  };
> > > diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
> > > index 1e56311725..89a31e4ca8 100644
> > > --- a/lib/vhost/vhost.h
> > > +++ b/lib/vhost/vhost.h
> > > @@ -49,7 +49,8 @@
> >
> > [...]
> >
> > > +uint16_t
> > > +rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
> > > +	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count,
> > > +	int *nr_inflight)
> > > +{
> > > + struct virtio_net *dev;
> > > + struct rte_mbuf *rarp_mbuf = NULL;
> > > + struct vhost_virtqueue *vq;
> > > + int16_t success = 1;
> > > +
> > > + *nr_inflight = -1;
> > > +
> > > + dev = get_device(vid);
> > > + if (!dev)
> > > +         return 0;
> > > +
> > > + if (unlikely(!(dev->flags & VIRTIO_DEV_BUILTIN_VIRTIO_NET))) {
> > > +         VHOST_LOG_DATA(ERR,
> > > +                 "(%d) %s: built-in vhost net backend is disabled.\n",
> > > +                 dev->vid, __func__);
> > > +         return 0;
> > > + }
> > > +
> > > + if (unlikely(!is_valid_virt_queue_idx(queue_id, 1, dev->nr_vring))) {
> > > +         VHOST_LOG_DATA(ERR,
> > > +                 "(%d) %s: invalid virtqueue idx %d.\n",
> > > +                 dev->vid, __func__, queue_id);
> > > +         return 0;
> > > + }
> > > +
> > > + vq = dev->virtqueue[queue_id];
> > > +
> > > + if (unlikely(rte_spinlock_trylock(&vq->access_lock) == 0))
> > > +         return 0;
> > > +
> > > + if (unlikely(vq->enabled == 0)) {
> > > +         count = 0;
> > > +         goto out_access_unlock;
> > > + }
> > > +
> > > + if (unlikely(!vq->async_registered)) {
> > > +         VHOST_LOG_DATA(ERR,
> > > +                 "(%d) %s: async not registered for queue id %d.\n",
> > > +                 dev->vid, __func__, queue_id);
> > > +         count = 0;
> > > +         goto out_access_unlock;
> > > + }
> > > +
> > > + if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
> > > +         vhost_user_iotlb_rd_lock(vq);
> > > +
> > > + if (unlikely(vq->access_ok == 0))
> > > +         if (unlikely(vring_translate(dev, vq) < 0)) {
> > > +                 count = 0;
> > > +                 goto out_access_unlock;
> > > +         }
> > > +
> > > + /*
> > > +  * Construct a RARP broadcast packet, and inject it to the "pkts"
> > > +  * array, so it looks like the guest actually sent such a packet.
> > > +  *
> > > +  * Check user_send_rarp() for more information.
> > > +  *
> > > +  * broadcast_rarp shares a cacheline in the virtio_net structure
> > > +  * with some fields that are accessed during enqueue, and
> > > +  * __atomic_compare_exchange_n causes a write if it performs the
> > > +  * compare and exchange. This could result in false sharing between
> > > +  * enqueue and dequeue.
> > > +  *
> > > +  * Prevent unnecessary false sharing by reading broadcast_rarp first
> > > +  * and only performing compare and exchange if the read indicates it
> > > +  * is likely to be set.
> > > +  */
> > > + if (unlikely(__atomic_load_n(&dev->broadcast_rarp, __ATOMIC_ACQUIRE) &&
> > > +                 __atomic_compare_exchange_n(&dev->broadcast_rarp,
> > > +                 &success, 0, 0, __ATOMIC_RELEASE, __ATOMIC_RELAXED))) {
> > > +
> > > +         rarp_mbuf = rte_net_make_rarp_packet(mbuf_pool, &dev->mac);
> > > +         if (rarp_mbuf == NULL) {
> > > +                 VHOST_LOG_DATA(ERR, "Failed to make RARP packet.\n");
> > > +                 count = 0;
> > > +                 goto out;
> > > +         }
> > > +         count -= 1;
> > > + }
> > > +
> > > + if (unlikely(vq_is_packed(dev)))
> > > +         return 0;
> >
> > Should add a log here.
> >
> > Thanks,
> > Chenbo
