Hi Maxime,

> -----Original Message-----
> From: Maxime Coquelin <maxime.coque...@redhat.com>
> Sent: Friday, June 26, 2020 10:29 PM
> To: Fu, Patrick <patrick...@intel.com>; dev@dpdk.org; Xia, Chenbo
> <chenbo....@intel.com>; Wang, Zhihong <zhihong.w...@intel.com>; Ye,
> Xiaolong <xiaolong...@intel.com>
> Cc: Jiang, Cheng1 <cheng1.ji...@intel.com>; Liang, Cunming
> <cunming.li...@intel.com>
> Subject: Re: [PATCH v1 1/2] vhost: introduce async data path registration API
> 
> 
> 
> On 6/11/20 12:02 PM, patrick...@intel.com wrote:
> > From: Patrick <patrick...@intel.com>
> >
> > diff --git a/lib/librte_vhost/socket.c b/lib/librte_vhost/socket.c
> > index 0a66ef9..f817783 100644
> > --- a/lib/librte_vhost/socket.c
> > +++ b/lib/librte_vhost/socket.c
> > @@ -42,6 +42,7 @@ struct vhost_user_socket {
> >     bool use_builtin_virtio_net;
> >     bool extbuf;
> >     bool linearbuf;
> > +   bool async_copy;
> >
> >     /*
> >      * The "supported_features" indicates the feature bits the
> > @@ -210,6 +211,7 @@ struct vhost_user {
> >     size_t size;
> >     struct vhost_user_connection *conn;
> >     int ret;
> > +   struct virtio_net *dev;
> >
> >     if (vsocket == NULL)
> >             return;
> > @@ -241,6 +243,13 @@ struct vhost_user {
> >     if (vsocket->linearbuf)
> >             vhost_enable_linearbuf(vid);
> >
> > +   if (vsocket->async_copy) {
> > +           dev = get_device(vid);
> > +
> > +           if (dev)
> > +                   dev->async_copy = 1;
> > +   }
> > +
> >     VHOST_LOG_CONFIG(INFO, "new device, handle is %d\n", vid);
> >
> >     if (vsocket->notify_ops->new_connection) {
> > @@ -891,6 +900,17 @@ struct vhost_user_reconnect_list {
> >             goto out_mutex;
> >     }
> >
> > +   vsocket->async_copy = flags & RTE_VHOST_USER_ASYNC_COPY;
> > +
> > +   if (vsocket->async_copy &&
> > +           (flags & (RTE_VHOST_USER_IOMMU_SUPPORT |
> > +           RTE_VHOST_USER_POSTCOPY_SUPPORT))) {
> > +           VHOST_LOG_CONFIG(ERR, "error: enabling async copy and IOMMU "
> > +                   "or post-copy feature simultaneously is not "
> > +                   "supported\n");
> > +           goto out_mutex;
> > +   }
> > +
> 
> Have you ensured compatibility with the linearbuf feature (TSO)?
> You will want to do the same kind of check if not compatible.
> 
I think this concern comes primarily from the dequeue side. As the current
patch is for enqueue only, a linearbuf check doesn't seem necessary. For a
future dequeue implementation, we may need to add an additional feature bit
to let the application decide whether the async callback is compatible with
linearbuf. A real hardware DMA engine should usually support linearbuf; a
pure software backend (something like dequeue-zero-copy) may not.

> >     /*
> >      * Set the supported features correctly for the builtin vhost-user
> >      * net driver.
> > diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c index
> > 0266318..e6b688a 100644
> > --- a/lib/librte_vhost/vhost.c
> > +++ b/lib/librte_vhost/vhost.c
> > @@ -332,8 +332,13 @@
> >  {
> >     if (vq_is_packed(dev))
> >             rte_free(vq->shadow_used_packed);
> > -   else
> > +   else {
> >             rte_free(vq->shadow_used_split);
> > +           if (vq->async_pkts_pending)
> > +                   rte_free(vq->async_pkts_pending);
> > +           if (vq->async_pending_info)
> > +                   rte_free(vq->async_pending_info);
> > +   }
> >     rte_free(vq->batch_copy_elems);
> >     rte_mempool_free(vq->iotlb_pool);
> >     rte_free(vq);
> > @@ -1527,3 +1532,70 @@ int rte_vhost_extern_callback_register(int vid,
> >     if (vhost_data_log_level >= 0)
> >             rte_log_set_level(vhost_data_log_level, RTE_LOG_WARNING);
> > }
> > +
> > +int rte_vhost_async_channel_register(int vid, uint16_t queue_id,
> > +                                   uint32_t features,
> > +                                   struct rte_vhost_async_channel_ops *ops)
> > +{
> > +   struct vhost_virtqueue *vq;
> > +   struct virtio_net *dev = get_device(vid);
> > +   struct dma_channel_features f;
> > +
> > +   if (dev == NULL || ops == NULL)
> > +           return -1;
> > +
> > +   f.intval = features;
> > +
> > +   vq = dev->virtqueue[queue_id];
> > +
> > +   if (vq == NULL)
> > +           return -1;
> > +
> > +   /** packed queue is not supported */
> > +   if (vq_is_packed(dev) || !f.inorder)
> > +           return -1;
> 
> You might want to print an error message to help the user understanding
> why it fails.
> 
Will update in the v2 patch.

> > +
> > +   if (ops->check_completed_copies == NULL ||
> > +           ops->transfer_data == NULL)
> > +           return -1;
> > +
> > +   rte_spinlock_lock(&vq->access_lock);
> > +
> > +   vq->async_ops.check_completed_copies = ops->check_completed_copies;
> > +   vq->async_ops.transfer_data = ops->transfer_data;
> > +
> > +   vq->async_inorder = f.inorder;
> > +   vq->async_threshold = f.threshold;
> > +
> > +   vq->async_registered = true;
> > +
> > +   rte_spinlock_unlock(&vq->access_lock);
> > +
> > +   return 0;
> > +}
> > +
> > +int rte_vhost_async_channel_unregister(int vid, uint16_t queue_id)
> > +{
> > +   struct vhost_virtqueue *vq;
> > +   struct virtio_net *dev = get_device(vid);
> > +
> > +   if (dev == NULL)
> > +           return -1;
> > +
> > +   vq = dev->virtqueue[queue_id];
> > +
> > +   if (vq == NULL)
> > +           return -1;
> > +
> > +   rte_spinlock_lock(&vq->access_lock);
> 
> We might want to wait until all async transfers are done before unregistering?
> 
This could be a little bit tricky. In our async enqueue API design,
applications send mbufs to the DMA engine through one API call and get
finished mbufs back from another. If we wanted to drain the DMA buffer in
this unregistration API, we would need to either return the mbufs from
unregistration, or save them and return them somewhere else. I'm thinking
it may be better to just report an error from unregistration when in-flight
packets exist, and rely on the application to poll out all in-flight
packets first.
