> -----Original Message-----
> From: Maxime Coquelin [mailto:maxime.coque...@redhat.com]
> Sent: Thursday, March 22, 2018 5:06 AM
> To: Wang, Zhihong <zhihong.w...@intel.com>; dev@dpdk.org
> Cc: Tan, Jianfeng <jianfeng....@intel.com>; Bie, Tiwei <tiwei....@intel.com>;
> y...@fridaylinux.org; Liang, Cunming <cunming.li...@intel.com>;
> Wang, Xiao W <xiao.w.w...@intel.com>; Daly, Dan <dan.d...@intel.com>
> Subject: Re: [PATCH v3 2/5] vhost: support selective datapath
>
> On 02/27/2018 11:13 AM, Zhihong Wang wrote:
> > This patch introduces support for selective datapath in the DPDK vhost-user
> > lib, enabling various types of virtio-compatible devices to do data transfer
> > with the virtio driver directly for acceleration. The default datapath is
> > the existing software implementation; more options become available as
> > new engines are registered.
> >
> > An engine is a group of virtio-compatible devices under a single address.
> > The engine driver includes:
> >
> > 1. A set of engine ops, defined in rte_vdpa_eng_ops, to perform engine
> >    init, uninit, and attribute reporting.
> >
> > 2. A set of device ops, defined in rte_vdpa_dev_ops, for virtio devices
> >    in the engine to do device-specific operations:
> >
> >    a. dev_conf: Called to configure the actual device when the virtio
> >       device becomes ready.
> >
> >    b. dev_close: Called to close the actual device when the virtio
> >       device is stopped.
> >
> >    c. vring_state_set: Called to change the state of the vring in the
> >       actual device when the vring state changes.
> >
> >    d. feature_set: Called to set the negotiated features on the device.
> >
> >    e. migration_done: Called to allow the device to respond to RARP
> >       sending.
> >
> >    f. get_vfio_group_fd: Called to get the VFIO group fd of the device.
> >
> >    g. get_vfio_device_fd: Called to get the VFIO device fd of the device.
> >
> >    h. get_notify_area: Called to get the notify area info of the queue.
> >
> > Signed-off-by: Zhihong Wang <zhihong.w...@intel.com>
> > ---
> > Changes in v2:
> >
> > 1. Add VFIO related vDPA device ops.
> >
> >  lib/librte_vhost/Makefile              |   4 +-
> >  lib/librte_vhost/rte_vdpa.h            | 126 +++++++++++++++++++++++++++++++++
> >  lib/librte_vhost/rte_vhost_version.map |   8 +++
> >  lib/librte_vhost/vdpa.c                | 124 ++++++++++++++++++++++++++++++++
> >  4 files changed, 260 insertions(+), 2 deletions(-)
> >  create mode 100644 lib/librte_vhost/rte_vdpa.h
> >  create mode 100644 lib/librte_vhost/vdpa.c
> >
> > diff --git a/lib/librte_vhost/Makefile b/lib/librte_vhost/Makefile
> > index 5d6c6abae..37044ac03 100644
> > --- a/lib/librte_vhost/Makefile
> > +++ b/lib/librte_vhost/Makefile
> > @@ -22,9 +22,9 @@ LDLIBS += -lrte_eal -lrte_mempool -lrte_mbuf -lrte_ethdev -lrte_net
> >
> >  # all source are stored in SRCS-y
> >  SRCS-$(CONFIG_RTE_LIBRTE_VHOST) := fd_man.c iotlb.c socket.c vhost.c \
> > -					vhost_user.c virtio_net.c
> > +					vhost_user.c virtio_net.c vdpa.c
> >
> >  # install includes
> > -SYMLINK-$(CONFIG_RTE_LIBRTE_VHOST)-include += rte_vhost.h
> > +SYMLINK-$(CONFIG_RTE_LIBRTE_VHOST)-include += rte_vhost.h rte_vdpa.h
> >
> >  include $(RTE_SDK)/mk/rte.lib.mk
> > diff --git a/lib/librte_vhost/rte_vdpa.h b/lib/librte_vhost/rte_vdpa.h
> > new file mode 100644
> > index 000000000..23fb471be
> > --- /dev/null
> > +++ b/lib/librte_vhost/rte_vdpa.h
> > @@ -0,0 +1,126 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2018 Intel Corporation
> > + */
> > +
> > +#ifndef _RTE_VDPA_H_
> > +#define _RTE_VDPA_H_
> > +
> > +/**
> > + * @file
> > + *
> > + * Device specific vhost lib
> > + */
> > +
> > +#include <rte_pci.h>
> > +#include "rte_vhost.h"
> > +
> > +#define MAX_VDPA_ENGINE_NUM 128
> > +#define MAX_VDPA_NAME_LEN 128
> > +
> > +struct rte_vdpa_eng_addr {
> > +	union {
> > +		uint8_t __dummy[64];
> > +		struct rte_pci_addr pci_addr;
>
> I think we should not only support PCI, but any type of buses.
> At least in the API.
Exactly, that is why we defined a 64-byte union: new bus address types can be added without breaking the ABI. But one place that may be impacted is the is_same_eng() function. Maybe comparing all the bytes in __dummy[64] is a better way. What do you think?