Hi Maxime,

> -----Original Message-----
> From: Maxime Coquelin [mailto:maxime.coque...@redhat.com]
> Sent: Saturday, March 31, 2018 7:27 PM
> To: Wang, Xiao W <xiao.w.w...@intel.com>; Yigit, Ferruh <ferruh.yi...@intel.com>
> Cc: dev@dpdk.org; Wang, Zhihong <zhihong.w...@intel.com>; y...@fridaylinux.org;
> Tan, Jianfeng <jianfeng....@intel.com>; Bie, Tiwei <tiwei....@intel.com>;
> Liang, Cunming <cunming.li...@intel.com>; Daly, Dan <dan.d...@intel.com>;
> tho...@monjalon.net; gaetan.ri...@6wind.com;
> Burakov, Anatoly <anatoly.bura...@intel.com>; Xu, Rosen <rosen...@intel.com>
> Subject: Re: [PATCH v3 3/4] net/ifcvf: add ifcvf vdpa driver
>
> On 03/31/2018 04:29 AM, Xiao Wang wrote:
> > The IFCVF vDPA (vhost data path acceleration) driver provides support for
> > the Intel FPGA 100G VF (IFCVF). IFCVF's datapath is virtio ring compatible;
> > it works as a HW vhost backend which can send/receive packets to/from
> > virtio directly by DMA.
> >
> > Different VF devices serve different virtio frontends which are in
> > different VMs, so each VF needs its own DMA address translation
> > service. During the driver probe a new container is created; with this
> > container the vDPA driver can program the DMA remapping table with the
> > VM's memory region information.
> >
> > Key vDPA driver ops implemented:
> >
> > - ifcvf_dev_config:
> >   Enable the VF data path with the virtio information provided by the
> >   vhost lib, including IOMMU programming to enable VF DMA to the VM's
> >   memory, VFIO interrupt setup to route HW interrupts to the virtio
> >   driver, creation of a notify relay thread to translate the virtio
> >   driver's kick into an MMIO write onto HW, and HW queue configuration.
> >
> > - ifcvf_dev_close:
> >   Revoke all the setup done in ifcvf_dev_config.
> >
> > Live migration is supported by IFCVF and this driver enables it. For
> > dirty page logging, the VF logs packet buffer writes, and the driver
> > marks the used ring as dirty when the device stops.
> >
> > Because the vDPA driver needs to set up MSI-X vectors to interrupt the
> > guest, only vfio-pci is supported currently.
> >
> > Signed-off-by: Xiao Wang <xiao.w.w...@intel.com>
> > Signed-off-by: Rosen Xu <rosen...@intel.com>
> > ---
> >  config/common_base | 7 +
> >  config/common_linuxapp | 1 +
> >  drivers/net/Makefile | 3 +
> >  drivers/net/ifc/Makefile | 36 ++
> >  drivers/net/ifc/base/ifcvf.c | 329 +++++++++++++
> >  drivers/net/ifc/base/ifcvf.h | 160 +++++++
> >  drivers/net/ifc/base/ifcvf_osdep.h | 52 +++
> >  drivers/net/ifc/ifcvf_vdpa.c | 842 ++++++++++++++++++++++++++++++++++
> >  drivers/net/ifc/rte_ifcvf_version.map | 4 +
> >  mk/rte.app.mk | 3 +
> >  10 files changed, 1437 insertions(+)
> >  create mode 100644 drivers/net/ifc/Makefile
> >  create mode 100644 drivers/net/ifc/base/ifcvf.c
> >  create mode 100644 drivers/net/ifc/base/ifcvf.h
> >  create mode 100644 drivers/net/ifc/base/ifcvf_osdep.h
> >  create mode 100644 drivers/net/ifc/ifcvf_vdpa.c
> >  create mode 100644 drivers/net/ifc/rte_ifcvf_version.map
>
> Thanks for having handled the changes; please see minor comments below.
>
> Feel free to add my:
> Reviewed-by: Maxime Coquelin <maxime.coque...@redhat.com>
>
> Thanks!
> Maxime
>
> > +static uint64_t
> > +qva_to_gpa(int vid, uint64_t qva)
>
> We might want to have this in vhost-lib to avoid duplication,
> but that can be done later.
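Agreed, a shared helper could be factored out later. For reference only, such
a helper could look roughly like the sketch below; the name and the exact
placement in the vhost lib are just illustrative, not part of this patch:

/*
 * Hypothetical common helper (name made up): walk the guest memory
 * regions from rte_vhost.h and translate a vhost virtual address (QVA)
 * into a guest physical address (GPA).
 * Returns 0 if the address does not fall into any region.
 */
static uint64_t
vhost_qva_to_gpa(struct rte_vhost_memory *mem, uint64_t qva)
{
	struct rte_vhost_mem_region *reg;
	uint32_t i;

	for (i = 0; i < mem->nregions; i++) {
		reg = &mem->regions[i];
		if (qva >= reg->host_user_addr &&
		    qva < reg->host_user_addr + reg->size)
			return qva - reg->host_user_addr +
			       reg->guest_phys_addr;
	}

	return 0;
}
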
>
> > +{
> > +	struct rte_vhost_memory *mem = NULL;
> > +	struct rte_vhost_mem_region *reg;
> > +	uint32_t i;
> > +	uint64_t gpa = 0;
> > +
> > +	if (rte_vhost_get_mem_table(vid, &mem) < 0)
> > +		goto exit;
> > +
> > +	for (i = 0; i < mem->nregions; i++) {
[...]

> > +
> > +struct rte_vdpa_dev_ops ifcvf_ops = {
> > +	.queue_num_get = ifcvf_get_queue_num,
> > +	.feature_get = ifcvf_get_vdpa_feature,
> > +	.protocol_feature_get = ifcvf_get_protocol_feature,
>
> I have proposed in the vDPA series to rename the ops so that they are
> consistent with the Vhost-user protocol:
> e.g. get_protocol_features, get_features...
>
> So you might have to rebase if this change is implemented.

Will rebase on the latest vDPA series; a rough sketch of what the renamed
ops could look like is below.

Thanks,
Xiao
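For reference only, assuming the rename lands as proposed in your vDPA series
(the field names get_queue_num/get_features/get_protocol_features below are my
guess at the final naming and may still change), the ops table in this patch
would become something like:

/* Sketch, not the final code: ops fields renamed per the proposal,
 * callback implementations unchanged from this patch. */
struct rte_vdpa_dev_ops ifcvf_ops = {
	.get_queue_num = ifcvf_get_queue_num,
	.get_features = ifcvf_get_vdpa_feature,
	.get_protocol_features = ifcvf_get_protocol_feature,
	/* remaining callbacks from the patch stay as they are */
};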