> -----Original Message-----
> From: Alexey Kardashevskiy [mailto:a...@ozlabs.ru]
> Sent: Tuesday, January 2, 2018 10:42 AM
> To: Bie, Tiwei <tiwei....@intel.com>; virtio-...@lists.oasis-open.org;
> qemu-de...@nongnu.org; m...@redhat.com; alex.william...@redhat.com;
> pbonz...@redhat.com; stefa...@redhat.com
> Cc: Tan, Jianfeng <jianfeng....@intel.com>; Liang, Cunming
> <cunming.li...@intel.com>; Wang, Xiao W <xiao.w.w...@intel.com>; Wang,
> Zhihong <zhihong.w...@intel.com>; Daly, Dan <dan.d...@intel.com>
> Subject: Re: [Qemu-devel] [RFC 0/3] Extend vhost-user to support VFIO
> based accelerators
>
> On 22/12/17 17:41, Tiwei Bie wrote:
> > This RFC patch set makes some small extensions to the vhost-user
> > protocol to support VFIO based accelerators, and makes it possible to
> > get performance similar to VFIO passthrough while keeping the virtio
> > device emulation in QEMU.
> >
> > When we have virtio ring compatible devices, it's possible to set up
> > the device (DMA mapping, PCI config, etc.) based on the existing info
> > (memory table, features, vring info, etc.) which is available on the
> > vhost backend (e.g. the DPDK vhost library). Then we will be able to
> > use such devices to accelerate the emulated device for the VM. We call
> > this vDPA: vhost DataPath Acceleration. The key difference between
> > VFIO passthrough and vDPA is that in vDPA only the data path (e.g.
> > ring, notify and queue interrupt) is passed through; the device
> > control path (e.g. PCI configuration space and MMIO regions) is still
> > defined and emulated by QEMU.
> >
> > The benefits of keeping the virtio device emulation in QEMU, compared
> > with VFIO passthrough of a virtio device, include (but are not limited
> > to):
> >
> > - a consistent device interface for the guest OS;
> > - maximum flexibility in control path and hardware design;
> > - leveraging the existing virtio live-migration framework.
> >
> > But the critical issue in vDPA is that the data path performance is
> > relatively low and some host threads are needed for the data path,
> > because the mechanisms necessary to support the following are missing:
> >
> > 1) the guest driver notifying the device directly;
> > 2) the device interrupting the guest directly.
> >
> > So this patch set makes some small extensions to the vhost-user
> > protocol to make both of them possible. It leverages the same
> > mechanisms as VFIO passthrough (e.g. EPT and Posted-Interrupts on
> > Intel platforms) to achieve the data path passthrough.
> >
> > A new protocol feature bit is added to negotiate the accelerator
> > feature support. Two new slave message types are added to enable the
> > notify and interrupt passthrough for each queue. From the point of
> > view of the vhost-user protocol design, it's very flexible: the
> > passthrough can be enabled/disabled for each queue individually, and
> > it's possible to accelerate each queue with a different device. More
> > design and implementation details can be found in the last patch.
> >
> > There are some rough edges in this patch set (so it is an RFC patch
> > set for now), but it's never too early to hear the thoughts from the
> > community! Any comments and suggestions would be really appreciated!
>
> I am missing a lot of context here. Out of curiosity - how is this all
> supposed to work? A QEMU command line example would be useful. What will
> the guest see? A virtio device (i.e. Red Hat vendor ID) or an actual PCI
> device (since VFIO is mentioned)? Thanks.
It's a normal virtio PCIe device in the guest; the extensions on the host
are transparent to the guest.

In terms of usage, there's a sample that may help:

http://dpdk.org/ml/archives/dev/2017-December/085044.html

The sample takes a virtio-net device in a VM as the data path accelerator
for the virtio-net device in a nested VM. When a physical device on bare
metal is used instead, it accelerates the virtio-net device in a VM in the
same way. No additional QEMU command line parameters are needed beyond the
usual vhost-user ones; a typical invocation is sketched below.

For more context, including the vDPA enabling in the DPDK vhost-user
library:

http://dpdk.org/ml/archives/dev/2017-December/084792.html
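Since a command line example was asked for: a typical vhost-user setup
looks like the following (the disk image, socket path, ids and memory
sizes here are illustrative, not taken from the patch set). Note that
vhost-user requires the guest memory to be shared with the backend, hence
memory-backend-file with share=on:

  qemu-system-x86_64 -machine q35,accel=kvm -cpu host -m 2G \
      -object memory-backend-file,id=mem0,size=2G,mem-path=/dev/hugepages,share=on \
      -numa node,memdev=mem0 \
      -chardev socket,id=chr0,path=/tmp/vhost-user.sock \
      -netdev type=vhost-user,id=net0,chardev=chr0 \
      -device virtio-net-pci,netdev=net0 \
      guest.img

The vDPA extensions don't change this invocation; everything new is
negotiated over the vhost-user socket between QEMU and the backend.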
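And to give a feel for the shape of the protocol extension: the two new
per-queue slave messages would look roughly like the sketch below. All
names, values and the payload layout here are hypothetical; the actual
definitions are in the last patch of the series.

  #include <stdint.h>

  /*
   * Hypothetical sketch only -- the real request names, values and
   * payload layout are defined in the patch set itself. Each request is
   * sent per queue on the vhost-user slave channel, with a file
   * descriptor attached via SCM_RIGHTS.
   */
  enum {
      /* ... existing slave requests ... */
      VHOST_USER_SLAVE_VRING_NOTIFY_AREA = 3, /* hypothetical */
      VHOST_USER_SLAVE_VRING_INTERRUPT   = 4, /* hypothetical */
  };

  /*
   * Hypothetical payload for the notify message: tells QEMU where in the
   * attached fd the device's doorbell region lives, so it can be mapped
   * into the guest (via EPT) and guest notify writes reach the hardware
   * directly, without waking QEMU or the vhost backend.
   */
  typedef struct VhostUserVringArea {
      uint64_t index;  /* vring index this message applies to */
      uint64_t size;   /* size of the notify region */
      uint64_t offset; /* offset of the region within the fd */
  } VhostUserVringArea;

The interrupt side would be the mirror image: the backend passes an fd
that the device signals, and QEMU wires it to the queue's interrupt (e.g.
via KVM irqfd / Posted-Interrupts). Because each message carries a vring
index, the passthrough can be enabled or disabled per queue, which is
where the flexibility described in the cover letter comes from.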