On Tue, Sep 29, 2020 at 02:09:55AM -0400, Michael S. Tsirkin wrote:
> On Mon, Sep 28, 2020 at 10:25:37AM +0100, Stefan Hajnoczi wrote:
> > Why extend vhost-user with vDPA?
> > ================================
> >
> > Reusing VIRTIO emulation code for vhost-user backends
> > -----------------------------------------------------
> >
> > It is a common misconception that a vhost device is a VIRTIO device.
> > VIRTIO devices are defined in the VIRTIO specification and consist
> > of a configuration space, virtqueues, and a device lifecycle that
> > includes feature negotiation. A vhost device is a subset of the
> > corresponding VIRTIO device. The exact subset depends on the device
> > type, and some vhost devices are closer to the full functionality of
> > their corresponding VIRTIO device than others. The most well-known
> > example is that vhost-net devices have rx/tx virtqueues but lack the
> > virtio-net control virtqueue. Also, the configuration space and
> > device lifecycle are only partially available to vhost devices.
> >
> > This difference makes it impossible to use a VIRTIO device as a
> > vhost-user device and vice versa. There is an impedance mismatch and
> > missing functionality. That's a shame because existing VIRTIO device
> > emulation code is mature and duplicating it to provide vhost-user
> > backends creates additional work.
>
> The biggest issue facing vhost-user and absent in vDPA is backend
> disconnect handling. This is the reason the control path is kept
> under QEMU control: we do not need any logic to restore control path
> data, and we can verify that a new backend is consistent with the
> old one.
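A minimal sketch of that consistency check, assuming the VMM remembers
the feature set the guest negotiated with the old backend. The struct
and helper below are invented for illustration, not QEMU's actual
vhost-user code:

/*
 * On reconnect, compare the features offered by the new backend
 * against the set the guest already negotiated with the old one.
 * Names here are hypothetical.
 */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct vhost_dev {
    uint64_t acked_features;  /* negotiated with the previous backend */
};

static bool backend_is_consistent(const struct vhost_dev *dev,
                                  uint64_t offered_features)
{
    /*
     * Every feature the guest already acked must still be offered;
     * otherwise device behavior would change underneath the guest.
     */
    if ((offered_features & dev->acked_features) != dev->acked_features) {
        fprintf(stderr, "backend lost features: acked=0x%" PRIx64
                ", offered=0x%" PRIx64 "\n",
                dev->acked_features, offered_features);
        return false;
    }
    return true;
}

int main(void)
{
    struct vhost_dev dev = { .acked_features = 0x3 };

    /* A new backend offering a superset of features is acceptable... */
    printf("superset: %s\n", backend_is_consistent(&dev, 0x7) ? "ok" : "reject");
    /* ...but one that dropped a negotiated feature must be rejected. */
    printf("subset:   %s\n", backend_is_consistent(&dev, 0x1) ? "ok" : "reject");
    return 0;
}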
I don't think using vhost-user with vDPA changes that. The VMM still
needs to emulate a virtio-pci/ccw/mmio device that the guest interfaces
with. If the device backend goes offline it's possible to restore that
state upon reconnection. What have I missed?

Regarding reconnection in general, it currently seems like a partially
solved problem in vhost-user. There is the "Inflight I/O tracking"
mechanism in the spec and some wording about reconnecting the socket,
but in practice I wouldn't expect all device types, VMMs, or device
backends to actually support reconnection. This is an area where a
uniform solution would be very welcome too.

There was discussion about recovering state in muser. The original idea
was for the muser kernel module to host state that persists across
device backend restarts. That way the device backend can go away
temporarily and resume without guest intervention.

Then, when the vfio-user discussion started, the idea morphed into
simply keeping a tmpfs file for each device instance (no special
muser.ko support needed anymore). This allows the device backend to
resume without losing state. In practice a programming framework is
needed to make this easy and safe to use, but it boils down to a tmpfs
mmap; see the sketch at the end of this mail.

> > If there was a way to reuse existing VIRTIO device emulation code
> > it would be easier to move to a multi-process architecture in QEMU.
> > Want to run --netdev user,id=netdev0 --device
> > virtio-net-pci,netdev=netdev0 in a separate, sandboxed process?
> > Easy, run it as a vhost-user-net device instead of as virtio-net.
>
> Given vhost-user is using a socket, and given there's an elaborate
> protocol due to the need for backwards compatibility, it seems safer
> to have the vhost-user interface in a separate process too.

Right, with vhost-user only the virtqueue processing is done in the
device backend. The VMM still has to do the virtio transport emulation
(pci, ccw, mmio) and manage the vhost-user connection lifecycle, which
is complex.

Going back to Marc-André's point, why don't we focus on vfio-user so
the entire device can be moved out of the VMM?
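Here is the promised sketch of what "it boils down to a tmpfs mmap"
can look like. The path and DeviceState layout are invented for
illustration, and /dev/shm is assumed to be tmpfs (as on typical Linux
hosts):

/*
 * The backend keeps its state in a MAP_SHARED mapping of a tmpfs
 * file. If the process dies and restarts, it reopens the file and
 * resumes with the state it had at the last store.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define STATE_PATH "/dev/shm/vhost-user-net0.state"  /* hypothetical */

typedef struct {
    uint64_t restarts;        /* how many times the backend came up */
    uint16_t last_avail_idx;  /* example of per-virtqueue progress */
} DeviceState;

int main(void)
{
    /* O_CREAT without O_TRUNC: an existing file means this is a restart. */
    int fd = open(STATE_PATH, O_RDWR | O_CREAT, 0600);
    if (fd < 0 || ftruncate(fd, sizeof(DeviceState)) < 0) {
        perror("state file");
        return 1;
    }

    DeviceState *st = mmap(NULL, sizeof(*st), PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
    if (st == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Stores land in tmpfs pages that outlive this process. */
    st->restarts++;
    printf("restart #%llu, resuming at avail idx %u\n",
           (unsigned long long)st->restarts, st->last_avail_idx);

    munmap(st, sizeof(*st));
    close(fd);
    return 0;
}

A real framework would add versioning, locking, and crash-consistent
update rules on top of this, but the persistence mechanism itself is
just the shared mapping.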
Stefan