On Tue, Nov 19, 2013 at 5:11 PM, Stefan Hajnoczi <stefa...@gmail.com> wrote:
> On Tue, Nov 19, 2013 at 11:17:40AM +0100, Antonios Motakis wrote:
>> There have been discussions before on these lists on the topic of
>> connecting a QEMU guest running a virtio_net driver with an external
>> userspace ethernet switch (Snabbswitch in particular). The essential
>> requirement in this is to put the virtio backend in the external
>> userspace process.
>>
>> The preferred direction should be similar to vhost, with the main
>> difference of the control mechanism being a unix domain socket instead
>> of an ioctl interface, and of course placing the backend in a
>> userspace process instead of the kernel.
>>
>> Since we are pursuing this direction, we would like to share a more
>> detailed description of the architecture we are working on. Any
>> feedback is most welcome. It is available here:
>> http://www.virtualopensystems.com/media/snabbswitch/rfc_snabbswitch_qemu.pdf
>
> It sounds like you are proposing an interprocess virtio device
> interface. QEMU's virtio-pci emulation calls into a new vapp virtio
> device inside QEMU, which then forwards virtio device calls to the vapp
> process.
>
> This is pretty different from vhost. vhost only puts rx/tx handling in
> the kernel. Other functionality is handled by plain old QEMU virtio
> device emulation (e.g. virtio-net ctrl virtqueue).
>
Actually, our intended approach is not that different; the vapp
'client' (QEMU) still controls which virtqueues it passes to the
userspace process. There is nothing stopping us from doing the same
thing and handling the ctrl virtqueue in virtio_net.

In fact, we are currently evaluating the possibility of implementing
vapp not as a completely new component, but as a feature of vhost. We
would need to decouple some things; however, it looks like the
cleanest approach. This makes sense because virtio_net already knows
how to 'coordinate' with vhost. To do this, we need to decouple the
ioctl calls in vhost and add support for our (very similar) unix
domain socket interface; a rough sketch of what such a message-based
interface could look like follows at the end of this mail. Another
main difference is that we do not need to set up a TAP device: we are
network backend agnostic, since we let the target process decide what
to do with the network data.

> In the past device plugin interfaces have been rejected by the community
> because they can lead to lower code quality (out-of-tree devices) and an
> avenue to bypass the software license.
>
> So what's the alternative? Reuse as much of the vhost approach as
> possible and define a userspace network I/O interface instead of a
> device plugin interface.

Point taken. It is one of our goals to have the code eventually
upstreamed, so I hope that, with the above clarifications, our
intended solution is deemed more acceptable.

Antonios

>
> Stefan
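
P.S. To make the decoupling concrete, here is a rough sketch (in C) of
what one vhost-style control request could look like once it is
carried over a unix domain socket instead of an ioctl. All names below
(vapp_msg, VAPP_SET_VRING_ADDR, and so on) are illustrative
placeholders for discussion, not code from our patches:

/*
 * Illustrative sketch only -- layout and names are assumptions for
 * discussion, not actual vapp/vhost code. The idea: each vhost ioctl
 * (e.g. VHOST_SET_VRING_ADDR) becomes a framed message sent over a
 * unix domain socket, so the backend can live in any userspace
 * process instead of the kernel.
 */
#include <stdint.h>
#include <string.h>
#include <unistd.h>

/* Request codes mirroring a subset of the vhost ioctl interface. */
enum vapp_request {
    VAPP_SET_MEM_TABLE  = 1,  /* guest memory region layout */
    VAPP_SET_VRING_NUM  = 2,  /* virtqueue size */
    VAPP_SET_VRING_ADDR = 3,  /* virtqueue guest addresses */
    VAPP_SET_VRING_KICK = 4,  /* eventfd the backend polls for kicks */
    VAPP_SET_VRING_CALL = 5,  /* eventfd the backend signals for irqs */
};

struct vapp_msg {
    uint32_t request;         /* one of enum vapp_request */
    uint32_t size;            /* payload size in bytes */
    uint8_t  payload[256];    /* request-specific data */
};

/* Send one control message over the connected unix domain socket.
 * File descriptors (eventfds, shared-memory fds) would travel
 * alongside as SCM_RIGHTS ancillary data in a full implementation. */
static int vapp_send(int sock, uint32_t request,
                     const void *data, uint32_t size)
{
    struct vapp_msg msg = { .request = request, .size = size };

    if (size > sizeof(msg.payload))
        return -1;
    memcpy(msg.payload, data, size);
    return write(sock, &msg, sizeof(msg)) == (ssize_t)sizeof(msg)
           ? 0 : -1;
}

The main thing the socket buys us over an ioctl argument is that
eventfds and guest memory file descriptors can be handed to an
arbitrary userspace process via SCM_RIGHTS, which a cross-process
backend needs but the in-kernel vhost interface does not.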