Hello Xuan,

The main differences between virtio-loopback and VDUSE are:

1) the data sharing mechanism

2) the Virtio/vhost-user devices supported by each solution


In particular, Virtio-loopback implements a zero-copy memory-mapping
mechanism, where the data are directly accessible from user space, and
supports vhost-user-blk, vhost-user-input, and vhost-user-rng.


To the best of my knowledge, VDUSE is based on a bounce-buffer
mechanism, which does not follow the zero-copy principle. In addition,
it supports vhost-user-blk and vhost-user-net only.

Kind regards,

Timos


On Tue, Apr 18, 2023 at 11:01 AM Xuan Zhuo <[email protected]>
wrote:

> On Thu, 13 Apr 2023 16:35:59 +0300, Timos Ampelikiotis <
> [email protected]> wrote:
> > Dear virtio-dev community,
> >
> > I would like to introduce you to Virtio-loopback, a Proof of Concept
> > (PoC) that
> > we have been working on at Virtual Open Systems in the context of the
> > Automotive Grade Linux community (Virtualization & Containers expert
> > group - EG-VIRT).
> >
> > We consider this work as a PoC and we are not currently planning to
> > upstream it. However, if the zero-copy or any other aspect of this work
> > is interesting for other Virtio implementations, we would be glad to
> > discuss it further.
>
> What is the difference between this and vduse?
>
> Thanks.
>
> >
> > Overview:
> > ---------
> >
> > Virtio-loopback is a new hardware abstraction layer, based on virtio,
> > designed for non-hypervisor environments. The main objective is to
> > enable applications to communicate with vhost-user devices in a
> > non-hypervisor environment.
> >
> > In more detail, Virtio-loopback's design consists of a new transport
> > (Virtio-loopback), a user-space daemon (Adapter), and a vhost-user
> > device. The data path has been implemented following the "zero-copy"
> > principle, where vhost-user devices access the virtqueues directly in
> > kernel space. This first implementation supports multi-queue operation,
> > does not require virtio protocol changes, and applies only minor
> > modifications to the vhost-user library. The vhost-user devices
> > supported today are vhost-user-rng (both Rust and C versions),
> > vhost-user-input, and vhost-user-blk.
> >
> > Motivation & requirements:
> > -------------------------
> >
> > 1. Enable the usage of the same user-space driver on both virtualized and
> >    non-virtualized environments.
> >
> > 2. Maximize performance with zero-copy design principles.
> >
> > 3. Applications using such drivers run unchanged and transparently in
> >    both virtualized and non-virtualized environments.
> >
> > Design description:
> > -------------------
> >
> > a) Components' description:
> > --------------------------
> >
> > The Virtio-loopback architecture consists of three main components
> > described below:
> >
> > 1) Driver: In order to route the VIRTIO communication to user space,
> >    the virtio-loopback driver was implemented; it consists of:
> >    - A new transport layer, based on virtio-mmio, which is responsible
> >      for routing the read/write communication of the virtio device to
> >      the adapter binary.
> >    - A character device which works as an intermediate layer between
> >      the user-space components and the transport layer. The character
> >      device helps the adapter provide all the required information and
> >      initialize the transport and, at the same time, provides direct
> >      access to the vrings from user space. The access to the vrings is
> >      based on a memory-mapping mechanism which allows the vhost-user
> >      device to read and write data directly in the kernel's memory
> >      without any copy.
> >
> > 2) Adapter: Implements the role that QEMU has in the corresponding
> >    virtualized scenario. Specifically, it combines the functionality
> >    of two main QEMU components, the virtio-mmio transport emulation
> >    and the vhost-user backend, in order to work as a bridge between
> >    the transport and the vhost-user device. The two main parts of the
> >    adapter are:
> >    - A vhost-user backend, which is the main communication point with
> >      the vhost-user device.
> >    - A virtio-mmio emulation layer, which handles the messages coming
> >      from the driver and translates them into vhost-user
> >      messages/actions.
> >
> > 3) Vhost-user device: This component required only minimal
> >    modifications to make the vrings directly accessible in the
> >    kernel's memory.
> >
> > b) Communication between the virtio-loopback components:
> > -------------------------------------------------------
> >
> > After describing the role of each component, a few details need to be
> > given about how they interact with each other and the mechanisms used.
> >
> > 1) Transport & Adapter:
> >    - The two components share a communication data structure which
> >      describes the current read/write operation requested by the
> >      transport.
> >    - When this data structure has been filled with all the required
> >      information, the transport triggers an EventFD and waits. The
> >      adapter wakes up, takes the corresponding actions, and finally
> >      notifies and unlocks the transport via an IOCTL system call.
> >    - Compared to the virtualized-environment scenario, the adapter
> >      issues an IOCTL to the driver in place of an interrupt.
> >
> > 2) Adapter & Vhost-user device:
> >    - The mechanisms used between these two components are the same as
> >      in the virtualized-environment case:
> >      a) A UNIX socket is in place to exchange the VHOST-USER messages.
> >      b) EventFDs are used to trigger VIRTIO kick/call requests.
> >
> > 3) Transport & Vhost-user device:
> >    - Since the vrings are allocated in the kernel's memory, the
> >      vhost-user device needs to request access to them from the
> >      virtio-loopback driver. This requirement is served by
> >      implementing MMAP and IOCTL system calls in the driver.
> >
> > c) Vrings & Zero copy memory mapping mechanism:
> > ----------------------------------------------
> >
> > The vrings are allocated by the virtio driver in the kernel's memory
> > space. In order to make them accessible from user space, in particular
> > to the vhost-user device, a new memory-mapping mechanism was created
> > in the virtio-loopback driver. This mechanism is based on a
> > page-fault handler which maps the accessed pages on demand.
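[For readers unfamiliar with the mechanism, a kernel-side sketch of such an on-demand mapping is shown below. This is pseudocode under stated assumptions, not the actual virtio-loopback driver: all `loopback_*` names are invented, and error handling is elided. The character device's mmap() installs no pages up front; each page is inserted by the fault callback on first access.]

```c
/* Hedged sketch only; not the actual virtio-loopback code. */
static vm_fault_t loopback_vm_fault(struct vm_fault *vmf)
{
    struct loopback_dev *dev = vmf->vma->vm_private_data;
    /* Translate the faulting offset into the vring page allocated by
     * the driver, and map only that page into the user VMA. */
    struct page *page = loopback_vring_page(dev, vmf->pgoff);

    if (!page)
        return VM_FAULT_SIGBUS;
    get_page(page);
    vmf->page = page;            /* core MM completes the mapping */
    return 0;
}

static const struct vm_operations_struct loopback_vm_ops = {
    .fault = loopback_vm_fault,
};

/* mmap() entry point of the character device: pages are not mapped
 * eagerly; they are faulted in on demand by the handler above. */
static int loopback_mmap(struct file *file, struct vm_area_struct *vma)
{
    vma->vm_private_data = file->private_data;
    vma->vm_ops = &loopback_vm_ops;
    return 0;
}
```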
> >
> > Known issues & room for improvement:
> > -----------------------------------
> >
> > Known limitations found in the current implementation:
> > - The memory-mapping mechanism needs improvements: in the current
> >   implementation the device can potentially access the whole kernel
> >   memory. A more fine-grained mapping can be enforced by the kernel
> >   by narrowing down the shared memory block.
> >
> > Possible next development targets:
> > - Security checks for the memory shared with user space (the
> >   vhost-user device)
> > - Parallel device handling for the virtio-loopback transport and
> >   adapter
> > - Support for more vhost-user devices
> >
> > More information:
> > ----------------
> >
> > The full description of the technology can be found in the links below:
> > - Virtio-loopback design document:
> >   <https://git.virtualopensystems.com/virtio-loopback/docs/-/blob/virtio-loopback-rfc/design_docs/EG-VIRT_VOSYS_virtio_loopback_design_v1.4_2023_04_03.pdf>
> > - How to test the technology:
> >   <https://git.virtualopensystems.com/virtio-loopback/docs/-/blob/virtio-loopback-rfc/README.md>
> >
> > Links for all the key components of the design can be found below:
> > 1) Virtio-loopback-transport:
> >    <https://git.virtualopensystems.com/virtio-loopback/loopback_driver/-/tree/virtio-loopback-rfc>
> > 2) Adapter:
> >    <https://git.virtualopensystems.com/virtio-loopback/adapter_app/-/tree/virtio-loopback-rfc>
> > 3) Vhost-user devices in QEMU:
> >    <https://git.virtualopensystems.com/virtio-loopback/qemu/-/tree/virtio-loopback-rfc>
> >
> > Virtio-loopback has been tested on an RCAR-M3 board (AGL Needlefish)
> > and on x86 systems (Fedora 37). The results have been found to be
> > comparable with the VDUSE technology in the virtio-blk case:
> > - Automotive Grade Linux All Member Meeting Spring (8-9/03/2023)
> >   presentation:
> >   <https://static.sched.com/hosted_files/aglammspring2023/44/vosys_virtio-loopback-berlin_2023-03-08.pdf>
> >   + Activity done in the context of the AERO EU project (grant agreement
> No
> > 101092850)
> >
> > Thank you for taking the time to review this PoC,
> > I would appreciate your feedback and suggestions for improvements.
> >
> > Best regards,
> > Timos Ampelikiotis
> >
>
