Hi,

We've been working with Panasonic to expand the testing of VirtIO across
a range of hypervisors and VMMs. We've tackled this with two approaches:

  - a simple unikernel to verify features and basic functionality
    common to devices
  - rootfs images to exercise the whole device

The unikernel utilizes rcore-os's no_std VirtIO drivers to discover and
initialize a range of VirtIO devices. The tests mostly focus on
checking that no unknown feature bits are advertised and that feature
dependencies are properly realizable. This is useful for anyone
interested in moving workloads between hypervisors, as it ensures they
don't end up relying on a proprietary feature that isn't available
elsewhere.
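
To give a flavour of the kind of check involved, here is a minimal
sketch in Rust. The feature mask and values are hypothetical; the real
tests use the per-device feature sets from the VirtIO specification
rather than this made-up constant:

```rust
// Minimal sketch of a feature-bit sanity check. KNOWN_FEATURES is a
// hypothetical known-good mask, not a real device's feature set.
const KNOWN_FEATURES: u64 = 0x0000_0037;

/// Returns any advertised bits that fall outside the known set.
fn unknown_bits(advertised: u64, known: u64) -> u64 {
    advertised & !known
}

fn main() {
    let advertised = 0x0000_0137; // pretend device feature bits
    let unknown = unknown_bits(advertised, KNOWN_FEATURES);
    if unknown == 0 {
        println!("ok - no unknown feature bits");
    } else {
        println!("not ok - unknown feature bits: {:#x}", unknown);
    }
}
```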

The virgl test also does some very basic blob mapping to check the
underlying mechanics are working.

The unikernel outputs a TAP stream to make integrating into a test
harness easier.
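
For those unfamiliar with the format, the stream looks roughly like
this (a sketch with made-up test names, not the unikernel's actual
output):

```rust
// Sketch of emitting a TAP stream: a version line, a plan line, then
// one "ok"/"not ok" line per test. Test names here are illustrative.
fn tap_report(results: &[(&str, bool)]) -> String {
    let mut out = String::from("TAP version 14\n");
    out.push_str(&format!("1..{}\n", results.len()));
    for (i, (name, pass)) in results.iter().enumerate() {
        let status = if *pass { "ok" } else { "not ok" };
        out.push_str(&format!("{} {} - {}\n", status, i + 1, name));
    }
    out
}

fn main() {
    let results = [
        ("no unknown feature bits advertised", true),
        ("feature dependencies satisfiable", true),
    ];
    print!("{}", tap_report(&results));
}
```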

You can find the current state here:

  https://git.codelinaro.org/manos.pitsidianakis/virtio-tests

While we currently only build an aarch64 unikernel, the upstream
drivers have examples for riscv and x86_64 as well.

As more complex VirtIO devices like GPUs tend to have a significant
user-space component, we have also started building Linux rootfs images
to exercise those. The images themselves are built against a baseline
architecture so they can be used on as wide a range of hardware as
possible. They have been built with buildroot to make them lightweight
and as close to the upstream projects as possible without relying on
particular distro support. You may have seen the aarch64 GPU image
being added to QEMU's functional tests recently:

  https://gitlab.com/qemu-project/qemu/-/blob/master/tests/functional/test_aarch64_virt_gpu.py
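
In buildroot terms, "baseline architecture" means something along
these lines (an illustrative fragment, not the actual defconfig used
by the recipes):

```
# Illustrative fragment only - not the real defconfig
BR2_aarch64=y                # baseline aarch64, no CPU-specific tuning
BR2_TARGET_ROOTFS_EXT2=y     # build a simple ext filesystem image
BR2_TARGET_ROOTFS_EXT2_4=y   # ...as ext4
```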

I'm currently working on a similar image utilizing a subset of the
blktests project to exercise the VirtIO block devices. The various
recipes can be found here:

  https://gitlab.com/stsquad/buildroot/-/tree/adding-blktests?ref_type=heads

although the intention is for all the recipes and the basic QEMU-based
tests to be upstreamed into buildroot in due course. While I will no
doubt expand the functional tests in QEMU over time to utilize these
images, there is a wider question: where would be a good place to host
a more comprehensive VirtIO test suite? While useful for validating
proprietary hypervisors, such a suite would also cover a number of
VMMs and VirtIO backends other than QEMU:

  - rust-vmm's vhost-device collection of vhost-user backends
  - CrosVM
  - Cloud Hypervisor
  - libkrun

It could also exercise VirtIO backends with other hypervisors such as
Xen, Gunyah and WHPX.

So this brings me to the question posed in the subject: where would be
a good place to host these conformance tests?

My initial thought was to see if this is something OASIS could host as
part of the specification; however, I'm not sure OASIS is set up for
such a thing.

We could host it as part of the QEMU project as a service to the wider
community. While it should be pretty easy to expand QEMU's own tests to
work with multiple hypervisors, its test machinery isn't really set up
for non-QEMU VMMs. Ideally we would want the core repository to be able
to run on multiple hypervisors and use different VMMs and backends
depending on where they are being run.

The other option I considered was hosting with the rust-vmm project -
although maybe that only makes sense for the unikernel tests, as they
are Rust-based. We certainly need more automated testing of the
vhost-device repository, which can serve as a backend to multiple VMMs
and hypervisors.

So what do people think? Where would be a good place for a common test
repository to live?

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro
