On Wed, 15 Mar 2017 04:50:40 +0100 Manuel Ullmann <ullman.al...@posteo.de> wrote:
> > Now what I want to do is to rebind the GPU to the i915 driver after
> > having started the VM. Do you have any ways or advice? Thanks.
>
> In theory, this would be done with GVT-g. But it is considered unstable
> by its developers and does not have a very convincing kernel option help
> text [1]. It might also still require patches for SeaBIOS and qemu,
> although I have found a support note for qemu-2.5 regarding XenGT. Most
> of its development happened outside of the upstream trees, most likely to
> avoid being constrained by QA requirements.
>
> Since Alex develops vfio, which tries to prevent device interference and
> thus makes sharing impossible, but passthrough possible in a quite stable
> way, I'm not sure whether we would see another helpful blog post about
> this once it reaches stability.

Heh, I hadn't read the Kconfig option before; it's definitely more than a
stub, but whether it's ready for production is a different story. As of
v4.10, KVMGT is present and functional, but perhaps not very pretty or
stable.

The benefit and curse of the new vfio mediated device interface is that it
allows exposing software-defined devices. These can be purely virtual
devices in the host kernel, such as the sample PCI serial device, or
portions of real devices, such as we have with KVMGT and NVIDIA vGPU.
IOMMU-based isolation works when the IOMMU hardware can distinguish
transactions from different devices, which reference a different set of
IOMMU page tables per device. In the case of a portion of a device, all
transactions look the same, so we can't use the IOMMU for more than
coarse-grained isolation. This is theoretically not really a problem when
that device is a GPU, because the GPU itself has support for per-process
GTTs (Graphics Translation Tables), which provide translation and
isolation between processes, and we can overload this to provide isolation
between each user of said portion of the GPU.

What this boils down to, though, and the reason that mediated devices are
both a benefit and a curse, is that we have no central, shared, hardware
isolation for these devices. The "vendor" drivers, as we're calling them,
are responsible for providing the isolation via device-specific means,
such as these per-process GTTs. They also need to make sure that whatever
command channel they use between the virtual device and the physical
device is properly "mediated" such that the virtual device cannot break
out of its isolation or cause other badness (thus the name of this
infrastructure). The degree and quality of the isolation for a mediated
device therefore lies squarely with the vendor driver. Let's just say that
KVMGT has not instilled a great deal of confidence that such
considerations have been fully embodied. It's getting better though; the
code in v4.11 has improved, and we'll see over time whether it really has
the robustness that we expect.

It's really quite a cool feature though, and as Manuel suggests, it does
obviate the need to rebind IGD to any drivers. Simply create the vGPU type
you want, make use of it in the guest while simultaneously using the
physical device in the host, shut down the VM, and remove the vGPU from
the host (a rough sketch of that cycle follows below). There's not yet any
support for connecting a vGPU to a physical display, so for now you'll
need to use guest-based remoting tools (e.g. VNC). Above the kernel,
QEMU-2.7 or newer should work just fine. libvirt support is under
development and will likely be in their next release, at least for
managing VMs with pre-created mediated devices. SPICE support is also
under development.
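For anyone who wants to try it, a minimal sketch of that create / use /
remove cycle through the mdev sysfs interface could look something like
the following. The 0000:00:02.0 address and the i915-GVTg_V4_2 type name
here are only placeholders; check mdev_supported_types on your own host
for what is actually exposed:

#!/usr/bin/env python3
# Rough sketch of the vfio mediated device (mdev) sysfs workflow described
# above. Run as root; both the PCI address and the vGPU type name below
# are assumptions, not values taken from this thread.
import os
import subprocess
import uuid

IGD = "/sys/bus/pci/devices/0000:00:02.0"   # assumed IGD location
MDEV_TYPE = "i915-GVTg_V4_2"                # assumed/placeholder vGPU type
vgpu_uuid = str(uuid.uuid4())

# 1. See which vGPU types the host driver exposes for this device.
types_dir = os.path.join(IGD, "mdev_supported_types")
print("available types:", os.listdir(types_dir))

# 2. Create the vGPU by writing a UUID to the chosen type's "create" node.
with open(os.path.join(types_dir, MDEV_TYPE, "create"), "w") as f:
    f.write(vgpu_uuid)

# 3. Hand the mediated device to the guest (QEMU 2.7+ accepts sysfsdev=).
subprocess.run([
    "qemu-system-x86_64", "-enable-kvm", "-m", "4096",
    "-device", "vfio-pci,sysfsdev=/sys/bus/mdev/devices/" + vgpu_uuid,
    # ... the rest of your usual VM configuration ...
])

# 4. After the VM shuts down, remove the vGPU again.
with open("/sys/bus/mdev/devices/%s/remove" % vgpu_uuid, "w") as f:
    f.write("1")

The host's i915 driver stays bound to the physical GPU the whole time;
only the mediated device comes and goes.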
I'm not sure what, if any, plans Intel has for physical displays. Guest
Windows drivers are not yet released AFAIK, and Linux guests should
probably use a v4.8 kernel, or perhaps v4.10+, as there was a regression
in v4.9 (or please go find bugs using other kernels). If you're an early
adopter and have a Broadwell+ system, give it a try, particularly if
you're willing to report bugs on their mailing list <igv...@lists.01.org>.

Thanks,
Alex