Ah, great! I was waiting for someone to test those instructions.

Looks like adding the PCI root port is a common mistake here; I really should mention it somewhere. Same for the CPU model: I should probably recommend host-passthrough right at the start, before people scroll further down and go with core2duo instead.

Looks like it all went pretty well aside from that, though, which is great!

- Nicolas

On 2016-05-13 19:16, Okky Hendriansyah wrote:
On Sat, May 14, 2016 at 3:32 AM, <christop...@padarom.io <mailto:christop...@padarom.io>> wrote:

    Hello guys,


Hi Christopher,

    I'm pretty new to VFIO and GPU passthrough. I have set everything
    up now according to the "PCI Passthrough via OVMF" guide on the
    ArchLinux wiki, but have run into some issues that I couldn't get
    a solution for anywhere.
    The relevant part of my setup is as follows:
    Intel i5 4690K
    WindForce GTX 660 Ti
    AsRock Z97M Anniversary Mainboard
    Host OS: Arch Linux (Linux 4.5.4-1-ARCH)
    Guest OS: Windows 10 64-Bit Home
    I have set up the VM via virt-manager and installed it onto a RAW
    image that's on one of my SSDs (50GB).
    The IOMMU group that's relevant for VFIO is this one:
    IOMMU group 1
    00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v3/4th
    Gen Core Processor PCI Express x16 Controller [8086:0c01] (rev 06)
    01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK104
    [GeForce GTX 660 Ti] [10de:1183] (rev a1)
    01:00.1 Audio device [0403]: NVIDIA Corporation GK104 HDMI Audio
    Controller [10de:0e0a] (rev a1)
    I set up my vfio.conf accordingly:
    # cat /etc/modprobe.d/vfio.conf
    options vfio-pci ids=10de:1183,10de:0e0a,8086:0c01


You only need to pass your GPU's PCI IDs here; do not attempt to pass the PCI bridge! So drop *8086:0c01* and just use *options vfio-pci ids=10de:1183,10de:0e0a*. After updating this config, make sure to regenerate the initramfs with *sudo mkinitcpio -p linux*, assuming you have already added all the VFIO modules to */etc/mkinitcpio.conf* as described in the Arch Linux wiki.
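For reference, the corrected config and rebuild step would look roughly like this (a sketch based on the Arch wiki setup; adjust the preset name in the mkinitcpio call if you use a non-default kernel):

```shell
# /etc/modprobe.d/vfio.conf -- bind only the two GPU functions, not the bridge
options vfio-pci ids=10de:1183,10de:0e0a

# Rebuild the initramfs so the new options take effect at early boot
sudo mkinitcpio -p linux
```

After rebooting, *lspci -nnk* should show *vfio-pci* as the driver in use for 01:00.0 and 01:00.1, while 00:01.0 keeps its normal *pcieport* driver.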

    And it initialized correctly:
    # dmesg | grep -i vfio
    [    0.309866] VFIO - User Level meta-driver version: 0.3
    [    0.323490] vfio_pci: add [10de:1183[ffff:ffff]] class
    0x000000/00000000
    [    0.336838] vfio_pci: add [10de:0e0a[ffff:ffff]] class
    0x000000/00000000
    [    0.336845] vfio_pci: add [8086:0c01[ffff:ffff]] class
    0x000000/00000000
    I passed both 01:00.0 and 01:00.1 to my VM and when booting it up
    it shows up on my secondary monitor that's connected to my
    graphics card. As a CPU I used the "core2duo" model.


If you do not run into issues using the *host* model in a plain QEMU script (or *host-passthrough* in *virt-manager*), I think it is preferable to use that instead of *core2duo*, since the guest then sees your exact host CPU model.
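In virt-manager this corresponds to a *cpu* element like the following in the domain XML (a minimal sketch; you can set it via *virsh edit <your-domain>* or by typing "host-passthrough" into the CPU model field in virt-manager):

```xml
<cpu mode='host-passthrough'/>
```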

    Side question: I can't start my VM with 00:01.0 attached, but
    thought I had to as it's in the same IOMMU group. Do I even have
    to add it to my vfio.conf then?


No, you only need to pass through your GPU's PCI IDs, not the PCI bridge.

    The problem I'm having is installing the driver for the graphics
    card. My display resolution is stuck at something like 640x480. I
    have tried installing 365.19 and 364.72, after about 5% progress
    my windows crashes into a bluescreen with the error
    "SYSTEM_THREAD_EXCEPTION_NOT_HANDLED (nvlddmkm.sys)", no matter
    whether I install it in GeForce Experience, directly through the
    installer or the device manager.
    I have also tried uninstalling the original driver with DDU, which
    didn't help either.


Please try skipping the PCI bridge first and use the *host* or *host-passthrough* model in your CPU config. Also, do not forget to hide the KVM CPUID and set a Hyper-V vendor ID to hide the virtualization from the NVIDIA driver. I have encountered that kind of exception message when upgrading to Windows 10 and to Windows 10 Threshold 2, and I worked around it by temporarily switching the CPU model to *core2duo*, running DDU, and then reverting back to *host*.
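The KVM-hiding part would look something like this in the domain XML (a sketch; the *vendor_id* element needs a reasonably recent libvirt, and the value string is arbitrary -- anything up to 12 characters that is not a known hypervisor ID works):

```xml
<features>
  <hyperv>
    <!-- Report a non-KVM vendor ID to the guest; the value is arbitrary -->
    <vendor_id state='on' value='whatever'/>
  </hyperv>
  <kvm>
    <!-- Hide the KVM CPUID leaf so the NVIDIA driver does not detect the VM -->
    <hidden state='on'/>
  </kvm>
</features>
```

If your libvirt does not support *vendor_id* yet, the equivalent can be passed to QEMU directly via *-cpu host,kvm=off*.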

    GPU-Z correctly identifies my GPU as a GK104, but it (and my
    device manager) both only show "Device" as it's name. I don't know
    whether that is problematic or not.
    Does anyone have any ideas how to fix this?
    Sorry for the lengthy post, but looking forward to any and all
    answers, thanks!


Best regards,
Okky Hendriansyah


_______________________________________________
vfio-users mailing list
vfio-users@redhat.com
https://www.redhat.com/mailman/listinfo/vfio-users
