Alright, using VNC helped immensely, but as it turns out, what did the
trick was removing PCI:01:00.0 from the list of passed devices in order
to investigate a blue screen, then adding it back while the VM was live,
thinking it would queue the operation for the next reboot.
Attempting to boot the VM with the GPU already passed through causes the
boot process to hang whether or not I pass a vBIOS file to qemu, and
vfio then complains that my ROM contents are invalid. However, booting
the VM with the sound subsystem alone (I figure it ensures that the
IOMMU group is grabbed by kvm, but I'm not aware of the technical
details) and then using virsh or virt-manager to attach the actual GPU
device to the guest works where everything I've tried for the last week
has failed. While that is some counter-intuitive behavior, and I should
probably file a bug on this somewhere, I'm glad I at least got it
working.
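For the archives, the hot-attach step looks roughly like this (the PCI
address 01:00.0 is from my setup, and the domain name "win8" is just
what I assume from my XML file name; adjust both to taste):

```xml
<!-- gpu.xml: the VGA function of the 970 (01:00.0 on my system) -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```

With the VM already booted (sound subsystem only), I then attach the
GPU with something like `virsh attach-device win8 gpu.xml --live`.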
Thank you Will and Jonathan, and thank you vfio-users.
On 2016-01-19 11:24, Nicolas Roy-Renaud wrote:
On 2016-01-19 03:19, Jonathan Scruggs wrote:
Hi,
> Since the 970 is set as my primary GPU, it is responsible for
displaying my BIOS and bootloader until Linux boots, at which point I
have its framebuffer disabled and vfio-pci latches onto it. The 210 GT,
however, is still managed by the nouveau driver.
Out of curiosity, why do you have the 970 set as the primary boot
display? I know on some motherboards the first slot will be the
fastest and the other ones have reduced PCIe lanes when multiple
cards are used. However, my BIOS has a setting to override which slot
is the default. You may want to check if you have this and set the
210 as primary, or, for testing purposes, swap the cards around to
take the 970 out of the primary slot. I have a 970 in a secondary
slot and it passes through correctly.
I know it's going to sound silly, but it's because my motherboard is
made in a way that if I put my (massive) 970 in the secondary slot, I
end up blocking access to all my SATA ports and aligning one of the
intake fans with the PSU's exhaust. Those are all things I could
technically work around, but unless it's impossible to get the
passthrough to work otherwise, it just seems like more trouble than
it's worth.
Also, I did check my mobo settings for a way to change which card
should act as the boot device, but the only option that resembles
that is an integrated/dedicated graphics switch.
In your win8 XML file, did you pass through the audio part of your
card, or did I miss it? Even if you don't use HDMI audio, you can try
passing it in.
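For reference, the HDMI audio device is usually function 1 on the same
slot as the GPU, so (assuming the card sits at 01:00.0, as in your
logs) the hostdev entry would look something like:

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
  </source>
</hostdev>
```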
Right, I removed that line shortly before I decided to come asking for
help. Whether it's there or not doesn't seem to change much.
On 2016-01-18 16:23, Will Marler wrote:
Well it looks to me like your GeForce GTX 970 is correctly being
claimed by vfio-pci, so I would expect that if you passed it to a VM,
the VM should be able to see it. I'd suggest removing <timer
name='hypervclock' present='yes'/> from your XML file, and accessing
it via VNC. You should be able to go into the Windows device manager
and see the video card there (where I actually think you'll see an
Error 43 currently, because of the hypervclock line).
Good thinking, I'll give the VNC server a shot. Also, from what
I've read, using hv_vendor_id= with recent (git) versions of qemu
fools the nvidia driver without requiring you to disable the hyperv
clock and other performance-enhancing features, which is why I've
left them there.
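The qemu incantation I've seen suggested for this looks roughly like
the following (the hv_ flag set here is an example, not my exact config,
and the vendor string is an arbitrary value of up to 12 characters; the
point is just that it no longer reads "Microsoft Hv"):

```
-cpu host,kvm=off,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff,hv_vendor_id=whatever
```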
_______________________________________________
vfio-users mailing list
vfio-users@redhat.com
https://www.redhat.com/mailman/listinfo/vfio-users