I've been testing FreeBSD under Parallels on a MacBook Pro M4 MAX,
although the issue below and its handling may not be specific to aarch64
contexts.

After (from a dmesg -a of a verbose boot):

. . .
000.000078 [ 452] vtnet_netmap_attach       vtnet attached txq=1, txd=128 rxq=1, rxd=128
pci0: <unknown> at device 9.0 (no driver attached)
virtio_pci1: <VirtIO PCI (modern) GPU adapter> mem 0x10000000-0x17ffffff,0x18008000-0x18008fff,0x18000000-0x18003fff at device 10.0 on pci0
vtgpu0: <VirtIO GPU> on virtio_pci1
virtio_pci1: host features: 0x100000000 <Version1>
virtio_pci1: negotiated features: 0x100000000 <Version1>
virtio_pci1: attempting to allocate 1 MSI-X vectors (2 supported)
virtio_pci1: attempting to allocate 2 MSI-X vectors (2 supported)
pcib0: matched entry for 0.10.INTA
pcib0: slot 10 INTA hardwired to IRQ 39
virtio_pci1: using legacy interrupt
VT: Replacing driver "efifb" with new "virtio_gpu".

I ended up with no console. It also turned out that the boot
had dropped to stand-alone (single-user) mode for a manual
fsck. So: no ssh access or any other access either. I ended up
using the Windows Dev Kit 2023 with the boot device in
order to figure out what was going on and to do the needed
fsck.

It turns out that if I'm building, installing, and booting
my own kernel, there is a way around that replacement
of efifb by using:

nodevice        virtio_gpu

in the kernel configuration, so that the boot ends up
using efifb (no replacement).
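
For reference, a minimal sketch of such a custom configuration
(the config name VTGPUOFF and using GENERIC as the base are just
illustrative choices on my part) would be something like:

# sys/arm64/conf/VTGPUOFF (hypothetical name):
# everything from GENERIC, but with virtio_gpu omitted
include         GENERIC
ident           VTGPUOFF
nodevice        virtio_gpu

built and installed via the usual sort of sequence in /usr/src:

make buildkernel installkernel KERNCONF=VTGPUOFF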

Of course, this does not help with kernels from official
FreeBSD builds.

Is there a way to disable virtio_gpu for something that
runs an official kernel build (where virtio_gpu is
built into the kernel)?
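
(The sort of thing I'd hope for is a loader tunable or device
hint. For example, something like the following in
/boot/loader.conf, presuming the generic
hint.<driver>.<unit>.disabled mechanism from device.hints(5) is
honored for vtgpu0, which I have not confirmed:

# hypothetical/untested: ask newbus not to attach vtgpu0
hint.vtgpu.0.disabled="1"

But I do not know of a confirmed way.)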

===
Mark Millard
marklmi at yahoo.com

