I just upgraded a 15-CURRENT kernel+userland from a build about 20 days old
(git commit 565c887) to one built today (git commit e39e6be). Afterwards, I've
noticed two things that are probably really the same underlying problem:

- My Windows Server 2023 VM reports that there are not enough resources for
  COM2 & COM4
- My OpenBSD 7.3 VM dies with:

acpicmos0 at acpi0
"Bhyve_V_Gen_Counter_V1" at acpi0 not configured
cpu0: using VERW MDS workaround
pvbus0 at mainbus0: bhyve
pci0 at mainbus0 bus 0
0:3:0: io address conflict 0xc000/0x80
0:5:0: io address conflict 0xc080/0x40
pchb0 at pci0 dev 0 function 0 vendor "AMD", unknown product 0x7432 rev 0x00
virtio0 at pci0 dev 3 function 0 "Qumranet Virtio Storage" rev 0x00
virtio0: can't map i/o space
: Cannot attach (5)
virtio1 at pci0 dev 5 function 0 "Qumranet Virtio Network" rev 0x00
vio0 at virtio1: address 1e:17:37:23:2f:cb
virtio1: msix per-VQ

In this case, 0:3:0 is the virtio-blk device and 0:5:0 is the virtio-net device.
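For context, the guest's PCI layout comes from a bhyve slot configuration
along these general lines (the disk path and tap name below are placeholders,
not my exact command):

  -s 0,amd_hostbridge \
  -s 3,virtio-blk,/vm/openbsd73.img \
  -s 5,virtio-net,tap0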

If I boot from a snapshot install74.img instead, it gets a bit further and
then panics:

acpicmos0 at acpi0
"Bhyve_V_Gen_Counter_V1" at acpi0 not configured
cpu0: using VERW MDS workaround
pvbus0 at mainbus0: bhyve
pci0 at mainbus0 bus 0
0:2:0: io address conflict 0xc080/0x80
0:3:0: io address conflict 0xc000/0x80
0:5:0: io address conflict 0xc100/0x40
pchb0 at pci0 dev 0 function 0 vendor "AMD", unknown product 0x7432 rev 0x00
virtio0 at pci0 dev 2 function 0 "Qumranet Virtio Storage" rev 0x00
virtio0: can't map i/o space
: Cannot attach (5)
virtio1 at pci0 dev 3 function 0 "Qumranet Virtio Storage" rev 0x00
vioblk0 at virtio1
scsibus0 at vioblk0: 1 targets
sd0 at scsibus0 targ 0 lun 0: <VirtIO, Block Device, >
sd0: 8192MB, 512 bytes/sector, 16777216 sectors
virtio1: msix per-VQ
virtio2 at pci0 dev 5 function 0 "Qumranet Virtio Network" rev 0x00
vio0 at virtio2: address ff:ff:ff:ff:ff:ff
panic: vq_size not power of two: 65535

As above, 0:3:0 is virtio-blk and 0:5:0 is virtio-net; 0:2:0 is the extra
virtio-blk for the install74.img file.
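
My guess, and this is only a sketch of the logic rather than the actual
OpenBSD driver code, is that both failures trace back to the same I/O BAR
conflict: with the virtio I/O window unusable, register reads come back as
all ones (note the ff:ff:ff:ff:ff:ff MAC on vio0 above), and a queue size of
0xffff then fails the driver's power-of-two sanity check, roughly like:

#include <stdint.h>
#include <stdio.h>

/*
 * Illustration only (hypothetical code, not OpenBSD's): a virtqueue size
 * must be a nonzero power of two.  If the queue-size register can't really
 * be read because the I/O BAR never got mapped, the read returns 0xffff,
 * which fails the check -- matching "panic: vq_size not power of two: 65535".
 */
static int
vq_size_is_valid(uint16_t vq_size)
{
	/* exactly one bit set => power of two */
	return vq_size != 0 && (vq_size & (vq_size - 1)) == 0;
}

int
main(void)
{
	uint16_t vq_size = 0xffff;	/* what an unmapped/failed read returns */

	if (!vq_size_is_valid(vq_size))
		printf("vq_size not power of two: %u\n", (unsigned)vq_size);
	return 0;
}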

Any ideas on what change caused this?

Thanks!

-Dustin
