[Bug 273732] 13.2-RELEASE-p3 Linux jails stopped working
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=273732

            Bug ID: 273732
           Summary: 13.2-RELEASE-p3 Linux jails stopped working
           Product: Base System
           Version: 13.2-RELEASE
          Hardware: amd64
                OS: Any
            Status: New
          Severity: Affects Only Me
          Priority: ---
         Component: bhyve
          Assignee: virtualization@FreeBSD.org
          Reporter: courtney.hic...@icloud.com

I just updated from FreeBSD 13.2-RELEASE-p2 to FreeBSD 13.2-RELEASE-p3, and
suddenly my Linux virtual machines would not boot properly and would panic.
They seem to get stuck just after loading the PS/2 devices. I wish I had more
detail, but I don't see anything in the logs. They are Ubuntu 20.04 and
Ubuntu 22.04 VMs started with vm-bhyve.

-- 
You are receiving this mail because:
You are the assignee for the bug.
[Bug 273732] 13.2-RELEASE-p3 Linux VMs stopped working
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=273732

courtney.hic...@icloud.com changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
            Summary|13.2-RELEASE-p3 Linux jails |13.2-RELEASE-p3 Linux VMs
                   |stopped working             |stopped working
[Bug 273732] 13.2-RELEASE-p3 Linux VMs stopped working
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=273732

--- Comment #1 from courtney.hic...@icloud.com ---
Got a trace from Ubuntu 20.04:

[  242.706918] INFO: task systemd-udevd:143 blocked for more than 120 seconds.
[  242.707824]       Not tainted 5.4.0-162-generic #179-Ubuntu
[  242.708547] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  242.709539] systemd-udevd   D    0   143    141 0x80004004
[  242.710245] Call Trace:
[  242.710577]  __schedule+0x2e3/0x740
[  242.710902]  schedule+0x42/0xb0
[  242.711313]  io_schedule+0x16/0x40
[  242.711763]  do_read_cache_page+0x438/0x840
[  242.712306]  ? file_fdatawait_range+0x30/0x30
[  242.712868]  read_cache_page+0x12/0x20
[  242.713353]  read_dev_sector+0x27/0xd0
[  242.713841]  read_lba+0xbd/0x220
[  242.714267]  ? kmem_cache_alloc_trace+0x1b0/0x240
[  242.714905]  efi_partition+0x1e0/0x700
[  242.715401]  ? vsnprintf+0x39e/0x4e0
[  242.715871]  ? snprintf+0x49/0x60
[  242.716306]  check_partition+0x154/0x250
[  242.716818]  rescan_partitions+0xae/0x280
[  242.717342]  bdev_disk_changed+0x5f/0x70
[  242.717853]  __blkdev_get+0x3e3/0x580
[  242.718335]  blkdev_get+0x3d/0x150
[  242.718781]  __device_add_disk+0x329/0x480
[  242.719434]  device_add_disk+0x13/0x20
[  242.719930]  virtblk_probe+0x4b5/0x847 [virtio_blk]
[  242.720561]  virtio_dev_probe+0x195/0x230
[  242.721083]  really_probe+0x159/0x3d0
[  242.721565]  driver_probe_device+0xbc/0x100
[  242.722109]  device_driver_attach+0x5d/0x70
[  242.722677]  __driver_attach+0xa4/0x140
[  242.722903]  ? device_driver_attach+0x70/0x70
[  242.723468]  bus_for_each_dev+0x7e/0xc0
[  242.723968]  driver_attach+0x1e/0x20
[  242.724435]  bus_add_driver+0x161/0x200
[  242.724935]  driver_register+0x74/0xd0
[  242.725425]  register_virtio_driver+0x20/0x30
[  242.725994]  init+0x54/0x1000 [virtio_blk]
[  242.726532]  ? 0xc0342000
[  242.726902]  do_one_initcall+0x4a/0x200
[  242.727404]  ? _cond_resched+0x19/0x30
[  242.727895]  ? kmem_cache_alloc_trace+0x1b0/0x240
[  242.728504]  do_init_module+0x52/0x240
[  242.728989]  load_module+0x128d/0x13d0
[  242.729479]  __do_sys_finit_module+0xbe/0x120
[  242.730040]  ? __do_sys_finit_module+0xbe/0x120
[  242.730902]  __x64_sys_finit_module+0x1a/0x20
[  242.731469]  do_syscall_64+0x57/0x190
[  242.731950]  entry_SYSCALL_64_after_hwframe+0x5c/0xc1
[  242.732600] RIP: 0033:0x7f5b011ac73d
[  242.733066] Code: Bad RIP value.
[  242.733489] RSP: 002b:7ffc02631488 EFLAGS: 0246 ORIG_RAX: 0139
[  242.734457] RAX: ffda RBX: 561faf5567e0 RCX: 7f5b011ac73d
[  242.734901] RDX: RSI: 7f5b0108cded RDI: 0005
[  242.735810] RBP: 0002 R08: R09: 561faf535e80
[  242.736721] R10: 0005 R11: 0246 R12: 7f5b0108cded
[  242.737633] R13: R14: 561faf5513d0 R15: 561faf5567e0

vm-bhyve configuration file:

loader="uefi"
cpu=2
memory=2048M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
grub_run_partition="2"
disk1_name="disk1"
disk1_type="virtio-blk"
disk1_dev="sparse-zvol"
uuid="2920ce51-045a-4fa6-8850-c14634fb0bd3"

I also have a Devuan 4 virtual machine that is stuck at "Unable to enable ACPI".
[Bug 273732] 13.2-RELEASE-p3 Linux VMs stopped working
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=273732

Corvin Köhne changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |corv...@freebsd.org

--- Comment #2 from Corvin Köhne ---
This looks like a duplicate of
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=273560

Please make sure to boot bhyve with the -A option.
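[Editor's note: with vm-bhyve, extra flags can be passed straight through to
bhyve via the bhyve_options variable in the guest's configuration file. A
minimal sketch of the workaround suggested above, assuming a guest named
ubuntu2004 under a /vm datastore (both names are examples, not taken from
this report):]

```sh
# /vm/ubuntu2004/ubuntu2004.conf  (example path; use your own datastore/guest)
# Appending this line makes vm-bhyve pass -A (generate ACPI tables) to bhyve:
bhyve_options="-A"
```

[The guest then needs to be restarted, e.g. with `vm restart ubuntu2004`, for
the new flag to take effect.]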
[Bug 273732] 13.2-RELEASE-p3 Linux VMs stopped working
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=273732

--- Comment #3 from courtney.hic...@icloud.com ---
Thanks! That looks correct. The solution for me was to apply this patch to
/usr/local/lib/vm-bhyve/vm-run:
https://github.com/churchers/vm-bhyve/pull/525/commits
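[Editor's note: for readers launching bhyve by hand rather than through
vm-bhyve, a sketch of an invocation with ACPI table generation enabled,
loosely mirroring the configuration from comment #1. The slot numbers, tap
device, UEFI firmware path, and guest name are illustrative assumptions; only
the -A flag is the fix discussed in this bug:]

```sh
# Manual bhyve launch with -A (generate ACPI tables); all other
# devices/paths are examples, not taken from this bug report.
bhyve -c 2 -m 2048M -A -H -w \
    -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
    -s 0,hostbridge \
    -s 1,lpc \
    -s 2,virtio-net,tap0 \
    -s 3,virtio-blk,/vm/ubuntu2004/disk0.img \
    -l com1,stdio \
    ubuntu2004
```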