[Bug 262292] Seemingly not possible for IPv6 to function over tap devices on if_bridge
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=262292

--- Comment #6 from Andrey V. Elsukov ---

I have a similar configuration on FreeBSD 13 and it works well. The host system has an if_bridge with fe80::1 configured as the IPv6 link-local address (LLA) plus an IPv6 global address (GA), and it uses two tap interfaces as members. The bhyve VMs use fe80::1%vtnet0 as their default gateway and an IPv6 GA from the same prefix as the GA on the if_bridge.
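As a rough illustration of the guest side of that setup (the addresses and the vtnet0 interface name below are examples, not taken from the actual VMs in this report), the VM's /etc/rc.conf could look roughly like:

    # /etc/rc.conf inside the bhyve guest (illustrative addresses)
    ifconfig_vtnet0_ipv6="inet6 2001:db8:10::2/64"   # GA from the same prefix as the bridge
    ipv6_defaultrouter="fe80::1%vtnet0"              # link-local gateway configured on the host bridge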
[Bug 262292] Seemingly not possible for IPv6 to function over tap devices on if_bridge
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=262292

--- Comment #7 from punkt.de Hosting Team ---

No problem whatsoever, provided you place all layer 3 addresses on the bridge interface itself and not on any member interface. This is mandatory according to the handbook section on bridging and has been discussed in half a dozen other tickets. I have FreeBSD 13.3 and up available for testing, but it used to "just work" on 11.x and 12.x, too.

Kind regards, Patrick
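For reference, a minimal host-side sketch of the layout described above (interface names and addresses are illustrative, not from the reporter's machine) puts every layer 3 address on bridge0 and none on the tap members:

    # /etc/rc.conf on the bhyve host (illustrative names and addresses)
    cloned_interfaces="bridge0 tap0 tap1"
    ifconfig_bridge0="addm tap0 addm tap1 up"
    ifconfig_bridge0_ipv6="inet6 2001:db8:10::1/64"
    ifconfig_bridge0_alias0="inet6 fe80::1/64"
    # tap0 and tap1 carry no addresses; they are plain bridge members

The fe80::1 LLA on bridge0 matches what the guests in comment #6 use as their default gateway.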
A way to have a console (aarch64) under macOS Parallels: build the kernel with nodevice virtio_gpu; any way with an official kernel build?
I've been testing FreeBSD under Parallels on a MacBook Pro M4 MAX, although the issue below and its handling may not be specific to aarch64 contexts.

After the following (from a dmesg -a of a verbose boot):

. . .
000.78 [ 452] vtnet_netmap_attach vtnet attached txq=1, txd=128 rxq=1, rxd=128
pci0: at device 9.0 (no driver attached)
virtio_pci1: mem 0x1000-0x17ff,0x18008000-0x18008fff,0x1800-0x18003fff at device 10.0 on pci0
vtgpu0: on virtio_pci1
virtio_pci1: host features: 0x1
virtio_pci1: negotiated features: 0x1
virtio_pci1: attempting to allocate 1 MSI-X vectors (2 supported)
virtio_pci1: attempting to allocate 2 MSI-X vectors (2 supported)
pcib0: matched entry for 0.10.INTA
pcib0: slot 10 INTA hardwired to IRQ 39
virtio_pci1: using legacy interrupt
VT: Replacing driver "efifb" with new "virtio_gpu".

I end up with no console. I also ended up in a state where, it turned out, booting dropped to single-user mode for a manual fsck. So: no ssh access or any other access. I ended up using the Windows Dev Kit 2023 with the boot device in order to figure out what was going on and to do the needed fsck.

It turns out that if I'm building, installing, and booting my own kernel, there is a way around that replacement of efifb, by using:

nodevice virtio_gpu

in the kernel configuration, so that the boot ends up using efifb (no replacement).

Of course, this does not help with kernels from official FreeBSD builds.

Is there a way to disable virtio_gpu for something that runs an official kernel build (where virtio_gpu is built into the kernel)?

===
Mark Millard
marklmi at yahoo.com
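In case it helps anyone reproduce the custom-kernel workaround, a minimal kernel configuration sketch for it (the MYKERNEL file name is just an example) could look like:

    # sys/arm64/conf/MYKERNEL   (example name, aarch64 build)
    include GENERIC
    ident   MYKERNEL

    # keep efifb as the vt(4) backend by not building the virtio_gpu driver
    nodevice        virtio_gpu

followed by the usual make buildkernel / installkernel with KERNCONF=MYKERNEL.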
Re: A way to have a console (aarch64) under macOS Parallels: build the kernel with nodevice virtio_gpu; any way with an official kernel build?
> On Feb 13, 2025, at 14:55, Warner Losh wrote:
>
>> On Thu, Feb 13, 2025, 3:40 PM Mark Millard wrote:
>> . . .
>> Is there a way to disable virtio_gpu for something that
>> runs an official kernel build (where virtio_gpu is
>> built into the kernel)?
>
> boot_serial=no

In loader.conf? How would that lead to not doing:

VT: Replacing driver "efifb" with new "virtio_gpu".

? Using the loader menu, all 4 combinations stopped in the same place and in the same way, until I built and used a kernel that did not have virtio_gpu at all, if I remember right. Both efifb and virtio_gpu appear to be the video-side alternatives.

(It seems that something more is needed for virtio_gpu to actually end up providing a console, if it can. Maybe the Parallels Toolbox for an actual Linux guest provides what is missing in that kind of context? I'm not after X11 or the like, just an operational console for seeing information and dealing with problems when ssh cannot be used.)

I've no clue whether the issue is specific to Parallels: I've really only used Hyper-V (and only got FreeBSD working as a guest OS there on amd64) and Parallels (aarch64 currently). So I do not know if it would be worth a tunable to, say, set the vd_priority offset from VD_PRIORITY_GENERIC, such that virtio_gpu could end up not replacing efifb. (I looked in the source code a little bit for this message.)

===
Mark Millard
marklmi at yahoo.com
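For completeness, one knob that might be worth trying with a stock GENERIC kernel is the generic device.hints(5) disable entry. This is only a sketch, under the assumption that the attach-time hint check applies to the vtgpu device seen in the verbose boot output earlier in the thread; it has not been verified under Parallels:

    # /boot/loader.conf (assumed approach, untested here)
    # vtgpu0 is the device name from the dmesg above
    hint.vtgpu.0.disabled="1"

If the hint is honored for vtgpu, the efifb vt(4) backend should stay in place without rebuilding the kernel; if not, the nodevice virtio_gpu custom kernel remains the known-working approach from the earlier message.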