ufs_dirbad: /mnt: bad dir ino 50626 at offset 0: mangled entry
In one of my VMs, I do a bunch of testing to a nano image on /dev/md0, and increasingly I am getting a panic on unmounting the file system.

The image is just:

truncate -s 5G /tmp/junk.bin
mdconfig -f /tmp/junk.bin

and then I dd my nano image, mount it, make some configuration changes, and then umount it.

After cycles of this (unmounts, mounts, etc.), I get:

panic: ufs_dirbad: /mnt: bad dir ino 50626 at offset 0: mangled entry
cpuid = 1
time = 1634912598
KDB: stack backtrace:
#0 0x80c77035 at kdb_backtrace+0x65
#1 0x80c28a47 at vpanic+0x187
#2 0x80c288b3 at panic+0x43
#3 0x80f345f7 at ufs_lookup_ino+0xdc7
#4 0x80ceae1d at vfs_cache_lookup+0xad
#5 0x80cf872c at lookup+0x46c
#6 0x80cf785c at namei+0x26c
#7 0x80d1c979 at vn_open_cred+0x509
#8 0x80d12a3e at kern_openat+0x26e
#9 0x810b4d3c at amd64_syscall+0x10c
#10 0x8108bdab at fast_syscall_common+0xf8
Uptime: 4m48s

I forced an fsck on reboot, and the file system is clean. I also deleted /mnt and recreated the directory, and still get this issue. Any idea what might be causing it, or how I can better track it down?

    ---Mike
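[Editor's note: the cycle described above can be sketched as a script. This is a minimal sketch, not from the original mail: the nano image path /tmp/nano.img, the md unit number, and the s1a slice layout are all assumptions. It defaults to dry-run and only echoes the commands; clearing DRY_RUN would require root on FreeBSD.]

```shell
#!/bin/sh
# Sketch of one test cycle from the report above.  Assumptions (not from
# the original mail): the nano image lives at /tmp/nano.img, mdconfig
# allocates unit 0, and the root file system is on slice s1a.
# DRY_RUN defaults to 1 so the commands are only echoed; set DRY_RUN=
# to execute them for real (requires root on FreeBSD).
: "${DRY_RUN:=1}"

run() {
    if [ -n "$DRY_RUN" ]; then echo "$*"; else "$@"; fi
}

one_cycle() {
    run truncate -s 5G /tmp/junk.bin
    run mdconfig -f /tmp/junk.bin        # prints the allocated unit, e.g. "md0"
    run dd if=/tmp/nano.img of=/dev/md0 bs=1m
    run mount /dev/md0s1a /mnt
    # ... make configuration changes under /mnt ...
    run umount /mnt                      # the panic reportedly fires around here
    run mdconfig -d -u 0                 # detach the md device again
}

one_cycle
```

Repeating one_cycle in a loop matches the workload described; per the report, the ufs_dirbad panic appears only after several iterations.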
Re: ufs_dirbad: /mnt: bad dir ino 50626 at offset 0: mangled entry
On Fri, Oct 22, 2021 at 10:36:01AM -0400, mike tancsa wrote:
> In one of my VMs, I do a bunch of testing to a nano image on /dev/md0
> and increasingly I am getting a panic on umounting the file system
>
> The image is just
>
> truncate -s 5G /tmp/junk.bin
> mdconfig -f /tmp/junk.bin
>
> and then I dd my nano image, mount it, make some configuration changes
> and then umount it.
>
> After cycles of this, unmounts, mounts etc, I get
>
> panic: ufs_dirbad: /mnt: bad dir ino 50626 at offset 0: mangled entry
> cpuid = 1
> time = 1634912598
> KDB: stack backtrace:
> #0 0x80c77035 at kdb_backtrace+0x65
> #1 0x80c28a47 at vpanic+0x187
> #2 0x80c288b3 at panic+0x43
> #3 0x80f345f7 at ufs_lookup_ino+0xdc7
> #4 0x80ceae1d at vfs_cache_lookup+0xad
> #5 0x80cf872c at lookup+0x46c
> #6 0x80cf785c at namei+0x26c
> #7 0x80d1c979 at vn_open_cred+0x509
> #8 0x80d12a3e at kern_openat+0x26e
> #9 0x810b4d3c at amd64_syscall+0x10c
> #10 0x8108bdab at fast_syscall_common+0xf8
> Uptime: 4m48s
>
> I forced an fsck on reboot, and the file system is clean. I also deleted
> /mnt and recreated the directory and still get this issue. Any idea
> what might be causing it or how I can better track it down ?

Is the VM image stored on ZFS? Which kernel revision is the host running?
Re: ufs_dirbad: /mnt: bad dir ino 50626 at offset 0: mangled entry
On 10/22/2021 11:21 AM, Mark Johnston wrote:
> On Fri, Oct 22, 2021 at 10:36:01AM -0400, mike tancsa wrote:
>> After cycles of this, unmounts, mounts etc, I get
>>
>> panic: ufs_dirbad: /mnt: bad dir ino 50626 at offset 0: mangled entry
>> cpuid = 1
>> time = 1634912598
>> KDB: stack backtrace:
>> #0 0x80c77035 at kdb_backtrace+0x65
>> #1 0x80c28a47 at vpanic+0x187
>> #2 0x80c288b3 at panic+0x43
>> #3 0x80f345f7 at ufs_lookup_ino+0xdc7
>> #4 0x80ceae1d at vfs_cache_lookup+0xad
>> #5 0x80cf872c at lookup+0x46c
>> #6 0x80cf785c at namei+0x26c
>> #7 0x80d1c979 at vn_open_cred+0x509
>> #8 0x80d12a3e at kern_openat+0x26e
>> #9 0x810b4d3c at amd64_syscall+0x10c
>> #10 0x8108bdab at fast_syscall_common+0xf8
>> Uptime: 4m48s
>>
>> I forced an fsck on reboot, and the file system is clean. I also deleted
>> /mnt and recreated the directory and still get this issue. Any idea
>> what might be causing it or how I can better track it down ?
>
> Is the VM image stored on ZFS? Which kernel revision is the host running?

Hi,

The VM is in an Ubuntu host / KVM (Linux ubuntu1 5.11.0-38-generic #42-Ubuntu SMP Fri Sep 24 ) with the VM image stored on its local ZFS file system. Do you think it's some strange interaction with the hypervisor? The FreeBSD guest VM is RELENG_13 as of this morning, GENERIC kernel stable/13-d8359af5b.

    ---Mike
Re: ufs_dirbad: /mnt: bad dir ino 50626 at offset 0: mangled entry
On Fri, Oct 22, 2021 at 11:33:22AM -0400, mike tancsa wrote:
> On 10/22/2021 11:21 AM, Mark Johnston wrote:
> > On Fri, Oct 22, 2021 at 10:36:01AM -0400, mike tancsa wrote:
> >> After cycles of this, unmounts, mounts etc, I get
> >>
> >> panic: ufs_dirbad: /mnt: bad dir ino 50626 at offset 0: mangled entry
> >> cpuid = 1
> >> time = 1634912598
> >> KDB: stack backtrace:
> >> #0 0x80c77035 at kdb_backtrace+0x65
> >> #1 0x80c28a47 at vpanic+0x187
> >> #2 0x80c288b3 at panic+0x43
> >> #3 0x80f345f7 at ufs_lookup_ino+0xdc7
> >> #4 0x80ceae1d at vfs_cache_lookup+0xad
> >> #5 0x80cf872c at lookup+0x46c
> >> #6 0x80cf785c at namei+0x26c
> >> #7 0x80d1c979 at vn_open_cred+0x509
> >> #8 0x80d12a3e at kern_openat+0x26e
> >> #9 0x810b4d3c at amd64_syscall+0x10c
> >> #10 0x8108bdab at fast_syscall_common+0xf8
> >> Uptime: 4m48s
> >>
> >> I forced an fsck on reboot, and the file system is clean. I also deleted
> >> /mnt and recreated the directory and still get this issue. Any idea
> >> what might be causing it or how I can better track it down ?
> > Is the VM image stored on ZFS? Which kernel revision is the host running?
>
> Hi,
>
> The VM is in an Ubuntu host / KVM (Linux ubuntu1 5.11.0-38-generic
> #42-Ubuntu SMP Fri Sep 24 ) with the VM image stored on its local zfs
> file system. Do you think its some strange interaction with the
> hypervisor ? The FreeBSD guest VM is RELENG_13 as of this morning,
> GENERIC kernel stable/13-d8359af5b

I was seeing similar unexpected UFS corruption in a VM that appears to be the result of a ZFS regression on the host. But said ZFS regression is likely FreeBSD-specific and is also relatively new.
Re: IPv6 inconsistent local routing
Friends, thanks to the good vibrations, this is going easier than I thought.

The problem is already well known, and is actually intended behaviour (while probably not "works as designed"). BPF didn't work under these circumstances either, but that one was fixed:
https://reviews.freebsd.org/rS162539

And here a brave warrior figured out that the whole thing is essentially wrong and tried to solve it - but, sadly, the effort didn't survive. Nevertheless it makes a good read for understanding it all:
https://reviews.freebsd.org/D3868

And WOW! - here is exactly my bug: kern/165190

So, it seems this is one of those things that are not really good, but people have agreed to just push them under the carpet and get on with life. Similar to the sched-ule stuff that boils up here every 7-8 months, yet nothing ever happens. (And you know that I have a fix for that one.)

So what I am going to do is just follow the strategy of change 162539 and fix my rcvif name alongside. I think that should work and avoid the problem with the scoped link-locals. Then I have a fix for this one as well.

Cheerio,
PMc
FreeBSD 12.3-BETA1 Now Available
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

The first BETA build of the 12.3-RELEASE release cycle is now available.

Installation images are available for:

o 12.3-BETA1 amd64 GENERIC
o 12.3-BETA1 i386 GENERIC
o 12.3-BETA1 powerpc GENERIC
o 12.3-BETA1 powerpc64 GENERIC64
o 12.3-BETA1 powerpcspe MPC85XXSPE
o 12.3-BETA1 sparc64 GENERIC
o 12.3-BETA1 armv6 RPI-B
o 12.3-BETA1 armv7 BANANAPI
o 12.3-BETA1 armv7 BEAGLEBONE
o 12.3-BETA1 armv7 CUBIEBOARD
o 12.3-BETA1 armv7 CUBIEBOARD2
o 12.3-BETA1 armv7 CUBOX-HUMMINGBOARD
o 12.3-BETA1 armv7 RPI2
o 12.3-BETA1 armv7 WANDBOARD
o 12.3-BETA1 armv7 GENERICSD
o 12.3-BETA1 aarch64 GENERIC
o 12.3-BETA1 aarch64 RPI3
o 12.3-BETA1 aarch64 PINE64
o 12.3-BETA1 aarch64 PINE64-LTS

Note regarding arm SD card images: For convenience for those without console access to the system, a freebsd user with a password of freebsd is available by default for ssh(1) access. Additionally, the root user password is set to root. It is strongly recommended to change the password for both users after gaining access to the system.

Installer images and memory stick images are available here:

https://download.freebsd.org/ftp/releases/ISO-IMAGES/12.3/

The image checksums follow at the end of this e-mail.

If you notice problems you can report them through the Bugzilla PR system or on the -stable mailing list.

If you would like to use SVN to do a source based update of an existing system, use the "releng/12.3" branch.

Please note, the release notes page is not yet complete, and will be updated on an ongoing basis as the 12.3-RELEASE cycle progresses.

=== Virtual Machine Disk Images ===

VM disk images are available for the amd64, i386, and aarch64 architectures.
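[Editor's note: since the checksums follow at the end of the mail, here is a sketch of verifying a downloaded image against that list. The helper name verify_image and the file names are illustrative, not part of the announcement; the checksum file format shown is the usual "SHA256 (file) = digest" style.]

```shell
# Sketch: verify a downloaded image against the published SHA256 list.
# The checksum file is assumed to contain lines of the form:
#   SHA256 (FreeBSD-12.3-BETA1-amd64-memstick.img) = <64 hex digits>
# Uses sha256sum(1); on FreeBSD, "sha256 -q FILE" yields the same digest.
verify_image() {
    img=$1 sumfile=$2
    # Pull the digest published for this file name out of the list.
    expected=$(awk -v img="$img" '$2 == ("(" img ")") { print $NF }' "$sumfile")
    actual=$(sha256sum "$img" | awk '{ print $1 }')
    if [ -n "$expected" ] && [ "$expected" = "$actual" ]; then
        echo "OK: $img matches the published checksum"
    else
        echo "MISMATCH: $img may be corrupt or incomplete" >&2
        return 1
    fi
}
```

Usage (illustrative file names): verify_image FreeBSD-12.3-BETA1-amd64-memstick.img CHECKSUM.SHA256-FreeBSD-12.3-BETA1-amd64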
Disk images may be downloaded from the following URL (or any of the FreeBSD download mirrors):

https://download.freebsd.org/ftp/releases/VM-IMAGES/12.3-BETA1/

The partition layout is:

~ 16 kB - freebsd-boot GPT partition type (bootfs GPT label)
~ 1 GB  - freebsd-swap GPT partition type (swapfs GPT label)
~ 20 GB - freebsd-ufs GPT partition type (rootfs GPT label)

The disk images are available in QCOW2, VHD, VMDK, and raw disk image formats. The image download size is approximately 135 MB and 165 MB respectively (amd64/i386), decompressing to a 21 GB sparse image.

Note regarding arm64/aarch64 virtual machine images: a modified QEMU EFI loader file is needed for qemu-system-aarch64 to be able to boot the virtual machine images. See this page for more information:

https://wiki.freebsd.org/arm64/QEMU

To boot the VM image, run:

% qemu-system-aarch64 -m 4096M -cpu cortex-a57 -M virt \
  -bios QEMU_EFI.fd -serial telnet::,server -nographic \
  -drive if=none,file=VMDISK,id=hd0 \
  -device virtio-blk-device,drive=hd0 \
  -device virtio-net-device,netdev=net0 \
  -netdev user,id=net0

Be sure to replace "VMDISK" with the path to the virtual machine image.
=== Amazon EC2 AMI Images ===

FreeBSD/amd64 EC2 AMIs are available in the following regions:

af-south-1 region: ami-07d05ad414a9f8472
eu-north-1 region: ami-0e011616d99aab6e9
ap-south-1 region: ami-0353784d9a6017aa6
eu-west-3 region: ami-0afb2f8402c4a82fa
eu-west-2 region: ami-02918a435e0cd98fc
eu-south-1 region: ami-0a7dc024862ea63f0
eu-west-1 region: ami-07107404446b92479
ap-northeast-3 region: ami-07066f83a06ddab23
ap-northeast-2 region: ami-0366001ff916b395c
me-south-1 region: ami-078b85654cacc486f
ap-northeast-1 region: ami-07dedd26af360eb64
sa-east-1 region: ami-09b3c75c9a31d2c60
ca-central-1 region: ami-092ca358203a56a48
ap-east-1 region: ami-01a19c91b2a8cad6a
ap-southeast-1 region: ami-0a6f43ad9cdd110ef
ap-southeast-2 region: ami-05bc4ddd4a05d44f6
eu-central-1 region: ami-039b62d8c5b9bbf68
us-east-1 region: ami-018c14e4041a50e8f
us-east-2 region: ami-0320212114becb677
us-west-1 region: ami-04a0a51749650cd62
us-west-2 region: ami-0a7446dac684eca8e

FreeBSD/aarch64 EC2 AMIs are available in the following regions:

af-south-1 region: ami-0bfcc3dba1b319179
eu-north-1 region: ami-014f963325c296197
ap-south-1 region: ami-09b6d3572900499e7
eu-west-3 region: ami-0bffd6e960f8d7048
eu-west-2 region: ami-0c1fcee0afa4cc2b9
eu-south-1 region: ami-0ac31b0c2e7d69dee
eu-west-1 region: ami-008a582950360f0dd
ap-northeast-3 region: ami-062dc1ae7a380682b
ap-northeast-2 region: ami-0b4a3b6ca1dcbaa69
me-south-1 region: ami-0dae6b7d6944e9248
ap-northeast-1 region: ami-021db817524d42d97
sa-east-1 region: ami-08ee7de984de07ef8
ca-central-1 region: ami-03277e073a30109e4
ap-east-1 region: ami-02eab2702b4e82760
ap-southeast-1 region: ami-0153729447991a249
ap-southeast-2 region: ami-0e2f30365f526e69c
eu-central-1 region: ami-01dcbf513c6435645
us-east-1 region: ami-0f959f978f570ef51
us-east-2 region: ami-0ead26896a7b2a200
us-west-1 region: ami-004fdbb3b57fcec46
us-west-2 region: ami-0d3c217a0ca1caae1

=== Vagrant Images ===

FreeBSD/amd64 images are ava