hello -

i had been experiencing a problem trying to use a bullseye netboot to
reinstall a server's os.  the same configuration worked with a buster
netboot.

when a logical volume existed on a metadevice, that lv was activated
soon after the completion of `mdadm --assemble --scan
--config=/tmp/fai/mdadm-from-examine.conf`.  this caused the
subsequent `mdadm -W --stop` loop to fail when it reached that md:
(CMD) mdadm -W --stop /dev/md5 1> /tmp/RhKvizyXZk 2> /tmp/DrTvcNhaf6
Executing: mdadm -W --stop /dev/md5
(STDERR) mdadm: Cannot get exclusive access to /dev/md5:Perhaps a
running process, mounted filesystem or active volume group?
mdadm -W --stop /dev/md5 had exit code 1
(STDERR) mdadm: Cannot get exclusive access to /dev/md5:Perhaps a
running process, mounted filesystem or active volume group?
Command had non-zero exit code
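a quick way to see why the stop fails is to look at the array's
holders in sysfs before trying to stop it.  a minimal sketch (the
array name md5 comes from the log above; everything else is an
assumption, and nothing here touches real devices):

```shell
#!/bin/sh
# check whether anything (e.g. an activated lv) still holds the array
# open; /sys/block/<md>/holders lists devices stacked on top of it.
md=md5
holders_dir="/sys/block/$md/holders"
if [ -d "$holders_dir" ] && [ -n "$(ls -A "$holders_dir" 2>/dev/null)" ]; then
    state="busy"    # deactivate the vg first, e.g. vgchange -an
else
    state="free"    # mdadm --stop should now get exclusive access
fi
echo "$md is $state"
```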

eventually i found the udev rule that triggers the difference.  to
revert to the older behavior i copied
/lib/udev/rules.d/69-lvm-metad.rules into /etc/udev/rules.d and
applied the following patch:
--- /srv/fai/nfsroot/bullseye-amd64/etc/udev/rules.d/69-lvm-metad.rules.orig	2021-02-22 13:39:14.000000000 -0800
+++ /srv/fai/nfsroot/bullseye-amd64/etc/udev/rules.d/69-lvm-metad.rules	2022-09-01 19:22:52.426117170 -0700
@@ -75,8 +75,7 @@

 ENV{SYSTEMD_READY}="1"

-TEST!="/run/systemd/system", GOTO="direct_pvscan"
-TEST=="/run/systemd/system", GOTO="systemd_background"
+GOTO="systemd_background"

 LABEL="systemd_background"

further down in the rules file it is noted that the direct_pvscan
mode is unused and should be removed.  but since there is no systemd
in fai's bullseye nfsroot, direct_pvscan is currently what gets
selected.  in buster the method for invoking pvscan was apparently
selected at build time and defaulted to systemd_background.  if/when
FAI migrates to systemd this may raise its head again.
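for anyone curious how the stock rule makes that choice: udev's TEST
key succeeds when the named path exists, so the rule effectively
branches on whether systemd's runtime directory is present.  a sketch
of the same check in plain shell:

```shell
#!/bin/sh
# mirror of the TEST=="/run/systemd/system" check in 69-lvm-metad.rules;
# /run/systemd/system only exists when systemd is running as init.
if [ -d /run/systemd/system ]; then
    mode="systemd_background"   # hand pvscan off to a systemd service
else
    mode="direct_pvscan"        # run pvscan straight from the udev rule
fi
echo "pvscan mode: $mode"
```

in fai's nfsroot there is no systemd, so the test fails and the rule
falls through to direct_pvscan, which activates the lv immediately.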

hope this helps if someone else has a similar issue.

        andy

-- 
andrew bezella <abeze...@archive.org>
internet archive
# config file for setup-storage
#
# disabling both -c (mount-count-dependent) and -i (time-dependent)
#   checking
#
# <type> <mountpoint> <size>   <fs type> <mount options> <misc options>

disk_config disk1 disklabel:gpt-bios align-at:1M

primary -       32G     -       -
primary -       32G     -       -
primary -       8G      -       -

disk_config disk2 sameas:disk1

disk_config raid fstabkey:uuid

raid1   /       disk1.1,disk2.1 ext4    defaults,errors=remount-ro      mdcreateopts="--metadata=1.2 --assume-clean --bitmap=internal" createopts="-G 256 -L root" tuneopts="-c 0 -i 0"
raid1   swap    disk1.2,disk2.2 swap    sw                              mdcreateopts="--metadata=1.2 --assume-clean --bitmap=internal"
raid1   /tmp    disk1.3,disk2.3 ext4    defaults,nosuid,nodev,noatime   mdcreateopts="--metadata=1.2 --assume-clean --bitmap=internal" createopts="-G 256 -L tmp" tuneopts="-c 0 -i 0"
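
for reference, the raid1 / line above roughly expands to the commands
below (a sketch only -- the device names /dev/md0, /dev/sda1 and
/dev/sdb1 are assumptions, and setup-storage's exact invocation may
differ; the script just collects the commands into a string rather
than running them):

```shell
#!/bin/sh
# illustrative expansion of one raid1 line: mdcreateopts go to
# mdadm --create, createopts to mkfs.ext4, tuneopts to tune2fs.
cmds=$(cat <<'EOF'
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --metadata=1.2 --assume-clean --bitmap=internal /dev/sda1 /dev/sdb1
mkfs.ext4 -G 256 -L root /dev/md0
tune2fs -c 0 -i 0 /dev/md0
EOF
)
echo "$cmds"
```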
