New problem getting FAI to run cleanly with LVM.  I'm sure I'm missing
something, but again I'm not sure what.  There was a similar question on
the list in Nov 2021, but I don't see an answer for it.

<https://lists.uni-koeln.de/pipermail/linux-fai/2021-November/012789.html>

The tail end of my format.log looks like this:

Executing: yes | mdadm --create /dev/md0 --level=raid1 --force --run --raid-devices=2 /dev/sdb1 /dev/sda1
Executing: echo frozen | tee /sys/block/md0/md/sync_action
Executing: mkfs.ext4  /dev/md0
Executing: yes | mdadm --create /dev/md1 --level=raid1 --force --run --raid-devices=2 /dev/sda3 /dev/sdb3
Executing: echo frozen | tee /sys/block/md1/md/sync_action
Executing: pvcreate -ff -y  /dev/md1
pvcreate -ff -y  /dev/md1 had exit code 5
(STDERR)   Can't open /dev/md1 exclusively.  Mounted filesystem?
(STDERR)   Can't open /dev/md1 exclusively.  Mounted filesystem?
(STDERR)   Error opening device /dev/md1 for reading at 0 length 512.
(STDERR)   Can't open /dev/md1 exclusively.  Mounted filesystem?
(STDERR)   Error opening device /dev/md1 for reading at 0 length 4096.
(STDERR)   Cannot use /dev/md1: device has a signature
Command had non-zero exit code
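
For what it's worth, next time it fails I plan to inspect /dev/md1 directly
before cleaning anything up, to see exactly which signature pvcreate is
objecting to. Something along these lines (a sketch, not run yet):

  wipefs -n /dev/md1    # dry run: list signatures without erasing anything
  blkid -p /dev/md1     # probe the device directly, bypassing the blkid cache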

The server has four drives:  two SSDs that I'm trying to mirror and put
volumes on, and two larger HDDs that FAI should be ignoring.   It does seem
to be ignoring the two HDDs now, thanks to help in the last thread.

$disklist shows up as expected:
root@srv03:/etc/lvm# echo $disklist
sda sdb sdc sdd
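
(As I understand it, setup-storage takes its disks from $disklist, and that
can be pinned down in a class .var file if it ever comes to that; a
hypothetical example, class name made up:

  # class/MYSERVER.var -- restrict setup-storage to two disks
  disklist="sda sdb"

I haven't needed that so far, since the HDDs are being left alone.)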

I found suggestions in various places, most of which seem to echo this Red
Hat article: https://access.redhat.com/solutions/110203 ... but so far no
solutions. The md device is not mounted yet, and there should be no reason
for FAI to be holding the device open at this point in the install (none
that I can think of, anyway). `fuser -m -v /dev/md1` shows no processes
holding the device open.
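
Beyond fuser, the only other checks I could think of (device-mapper holds
don't show up in fuser) are roughly these, following the Red Hat article:

  ls /sys/block/md1/holders/   # dm-* entries here would mean DM still maps md1
  dmsetup info -c              # open counts for any device-mapper targets
  udevadm settle               # wait out any udev events still in flight

I'll capture that output on the next failed run as well.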

I have tried removing every trace of previous data from the drives, in case
pvcreate was tripping over its own old metadata, but I get the same result
after each run.  My cleanup log (run between FAI attempts) is pasted below.
What else should I be trying?

root@srv03:/tmp/fai# pvscan
  PV /dev/md1   VG vg_system       lvm2 [<430.66 GiB / <4.18 GiB free]
  Total: 1 [<430.66 GiB] / in use: 1 [<430.66 GiB] / in no VG: 0 [0   ]

root@srv03:/tmp/fai# dmsetup table
vg_system-home: 0 632250368 linear 9:1 52430848
vg_system-root: 0 52428800 linear 9:1 2048
vg_system-usr: 0 209715200 linear 9:1 684681216

root@srv03:/tmp/fai# lsblk
NAME                 MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
sda                    8:0    0   7.3T  0 disk
├─sda1                 8:1    0   350M  0 part
│ └─md0                9:0    0   349M  0 raid1
├─sda2                 8:2    0    16G  0 part
├─sda3                 8:3    0   7.3T  0 part
│ └─md1                9:1    0   7.3T  0 raid1
│   ├─vg_system-root 254:0    0    25G  0 lvm
│   ├─vg_system-home 254:1    0 301.5G  0 lvm
│   └─vg_system-usr  254:2    0   100G  0 lvm
└─sda4                 8:4    0     1M  0 part
sdb                    8:16   0   7.3T  0 disk
├─sdb1                 8:17   0   350M  0 part
│ └─md0                9:0    0   349M  0 raid1
├─sdb2                 8:18   0    16G  0 part
├─sdb3                 8:19   0   7.3T  0 part
│ └─md1                9:1    0   7.3T  0 raid1
│   ├─vg_system-root 254:0    0    25G  0 lvm
│   ├─vg_system-home 254:1    0 301.5G  0 lvm
│   └─vg_system-usr  254:2    0   100G  0 lvm
└─sdb4                 8:20   0     1M  0 part
sdc                    8:32   0 447.1G  0 disk
sdd                    8:48   0 447.1G  0 disk


root@srv03:/tmp/fai# for dev in vg_system-home vg_system-root vg_system-usr; do dmsetup remove $dev; done

root@srv03:/tmp/fai# dmsetup table
No devices found

root@srv03:/tmp/fai# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
root@srv03:/tmp/fai# mdadm --stop /dev/md1
mdadm: stopped /dev/md1
root@srv03:/tmp/fai# for dev in sda1 sda3 sdb1 sdb3; do mdadm --zero-superblock /dev/$dev; done

root@srv03:/tmp/fai# for device in $disklist; do wipefs -a /dev/${device}; dd if=/dev/zero of=/dev/${device} bs=512 seek=$(( $(blockdev --getsz /dev/${device}) - 1024 )) count=1024; done
/dev/sda: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
/dev/sda: 8 bytes were erased at offset 0x74702555e00 (gpt): 45 46 49 20 50 41 52 54
/dev/sda: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
/dev/sda: calling ioctl to re-read partition table: Success
1024+0 records in
1024+0 records out
524288 bytes (524 kB, 512 KiB) copied, 0.0108355 s, 48.4 MB/s
/dev/sdb: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
/dev/sdb: 8 bytes were erased at offset 0x74702555e00 (gpt): 45 46 49 20 50 41 52 54
/dev/sdb: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
/dev/sdb: calling ioctl to re-read partition table: Success
1024+0 records in
1024+0 records out
524288 bytes (524 kB, 512 KiB) copied, 0.0527586 s, 9.9 MB/s
1024+0 records in
1024+0 records out
524288 bytes (524 kB, 512 KiB) copied, 0.00516622 s, 101 MB/s
1024+0 records in
1024+0 records out
524288 bytes (524 kB, 512 KiB) copied, 0.00425131 s, 123 MB/s

root@srv03:/tmp/fai# lsblk
NAME MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda    8:0    0   7.3T  0 disk
sdb    8:16   0   7.3T  0 disk
sdc    8:32   0 447.1G  0 disk
sdd    8:48   0 447.1G  0 disk
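
One more thing I'm considering for the next attempt (untested, so just a
sketch): wiping the assembled array and its member partitions as well, since
wipefs on the bare disks only clears the partition tables, while the old LVM
label lives inside /dev/md1:

  # after removing the LV mappings as above, but before stopping the arrays:
  wipefs -a /dev/md1               # clear the LVM PV label on the array itself
  mdadm --stop /dev/md0
  mdadm --stop /dev/md1
  for p in sda1 sda3 sdb1 sdb3; do
      mdadm --zero-superblock /dev/$p
      wipefs -a /dev/$p            # clear anything left on the members
  done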
