# Summary

With default settings, LVM autoactivates LVs when it sees a new VG, e.g. after
boot or iSCSI login. In a cluster with guest disks on a shared LVM VG (e.g. on
top of iSCSI/Fibre Channel (FC)/direct-attached SAS), this can indirectly
cause guest creation or migration to fail. See bug #4997 [1] and patch #1 for
details.
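For context, the autoactivation in question is the event-driven activation
LVM performs when a device appears. Roughly, the udev/systemd machinery runs
something like the following (device name is illustrative; the
`autoactivation` report field is available in recent LVM releases):

```
# Run by the lvm2 event machinery when a new PV shows up, e.g. after
# iSCSI login; "-aay" activates only LVs whose autoactivation flag is set.
pvscan --cache --activate ay /dev/sdX

# Inspect the flag on existing LVs.
lvs -o vg_name,lv_name,autoactivation
```

With the stock settings, every LV in a newly visible VG has the flag enabled,
which is what this series changes for guest disks.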
The primary goal of this series is to avoid autoactivating thick LVs that hold
guest disks, in order to fix #4997. To this end, it patches the LVM storage
plugin to create new LVs with autoactivation disabled, and implements a
pve8to9 check and subcommand to disable autoactivation on existing LVs (see
below for details). The series does the same for LVM-thin storages. While
LVM-thin storages are inherently local and cannot run into #4997, it can still
make sense to avoid unnecessarily activating thin LVs at boot.

# Why not before PVE 9

As discussed in v2, we should only apply this series for PVE 9, as we can then
be sure that all nodes are running at least PVE 8. Here is why we couldn't
apply it for PVE 8 already: PVE 7/Bullseye's LVM does not know
`--setautoactivation`, and a user upgrading from PVE 7 will temporarily have a
mixed 7/8 cluster. Once this series is applied, the PVE 8 nodes will create
new LVs with `--setautoactivation n`, which the PVE 7 nodes do not know. In my
tests, the PVE 7 nodes can read and interact with such LVs just fine, *but*:
as soon as a PVE 7 node creates a new (unrelated) LV, the
`--setautoactivation n` flag is reset to the default `y` on *all* LVs of the
VG. I presume this is because creating a new LV rewrites the VG metadata, and
the PVE 7 LVM doesn't write out the `--setautoactivation n` flag. I imagine
(but have not tested) that this would cause problems on a mixed cluster.

# pve8to9 script

As discussed in v2, this series implements

(a) a pve8to9 check to detect thick and thin LVs with autoactivation enabled
(b) a script to disable autoactivation on LVs when needed, intended to be run
    manually by the user during the 8->9 upgrade

The question is where to put script (b). Patch #4 moves the existing checks
from `pve8to9` to `pve8to9 checklist`, to be able to implement (b) as a new
subcommand `pve8to9 updatelvm`. I realize this is a huge user-facing change,
and we don't have to go with this approach. It is also incomplete, as patch #5
doesn't update the manpage yet.
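Concretely, the plugin change and the new subcommand boil down to the
following two LVM invocations (VG and LV names are illustrative, not taken
from the patches):

```
# New guest disks (patch #1): create the LV with autoactivation disabled.
lvcreate --setautoactivation n -n vm-100-disk-0 -L 32G sharedvg

# Existing guest disks (pve8to9 subcommand): disable the flag in place.
lvchange --setautoactivation n sharedvg/vm-100-disk-0
```

Both commands only touch the VG metadata; already-active LVs stay active, they
just no longer get activated automatically on the next scan.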
However, what I like about this approach is that pve8to9 then bundles all
tasks related to the 8->9 upgrade. If we decide to go with it, I can send
another patch to update the manpage and add documentation.

# Bonus fix for FC/SAS multipath+LVM issue

As it turns out, this series seems to additionally fix an issue on hosts with
LVM on FC/SAS-attached LUNs *with multipath*, where LVM would report "Device
mismatch detected" warnings because the LVs are activated too early in the
boot process, before multipath is available. Our currently suggested
workaround is to install multipath-tools-boot [2]. With this series applied
and once users have upgraded to 9, this shouldn't be necessary anymore, as LVs
are not autoactivated after boot.

# Interaction with zfs-initramfs

zfs-initramfs used to ship an initramfs-tools script that unconditionally
activates *all* VGs that are visible at boot time, ignoring the autoactivation
flag. A fix was already applied in v2 [3].

# Patch summary

- Patch #1 makes the LVM plugin create new LVs with `--setautoactivation n`
- Patch #2 makes the LVM-thin plugin disable autoactivation for new LVs
- Patch #3 runs perltidy on pve8to9, can be dropped
- Patch #4 moves pve8to9 checks to a subcommand (see pve8to9 section above)
- Patch #5 adds a pve8to9 subcommand to disable autoactivation (see pve8to9
  section above)

# Changes since v3

- rebase and run perltidy
- see individual patches for other changes
- @Michael tested+reviewed v3 (thanks!), but since there were some code
  changes, I'm not including the trailers here, even though the changes to
  #1+#2 were mostly comment/formatting changes.
# Changes since v2

- drop zfsonlinux patch that has since been applied
- add patches for LVM-thin
- add pve8to9 patches

v3: https://lore.proxmox.com/pve-devel/20250429113646.25738-1-f.we...@proxmox.com/
v2: https://lore.proxmox.com/pve-devel/20250307095245.65698-1-f.we...@proxmox.com/
v1: https://lore.proxmox.com/pve-devel/20240111150332.733635-1-f.we...@proxmox.com/

[1] https://bugzilla.proxmox.com/show_bug.cgi?id=4997
[2] https://pve.proxmox.com/mediawiki/index.php?title=Multipath&oldid=12039#%22Device_mismatch_detected%22_warnings
[3] https://lore.proxmox.com/pve-devel/ad4c806c-234a-4949-885d-8bb369860...@proxmox.com/

pve-storage:

Friedrich Weber (2):
  fix #4997: lvm: create: disable autoactivation for new logical volumes
  lvmthin: disable autoactivation for new logical volumes

 src/PVE/Storage/LVMPlugin.pm     | 13 ++++++++++++-
 src/PVE/Storage/LvmThinPlugin.pm | 17 ++++++++++++++++-
 2 files changed, 28 insertions(+), 2 deletions(-)

pve-manager:

Friedrich Weber (3):
  pve8to9: run perltidy
  pve8to9: move checklist to dedicated subcommand
  pve8to9: detect and (if requested) disable LVM autoactivation

 PVE/CLI/pve8to9.pm | 178 ++++++++++++++++++++++++++++++++++++++++++++-
 bin/Makefile       |   2 +-
 2 files changed, 175 insertions(+), 5 deletions(-)

Summary over all repositories:
 4 files changed, 203 insertions(+), 7 deletions(-)

-- 
Generated by git-murpp 0.8.1