On 10/06/2025 16:25, Michael Köppl wrote:
> Left 2 comments inline. The changes overall look good to me and worked
> during my tests.
>
> On 4/29/25 13:36, Friedrich Weber wrote:
>> Starting with PVE 9, the LVM and LVM-thin plugins create new LVs with
>> the `--setautoactivation n` flag to fix #4997 [1]. However, this does
>> not affect already existing LVs of setups upgrading from PVE 8.
>>
>> Hence, add a new `updatelvm` subcommand to `pve8to9` that finds guest
>> volume LVs with autoactivation enabled on LVM and LVM-thin storages,
>> and disables autoactivation on each of them. This is done while
>> holding a lock on the storage, to avoid metadata update conflicts for
>> shared LVM storages.
>>
>> Also, check for guest volume LVs with autoactivation enabled during
>> the `checklist` command, and suggest to run `updatelvm` if necessary.
>> The check is done without a lock, as taking a lock doesn't have
>> advantages here.
>>
>> While some users may have disabled autoactivation on the VG level to
>> work around #4997, consciously do not take this into account:
>> Disabling autoactivation on LVs too does not hurt, and avoids issues
>> in case the user decides to re-enable autoactivation on the VG level
>> in the future.
>>
>> [1] https://bugzilla.proxmox.com/show_bug.cgi?id=4997
>>
>> Signed-off-by: Friedrich Weber <f.we...@proxmox.com>
>> ---
>>
>> Notes:
>>     open questions:
>>
>>     - if users create new LVs on PVE 8 nodes after running `pve8to9
>>       updatelvm` but before actually upgrading to PVE 9, these will still
>>       be created with autoactivation enabled. Is this a case we want to
>>       consider? If yes, we could suggest running `pve8to9 updatelvm` (only)
>>       post-upgrade on all nodes, but users may forget this...
>
> Pondered a bit on this, but I'm not sure if it really makes sense to
> consider this case. It should be clear that if I create LVs after
> running `updatelvm`, the new LVs will still behave like they always have
> on PVE 8.
> I think someone forgetting to run `updatelvm` post-upgrade is
> far more likely than someone running the checklist for PVE 9, continuing
> to create new LVs, and only then upgrading to PVE 9.
Agree. If we suggest running `updatelvm` *only* post-upgrade, users may
forget to run it at all, so suggesting to run it pre-upgrade seems safer.
If users do create an LV post-`updatelvm` but pre-upgrade to 9, the
fallout (triggering #4997) is limited to this particular LV, and can be
fixed relatively easily when they see #4997 occur again.

>>     - `updatelvm` will disable autoactivation on all guest volumes,
>>       regardless of the guest owner. In other words, it will disable
>>       autoactivation on LVs owned by a different node. Is this a problem?
>>       My tests suggest it's unproblematic, but may be worth a second
>>       look. We can change `updatelvm` to only update LVs owned by the
>>       local node, but then might miss LVs that are migrated between
>>       `updatelvm` runs on different nodes.
>>
>>     new in v3
>>
>>  PVE/CLI/pve8to9.pm | 152 ++++++++++++++++++++++++++++++++++++++++++++-
>>  1 file changed, 151 insertions(+), 1 deletion(-)
>>
>> diff --git a/PVE/CLI/pve8to9.pm b/PVE/CLI/pve8to9.pm
>> index fbb96491..972c91ea 100644
>> --- a/PVE/CLI/pve8to9.pm
>> +++ b/PVE/CLI/pve8to9.pm
>> @@ -22,7 +22,7 @@ use PVE::NodeConfig;
>>  use PVE::RPCEnvironment;
>>  use PVE::Storage;
>>  use PVE::Storage::Plugin;
>> -use PVE::Tools qw(run_command split_list file_get_contents);
>> +use PVE::Tools qw(run_command split_list file_get_contents trim);
>>  use PVE::QemuConfig;
>>  use PVE::QemuServer;
>>  use PVE::VZDump::Common;
>> @@ -1372,6 +1372,95 @@ sub check_dkms_modules {
>>      }
>>  }
>>
>> +my $query_lvm_lv_autoactivation = sub {
>> +    my ($vg_name) = @_;
>> +
>> +    my $cmd = [
>> +        '/sbin/lvs',
>> +        '--separator', ':',
>> +        '--noheadings',
>> +        '--unbuffered',
>> +        '--options', "lv_name,autoactivation",
>> +        $vg_name,
>> +    ];
>> +
>> +    my $result;
>> +    eval {
>> +        run_command($cmd, outfunc => sub {
>> +            my $line = shift;
>> +            $line = trim($line);
>> +
>> +            my ($name, $autoactivation_flag) = split(':', $line);
>> +            return if !$name;
>> +
>> +            $result->{$name} = $autoactivation_flag eq 'enabled';
>> +        });
>> +    };
>> +    die "could not list LVM logical volumes: $@\n" if $@;
>> +    return $result;
>> +};
>> +
>> +my $foreach_lvm_vg_with_autoactivated_guest_volumes = sub {
>> +    my ($with_lock, $code) = @_;
>> +
>> +    my $cfg = PVE::Storage::config();
>> +    my $storage_info = PVE::Storage::storage_info($cfg);
>> +
>> +    for my $storeid (sort keys %$storage_info) {
>> +        my $scfg = PVE::Storage::storage_config($cfg, $storeid);
>> +        my $vgname = $scfg->{vgname};
>> +        my $type = $scfg->{type};
>> +        next if $type ne 'lvm' && $type ne 'lvmthin';
>> +
>> +        my $info = $storage_info->{$storeid};
>> +        if (!$info->{enabled} || !$info->{active}) {
>> +            log_skip("storage '$storeid' ($type) is disabled or inactive");
>> +            next;
>> +        }
>> +
>> +        my $find_autoactivated_lvs = sub {
>> +            my $lvs_autoactivation = $query_lvm_lv_autoactivation->($vgname);
>> +            my $vollist = PVE::Storage::volume_list($cfg, $storeid);
>> +
>> +            my $autoactivated_guest_lvs = [];
>> +            for my $volinfo (@$vollist) {
>> +                my $volname = (PVE::Storage::parse_volume_id($volinfo->{volid}))[1];
>> +                push @$autoactivated_guest_lvs, $volname if $lvs_autoactivation->{$volname};
>> +            }
>> +
>> +            $code->($storeid, $vgname, $autoactivated_guest_lvs);
>> +        };
>> +
>> +        if ($with_lock) {
>> +            my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
>> +            $plugin->cluster_lock_storage($storeid, $scfg->{shared}, undef,
>> +                $find_autoactivated_lvs);
>> +        } else {
>> +            $find_autoactivated_lvs->();
>> +        }
>> +    };
>> +};
>> +
>> +sub check_lvm_autoactivation {
>> +    log_info("Check for LVM autoactivation settings on 'lvm' and 'lvmthin' storages...");
>> +
>> +    my $needs_fix = 0;
>> +    $foreach_lvm_vg_with_autoactivated_guest_volumes->(0, sub {
>> +        my ($storeid, $vgname, $autoactivated_lvs) = @_;
>> +
>> +        if (scalar(@$autoactivated_lvs) > 0) {
>> +            log_warn("storage '$storeid' has guest volumes with autoactivation enabled");
>> +            $needs_fix = 1;
>> +        } else {
>> +            log_pass("all guest volumes on storage '$storeid' have autoactivation disabled");
>> +        }
>> +    });
>> +    log_warn("please run 'pve8to9 updatelvm' to disable autoactivation on LVM guest volumes")

> I think for users it would help if some additional info was added to
> this message. Disabling autoactivation sounds like it might affect how
> LVs work (do I need to do something if they're not autoactivated?, etc.)
> if you're not familiar with the inner workings. Maybe this could also
> mention that the storage stack will take care of activating/deactivating
> when needed? Just a suggestion since I noticed it during my testing.

Makes sense -- it's not really obvious to users that disabled
autoactivation is the desired state. I guess the `pve8to9` output
shouldn't be cluttered with too much detail, but maybe something like
this could work:

> PVE 9 will disable autoactivation for all newly created LVM guest
> volumes. Please run 'pve8to9 updatelvm' to disable autoactivation for
> existing LVM guest volumes

In addition we can provide more details in the upgrade guide.
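As background for the wording: the state the checker looks for can also be
inspected by hand. A minimal shell sketch of the same check, run against
made-up sample data (on a real node you would pipe in the output of
`lvs --separator : --noheadings -o lv_name,autoactivation <vgname>`; the
LV names below are placeholders):

```shell
# Sample stand-in for `lvs -o lv_name,autoactivation` output: one
# "name:flag" pair per LV, flag is "enabled" or empty.
lvs_output='  vm-100-disk-0:enabled
  vm-101-disk-0:
  base-102-disk-0:enabled'

# Print only the LVs that still have autoactivation enabled, mirroring
# what the Perl check collects.
printf '%s\n' "$lvs_output" \
    | awk -F: '$2 == "enabled" { gsub(/^ +/, "", $1); print $1 }'
# prints: vm-100-disk-0 and base-102-disk-0
```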
>> +        if $needs_fix;
>> +
>> +    return undef;
>> +}
>> +
>>  sub check_misc {
>>      print_header("MISCELLANEOUS CHECKS");
>>      my $ssh_config = eval { PVE::Tools::file_get_contents('/root/.ssh/config') };
>> @@ -1474,6 +1563,7 @@ sub check_misc {
>>      check_nvidia_vgpu_service();
>>      check_bootloader();
>>      check_dkms_modules();
>> +    check_lvm_autoactivation();
>>  }
>>
>>  my sub colored_if {
>> @@ -1538,8 +1628,68 @@ __PACKAGE__->register_method ({
>>      return undef;
>>  }});
>>
>> +__PACKAGE__->register_method ({
>> +    name => 'updatelvm',
>> +    path => 'updatelvm',
>> +    method => 'POST',
>> +    description => 'disable autoactivation for all LVM guest volumes',
>> +    parameters => {
>> +        additionalProperties => 0,
>> +    },
>> +    returns => { type => 'null' },
>> +    code => sub {
>> +        my ($param) = @_;
>> +
>> +        my $did_work = 0;
>> +        eval {
>> +            $foreach_lvm_vg_with_autoactivated_guest_volumes->(1, sub {
>> +                my ($storeid, $vgname, $autoactivated_lvs) = @_;
>> +
>> +                if (scalar(@$autoactivated_lvs) == 0) {
>> +                    log_pass("all guest volumes on storage '$storeid' have autoactivation disabled");
>> +                    return;
>> +                }
>> +
>> +                log_info("disabling autoactivation on storage '$storeid'...");
>> +                die "unexpected empty VG name (storage '$storeid')\n" if !$vgname;
>> +
>> +                for my $lvname (@$autoactivated_lvs) {
>> +                    log_info("disabling autoactivation for $vgname/$lvname"
>> +                        ." on storage '$storeid'...");
>> +                    my $cmd = [
>> +                        '/sbin/lvchange',
>> +                        '--setautoactivation', 'n',
>> +                        "$vgname/$lvname",
>> +                    ];
>> +
>> +                    eval { run_command($cmd); };
>> +                    my $err = $@;
>> +                    die "could not disable autoactivation on $vgname/$lvname: $err\n" if $err;
>> +
>> +                    $did_work = 1;
>> +                }
>> +            });
>> +        };
>> +        my $err = $@;
>> +        if ($err) {
>> +            log_fail("could not disable autoactivation on enabled and active LVM storages");
>> +            die $err;
>> +        }
>> +
>> +        if($did_work) {
>> +            log_pass("successfully disabled autoactivation on guest volumes on all"
>> +                ." enabled and active LVM storages");
>> +        } else {
>> +            log_pass("all guest volumes on enabled and active LVM storages"
>> +                ." have autoactivation disabled");
>> +        };
>> +
>> +        return undef;
>> +    }});
>> +
>>  our $cmddef = {
>>      checklist => [ __PACKAGE__, 'checklist', []],
>> +    updatelvm => [ __PACKAGE__, 'updatelvm', []],
>>  };
>>
>>  1;

_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
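P.S.: for the corner case discussed earlier in the thread (an LV created
after running `updatelvm` but before the upgrade), the per-volume fix
boils down to the single `lvchange` call the subcommand issues. A sketch
with made-up VG/LV names, kept as a dry run via `echo` (drop the `echo`
to actually apply it):

```shell
# What `updatelvm` runs per guest volume, as a plain shell command.
# VG/LV names below are placeholders, not from a real setup.
vg='pve-vg'            # hypothetical VG name
lv='vm-100-disk-0'     # hypothetical LV name
echo /sbin/lvchange --setautoactivation n "$vg/$lv"
# prints the command instead of running it
```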