When a template with disks on LVM is cloned to another node, the volumes
are first activated, then cloned and deactivated again after cloning.

However, when several clones of this template are created in parallel to
other nodes, one of the tasks may fail to deactivate the logical volume
because it is still in use by another task. This can happen because we
only hold a shared lock.
Since a failed deactivation has no serious consequences here, downgrade
the error to a warning, so that the clone tasks still complete
successfully.
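For illustration, the error-downgrade pattern used in the patch below can be
sketched as a standalone Perl snippet. `deactivate_or_warn` is a hypothetical
helper introduced only for this sketch; the actual patch calls
PVE::Storage::deactivate_volumes() and logs via log_warn() from
PVE::RESTEnvironment.

```perl
use strict;
use warnings;

# Hypothetical helper mirroring the patch: run a cleanup step inside eval
# so that a failure is logged as a warning instead of aborting the task.
sub deactivate_or_warn {
    my ($deactivate) = @_;    # code ref standing in for deactivate_volumes()
    eval { $deactivate->(); };
    warn $@ if $@;            # the patch uses log_warn() here
    return 1;                 # the calling task continues either way
}

# A deactivation that fails only produces a warning; the task goes on:
deactivate_or_warn(sub { die "logical volume still in use\n" });
```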

Signed-off-by: Hannes Duerr <h.du...@proxmox.com>
---
changes since v1:
- fix nits and spelling

 PVE/API2/Qemu.pm | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 69c5896..1ff5abe 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -48,6 +48,7 @@ use PVE::DataCenterConfig;
 use PVE::SSHInfo;
 use PVE::Replication;
 use PVE::StorageTunnel;
+use PVE::RESTEnvironment qw(log_warn);
 
 BEGIN {
     if (!$ENV{PVE_GENERATING_DOCS}) {
@@ -3820,7 +3821,11 @@ __PACKAGE__->register_method({
 
                if ($target) {
                    # always deactivate volumes - avoid lvm LVs to be active on several nodes
-                   PVE::Storage::deactivate_volumes($storecfg, $vollist, $snapname) if !$running;
+                   eval {
+                       PVE::Storage::deactivate_volumes($storecfg, $vollist, $snapname) if !$running;
+                   };
+                   log_warn($@) if ($@);
+
                    PVE::Storage::deactivate_volumes($storecfg, $newvollist);
 
                    my $newconffile = PVE::QemuConfig->config_file($newid, $target);
-- 
2.39.2

_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel