While it didn't actually fail, we probably want to avoid the following behavior:

With remove_job=full:
    * run_replication called during migration causes the replicated volumes to
      be removed
    * migration continues by fully copying all volumes

With remove_job=local:
    * run_replication called during migration causes the job (and local
      replication snapshots) to be removed
    * migration continues by fully copying all volumes and renaming them to
      avoid collision with the still existing remote volumes

Signed-off-by: Fabian Ebner <f.eb...@proxmox.com>
---

New in v2

Alternatively, we could throw out the remove_job property before calling
run_replication during migration, use the replicated volumes, and let
the scheduled pvesr call remove the job after the migration.

 PVE/QemuMigrate.pm | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 3fb2850..c6623e1 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -224,6 +224,10 @@ sub prepare {
     $self->{replication_jobcfg} = $repl_conf->find_local_replication_job($vmid, $self->{node});
     $self->{is_replicated} = $repl_conf->check_for_existing_jobs($vmid, 1);
 
+    if ($self->{replication_jobcfg} && defined($self->{replication_jobcfg}->{remove_job})) {
+       die "refusing to migrate replicated VM whose replication job is marked for removal\n";
+    }
+
     PVE::QemuConfig->check_lock($conf);
 
     my $running = 0;
-- 
2.20.1


