Using the information from the replication state alone should be more correct.
For example, the configuration might contain a new, not yet replicated volume
when the full removal happens, which would cause unneeded scanning on the
target node.

Signed-off-by: Fabian Ebner <f.eb...@proxmox.com>
---

Could be squashed with the previous patch.

There is one edge case where the information from the config might
still be useful: if the replication state is missing or corrupt and
full removal happens immediately, without a normal replication run
in between. But IMHO it's not worth keeping the extra code just for
that...
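
For illustration, a minimal, self-contained sketch of the scenario the
commit message describes. The volume IDs and storage names are made up,
and a plain split on ':' stands in for PVE::Storage::parse_volume_id:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical job data: the second volume was added to the config
    # after the last replication run, so the replication state only
    # knows about 'local-zfs'.
    my $sorted_volids = ['local-zfs:vm-100-disk-0', 'new-store:vm-100-disk-1'];
    my $state = { storeid_list => ['local-zfs'] };

    # stand-in for PVE::Storage::parse_volume_id: the storage id is the
    # part before the first ':'
    my $storeid_of = sub { (split(/:/, $_[0], 2))[0] };

    # old approach: union of config-derived and state storages
    my %hash = map { $_ => 1 }
        (map { $storeid_of->($_) } @$sorted_volids), @{ $state->{storeid_list} };
    print "old: @{[ sort keys %hash ]}\n";       # old: local-zfs new-store

    # new approach: state only, so 'new-store' is never scanned
    print "new: @{ $state->{storeid_list} }\n";  # new: local-zfs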

 PVE/Replication.pm | 9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/PVE/Replication.pm b/PVE/Replication.pm
index 15845bb..60cfc67 100644
--- a/PVE/Replication.pm
+++ b/PVE/Replication.pm
@@ -243,15 +243,8 @@ sub replicate {
        $logfunc->("start job removal - mode '${remove_job}'");
 
        if ($remove_job eq 'full' && $jobcfg->{target} ne $local_node) {
-           # remove all remote volumes
-           my @store_list = map { (PVE::Storage::parse_volume_id($_))[0] } @$sorted_volids;
-           push @store_list, @{$state->{storeid_list}};
-
-           my %hash = map { $_ => 1 } @store_list;
-
            my $ssh_info = PVE::SSHInfo::get_ssh_info($jobcfg->{target});
-           remote_prepare_local_job($ssh_info, $jobid, $vmid, [], [ keys %hash ], 1, undef, 1, $logfunc);
-
+           remote_prepare_local_job($ssh_info, $jobid, $vmid, [], $state->{storeid_list}, 1, undef, 1, $logfunc);
        }
        # remove all local replication snapshots (lastsync => 0)
        prepare($storecfg, $sorted_volids, $jobid, 1, undef, $logfunc);
-- 
2.20.1


