On Tue, May 08, 2018 at 08:29:56AM +0200, Wolfgang Link wrote:
> If a VM configuration has been moved manually or recovered by HA,
> there is no replication job state on the new node.
> In this case, the replication snapshots still exist on the remote side.
> It must be possible to remove a job without state;
> otherwise a new replication job to the same remote node will fail
> and the disks would have to be removed manually.
> By iterating over the sorted_volumes generated from the VMID.conf,
> we can be sure that every disk is removed on the remote side
> in the event of a full job removal.
> 
> In the end, remote_prepare_local_job calls prepare on the remote side.
> ---
>  PVE/Replication.pm | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/PVE/Replication.pm b/PVE/Replication.pm
> index ce267fe..89a9572 100644
> --- a/PVE/Replication.pm
> +++ b/PVE/Replication.pm
> @@ -214,8 +214,10 @@ sub replicate {
>  
>       if ($remove_job eq 'full' && $jobcfg->{target} ne $local_node) {
>           # remove all remote volumes
> +         my $store_list = [ map { (PVE::Storage::parse_volume_id($_))[0] } @$sorted_volids ];

Shouldn't we deduplicate entries here?
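For illustration, the usual `%seen` hash idiom would do it inline (a sketch against the hunk above, reusing the patch's `@$sorted_volids` and `PVE::Storage::parse_volume_id`):

```perl
# A VM with several disks on the same storage would otherwise yield
# the same storage ID multiple times; keep only the first occurrence.
my %seen;
my $store_list = [
    grep { !$seen{$_}++ }
    map { (PVE::Storage::parse_volume_id($_))[0] }
    @$sorted_volids
];
```

`List::Util::uniq` would work too, but the hash idiom avoids depending on a recent List::Util version.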

> +
>           my $ssh_info = PVE::Cluster::get_ssh_info($jobcfg->{target});
> -         remote_prepare_local_job($ssh_info, $jobid, $vmid, [], $state->{storeid_list}, 0, undef, 1, $logfunc);
> +         remote_prepare_local_job($ssh_info, $jobid, $vmid, [], $store_list, 0, undef, 1, $logfunc);
>  
>       }
>       # remove all local replication snapshots (lastsync => 0)
> -- 
> 2.11.0

_______________________________________________
pve-devel mailing list
[email protected]
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
