Re: [pve-devel] [PATCH qemu-server] fix 3674: QEMU restore: verify storage allows images before writing

2022-02-23 Thread Fabian Ebner
Am 17.02.22 um 15:12 schrieb Matthias Heiserer:
> When restoring a backup and the storage the disks would be created on
> doesn't allow 'images', the process errors without cleanup.
> This is the same behaviour we currently have when the storage is
> disabled.
> 
> Signed-off-by: Matthias Heiserer 
> ---
>  PVE/QemuServer.pm | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index a99f1a5..2a1ec48 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -6299,6 +6299,10 @@ my $restore_allocate_devices = sub {
>   my $supported = grep { $_ eq $d->{format} } @$validFormats;
>   $d->{format} = $defFormat if !$supported;
>  
> + # check if images can be stored on the requested storage

Nit: The comment isn't needed IMHO, because the code is pretty clear on
its own.

> + die "Content type 'images' is not available on storage '$storeid'\n"
> + if !$scfg->{content}->{images};
> +
>   my $name;
>   if ($d->{is_cloudinit}) {
>   $name = "vm-$vmid-cloudinit";

Nothing wrong with the patch (except for the bug number as you already
pointed out ;)), it's just that the permission check for accessing the
storage is currently done in parse_backup_hints(), so it might be a bit
cleaner to add the new check there too. That way, both checks are in
one place and we can abort early, before starting to allocate any disks.

It seems that there's another small bug in parse_backup_hints(), because
there's no permission check for a cloudinit disk. Would be nice if you
could fix that too.
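
For both points, something along these lines in parse_backup_hints() is
what I have in mind (untested sketch, exact placement and variable names
would need to follow the existing code there):

    # storage must be able to hold VM images at all
    my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
    die "Content type 'images' is not available on storage '$storeid'\n"
        if !$scfg->{content}->{images};

    # and the cloudinit disk should get the same permission check as the
    # other disks (assuming $rpcenv and $user are in scope, like for the
    # existing check)
    $rpcenv->check($user, "/storage/$storeid", ['Datastore.AllocateSpace']);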





[pve-devel] [PATCH qemu-server] fix #3733: add 20 seconds timeout when VM backup stopped

2022-02-23 Thread Moayad Almalat
---
 PVE/QemuServer.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 9cefcc0..c43518e 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -5551,7 +5551,7 @@ sub vm_start_nolock {
 };
# Issues with the above 'stop' not being fully completed are extremely rare, a very low
# timeout should be more than enough here...
-PVE::Systemd::wait_for_unit_removed("$vmid.scope", 5);
+PVE::Systemd::wait_for_unit_removed("$vmid.scope", 20);
 
 my $cpuunits = get_cpuunits($conf);
 
-- 
2.30.2





Re: [pve-devel] [PATCH qemu-server] fix 3674: QEMU restore: verify storage allows images before writing

2022-02-23 Thread Markus Frank

With this patch, restoring a backup on a disabled storage results in an
error and just creates a VM with the configuration from the backup,
without any virtual disks, instead of forcing the creation of the disk(s).
Works as intended.

Tested-by: Markus Frank 

On 2/17/22 15:12, Matthias Heiserer wrote:

When restoring a backup and the storage the disks would be created on
doesn't allow 'images', the process errors without cleanup.
This is the same behaviour we currently have when the storage is
disabled.

Signed-off-by: Matthias Heiserer 
---
  PVE/QemuServer.pm | 4 ++++
  1 file changed, 4 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index a99f1a5..2a1ec48 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -6299,6 +6299,10 @@ my $restore_allocate_devices = sub {
my $supported = grep { $_ eq $d->{format} } @$validFormats;
$d->{format} = $defFormat if !$supported;
  
+	# check if images can be stored on the requested storage

+   die "Content type 'images' is not available on storage '$storeid'\n"
+   if !$scfg->{content}->{images};
+
my $name;
if ($d->{is_cloudinit}) {
$name = "vm-$vmid-cloudinit";







[pve-devel] [PATCH widget-toolkit] fix drag&drop for pointerType 'pen'

2022-02-23 Thread Dominik Csapak
Some devices (e.g. VMs via noVNC, and some laptops) get the pointerType
'pen' under Chromium.

The DragZone handler tries to ignore touch input by checking for
"=== 'mouse'", which does not include 'pen', so override it to handle the
event whenever the pointerType !== 'touch'.

Signed-off-by: Dominik Csapak 
---
 src/Toolkit.js | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/src/Toolkit.js b/src/Toolkit.js
index 3d0eefc..51634d3 100644
--- a/src/Toolkit.js
+++ b/src/Toolkit.js
@@ -685,6 +685,19 @@ Ext.define('Proxmox.Component', {
 clearPropertiesOnDestroy: false,
 });
 
+// Fix drag&drop for vms and desktops that detect 'pen' pointerType
+Ext.define('Proxmox.view.DragZone', {
+override: 'Ext.view.DragZone',
+
+onItemMouseDown: function(view, record, item, index, e) {
+// Ignore touchstart.
+// For touch events, we use longpress.
+if (e.pointerType !== 'touch') {
+this.onTriggerGesture(view, record, item, index, e);
+}
+},
+});
+
 // force alert boxes to be rendered with an Error Icon
 // since Ext.Msg is an object and not a prototype, we need to override it
 // after the framework has been initiated
-- 
2.30.2






[pve-devel] [PATCH qemu-server v2] fix 3886: QEMU restore: verify storage allows images before writing

2022-02-23 Thread Matthias Heiserer
When restoring a backup and the storage the disks would be created on
doesn't allow 'images', the process errors without cleanup.
This is the same behaviour we currently have when the storage is
disabled.

Signed-off-by: Matthias Heiserer 
---
 PVE/QemuServer.pm | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index a99f1a5..aaada7a 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -6243,6 +6243,9 @@ my $parse_backup_hints = sub {
} elsif (!$storeid) {
$storeid = 'local';
}
+   my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
+   die "Content type 'images' is not available on storage '$storeid'\n"
+   if !$scfg->{content}->{images};
$format = 'raw' if !$format;
$devinfo->{$devname}->{devname} = $devname;
$devinfo->{$devname}->{virtdev} = $virtdev;
@@ -6264,6 +6267,8 @@ my $parse_backup_hints = sub {
$storeid = $options->{storage} if defined ($options->{storage});
my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
	my $format = qemu_img_format($scfg, $volname); # has 'raw' fallback
+   die "Content type 'images' is not available on storage '$storeid'\n"
+   if !$scfg->{content}->{images};
 
$virtdev_hash->{$virtdev} = {
format => $format,
-- 
2.30.2






[pve-devel] applied: [PATCH widget-toolkit] fix drag&drop for pointerType 'pen'

2022-02-23 Thread Thomas Lamprecht
On 23.02.22 12:05, Dominik Csapak wrote:
> Some devices (e.g. VMs via noVNC, and some laptops) get the pointerType
> 'pen' under Chromium.
> 
> The DragZone handler tries to ignore touch input by checking for
> "=== 'mouse'", which does not include 'pen', so override it to handle the
> event whenever the pointerType !== 'touch'.
> 
> Signed-off-by: Dominik Csapak 
> ---
>  src/Toolkit.js | 13 +++++++++++++
>  1 file changed, 13 insertions(+)
> 
>

applied, thanks!





[pve-devel] [PATCH v4 qemu-server 1/1] fix #3424: api: snapshot delete: wait for active replication

2022-02-23 Thread Fabian Ebner
A to-be-deleted snapshot might be actively used by replication,
resulting in a snapshot that is not (or only partially) removed and a
VM locked with (snapshot-delete). Simply wait a few seconds for any ongoing
replication.

Signed-off-by: Fabian Ebner 
---

Changes from v3:
* Use guest_migration_lock() directly.
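
To make the resulting behavior explicit: guest_migration_lock() runs the
passed code while holding the same per-guest lock that replication takes,
and raises an error if the lock cannot be obtained within the timeout, so
a delete attempt during a long-running replication now fails cleanly
instead of leaving the VM locked. Simplified sketch of the call pattern
(not the literal diff below):

    # wait up to 10 seconds for any ongoing replication, then delete
    PVE::GuestHelpers::guest_migration_lock($vmid, 10, sub {
        PVE::QemuConfig->snapshot_delete($vmid, $snapname, $param->{force});
    });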

 PVE/API2/Qemu.pm | 10 +-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 9be1caf..d123ece 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -4552,11 +4552,19 @@ __PACKAGE__->register_method({
 
my $snapname = extract_param($param, 'snapname');
 
-   my $realcmd = sub {
+   my $do_delete = sub {
PVE::Cluster::log_msg('info', $authuser, "delete snapshot VM $vmid: $snapname");
PVE::QemuConfig->snapshot_delete($vmid, $snapname, $param->{force});
};
 
+   my $realcmd = sub {
+   if ($param->{force}) {
+   $do_delete->();
+   } else {
+   PVE::GuestHelpers::guest_migration_lock($vmid, 10, $do_delete);
+   }
+   };
+
return $rpcenv->fork_worker('qmdelsnapshot', $vmid, $authuser, $realcmd);
 }});
 
-- 
2.30.2






[pve-devel] [PATCH v4 container 2/2] fix #3424: api: snapshot delete: wait for active replication

2022-02-23 Thread Fabian Ebner
A to-be-deleted snapshot might be actively used by replication,
resulting in a snapshot that is not (or only partially) removed and a
container locked with (snapshot-delete). Simply wait a few seconds for any ongoing
replication.

Signed-off-by: Fabian Ebner 
---

Changes from v3:
* Use guest_migration_lock() directly.

 src/PVE/API2/LXC/Snapshot.pm | 11 ++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/src/PVE/API2/LXC/Snapshot.pm b/src/PVE/API2/LXC/Snapshot.pm
index 160e5eb..8d0319a 100644
--- a/src/PVE/API2/LXC/Snapshot.pm
+++ b/src/PVE/API2/LXC/Snapshot.pm
@@ -10,6 +10,7 @@ use PVE::INotify;
 use PVE::Cluster qw(cfs_read_file);
 use PVE::AccessControl;
 use PVE::Firewall;
+use PVE::GuestHelpers;
 use PVE::Storage;
 use PVE::RESTHandler;
 use PVE::RPCEnvironment;
@@ -198,11 +199,19 @@ __PACKAGE__->register_method({
 
my $snapname = extract_param($param, 'snapname');
 
-   my $realcmd = sub {
+   my $do_delete = sub {
PVE::Cluster::log_msg('info', $authuser, "delete snapshot VM $vmid: $snapname");
PVE::LXC::Config->snapshot_delete($vmid, $snapname, $param->{force});
};
 
+   my $realcmd = sub {
+   if ($param->{force}) {
+   $do_delete->();
+   } else {
+   PVE::GuestHelpers::guest_migration_lock($vmid, 10, $do_delete);
+   }
+   };
+
return $rpcenv->fork_worker('vzdelsnapshot', $vmid, $authuser, $realcmd);
 }});
 
-- 
2.30.2






[pve-devel] [PATCH-SERIES v4 container/qemu-server] fix #3424: wait for active replication when deleting a snapshot

2022-02-23 Thread Fabian Ebner
Avoid that an attempt to remove a snapshot that's actively used by
replication leads to a partially (or not at all) removed snapshot and a
locked guest.

I decided to make the checks at the call sites, because passing the
log function and timeout to snapshot_delete felt awkward as they
would only be used for obtaining the lock.
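
For comparison, the alternative would have meant something like the
following (purely hypothetical signature, just to illustrate the
awkwardness):

    # snapshot_delete() would need two extra parameters that it only uses
    # to obtain the lock before doing its actual work
    PVE::QemuConfig->snapshot_delete($vmid, $snapname, $force, $logfunc, $lock_timeout);

With the lock taken at the call sites, snapshot_delete() stays untouched
and each caller decides how long to wait and what to log.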

Changes from v3:
* Unconditionally take the lock, to not race with replication job
  creation and future-proofing.
* Only log in the case with the long timeout if we can't obtain
  the lock quickly.
* Make message more general, because it might be another snapshot
  removal operation holding the lock.


container:

Fabian Ebner (2):
  partially fix #3424: vzdump: cleanup: wait for active replication
  fix #3424: api: snapshot delete: wait for active replication

 src/PVE/API2/LXC/Snapshot.pm | 11 ++-
 src/PVE/VZDump/LXC.pm| 20 ++--
 2 files changed, 28 insertions(+), 3 deletions(-)


qemu-server:

Fabian Ebner (1):
  fix #3424: api: snapshot delete: wait for active replication

 PVE/API2/Qemu.pm | 10 +-
 1 file changed, 9 insertions(+), 1 deletion(-)

-- 
2.30.2






[pve-devel] [PATCH v4 container 1/2] partially fix #3424: vzdump: cleanup: wait for active replication

2022-02-23 Thread Fabian Ebner
As replication and backup can happen at the same time, the vzdump
snapshot might be actively used by replication when backup tries to
clean up, resulting in a snapshot that is not (or only partially) removed
and a container locked with (snapshot-delete).

Wait up to 10 minutes for any ongoing replication. If replication
doesn't finish in time, the fact that there is no attempt to remove
the snapshot means that there's no risk for the container to end up in
a locked state. And the beginning of the next backup will force remove
the left-over snapshot, which will very likely succeed even at the
storage layer, because the replication really should be done by then
(subsequent replications shouldn't matter as they don't need to
re-transfer the vzdump snapshot).

Suggested-by: Fabian Grünbichler 
Co-developed-by: Fabian Grünbichler 
Signed-off-by: Fabian Ebner 
---

Changes from v3:
* Unconditionally take the lock, to not race with replication job
  creation and better future-proofing.
* Only log if we can't obtain the lock quickly rather than if a
  replication job is configured.
* Make message more general, because it might be another snapshot
  removal operation holding the lock.

 src/PVE/VZDump/LXC.pm | 20 ++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/src/PVE/VZDump/LXC.pm b/src/PVE/VZDump/LXC.pm
index b7f7463..078b74d 100644
--- a/src/PVE/VZDump/LXC.pm
+++ b/src/PVE/VZDump/LXC.pm
@@ -8,6 +8,7 @@ use File::Path;
 use POSIX qw(strftime);
 
 use PVE::Cluster qw(cfs_read_file);
+use PVE::GuestHelpers;
 use PVE::INotify;
 use PVE::LXC::Config;
 use PVE::LXC;
@@ -476,8 +477,23 @@ sub cleanup {
 }
 
 if ($task->{cleanup}->{remove_snapshot}) {
-   $self->loginfo("cleanup temporary 'vzdump' snapshot");
-   PVE::LXC::Config->snapshot_delete($vmid, 'vzdump', 0);
+   my $lock_obtained;
+   my $do_delete = sub {
+   $lock_obtained = 1;
+   $self->loginfo("cleanup temporary 'vzdump' snapshot");
+   PVE::LXC::Config->snapshot_delete($vmid, 'vzdump', 0);
+   };
+
+   eval {
+   eval { PVE::GuestHelpers::guest_migration_lock($vmid, 1, $do_delete); };
+   if (my $err = $@) {
+   die $err if $lock_obtained;
+
+   $self->loginfo("waiting for active replication or other operation..");
+   PVE::GuestHelpers::guest_migration_lock($vmid, 600, $do_delete);
+   }
+   };
+   die "snapshot 'vzdump' was not (fully) removed - $@" if $@;
 }
 }
 
-- 
2.30.2





Re: [pve-devel] [PATCH qemu-server v2] fix 3886: QEMU restore: verify storage allows images before writing

2022-02-23 Thread Fabian Ebner
Am 23.02.22 um 12:15 schrieb Matthias Heiserer:
> When restoring a backup and the storage the disks would be created on
> doesn't allow 'images', the process errors without cleanup.
> This is the same behaviour we currently have when the storage is
> disabled.
> 
> Signed-off-by: Matthias Heiserer 
> ---
>  PVE/QemuServer.pm | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index a99f1a5..aaada7a 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -6243,6 +6243,9 @@ my $parse_backup_hints = sub {
>   } elsif (!$storeid) {
>   $storeid = 'local';
>   }
> + my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
> + die "Content type 'images' is not available on storage '$storeid'\n"
> + if !$scfg->{content}->{images};

Style nit: could be placed closer to the permission check for
Datastore.AllocateSpace that follows below.

>   $format = 'raw' if !$format;
>   $devinfo->{$devname}->{devname} = $devname;
>   $devinfo->{$devname}->{virtdev} = $virtdev;
> @@ -6264,6 +6267,8 @@ my $parse_backup_hints = sub {
>   $storeid = $options->{storage} if defined ($options->{storage});
>   my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
>   my $format = qemu_img_format($scfg, $volname); # has 'raw' fallback
> + die "Content type 'images' is not available on storage '$storeid'\n"
> + if !$scfg->{content}->{images};

Here, the permission check for Datastore.AllocateSpace is missing, which
is the other existing bug. Maybe it's even worth having a small closure
doing both checks to re-use in both cases, but I'm not really sure.
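
Rough idea, untested (assuming $rpcenv and $user are available like for
the existing permission check):

    my $check_storage = sub {
        my ($storeid) = @_;
        my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
        die "Content type 'images' is not available on storage '$storeid'\n"
            if !$scfg->{content}->{images};
        $rpcenv->check($user, "/storage/$storeid", ['Datastore.AllocateSpace']);
        return $scfg;
    };

Then both branches could simply do my $scfg = $check_storage->($storeid);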

>  
>   $virtdev_hash->{$virtdev} = {
>   format => $format,

