Re: [pve-devel] [PATCH manager] get_included_guests: handle non-existing guests

2020-10-19 Thread Fabian Ebner
On 10/19/20 2:38 PM, Thomas Lamprecht wrote: (FYI: forgot to hit reply-all, so resending this for the list) On 19.10.20 12:53, Fabian Ebner wrote: If a guest is removed without purge, the ID will remain in the backup configuration. Avoid using the variable $node when it is potentially undefined

Re: [pve-devel] SPAM: [PATCH v2 container 1/1] Fix numbering scheme detection for CentOS Stream releases.

2020-10-19 Thread Achim Dreyer (proxmox)
Hi, On a fully patched CentOS 8 stream box I get only the major release number. The whole purpose of the stream system is that there are no minor versions and all packages are directly going into the major version for continuous delivery. root@ansible:~# rpm -ql centos-stream-release /etc/cent
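
A minimal Perl sketch of the version-string parsing this implies (not the actual src/PVE/LXC/Setup/CentOS.pm change): the minor version has to be optional so that a Stream release string like "CentOS Stream release 8" is still detected.

#!/usr/bin/perl
# Sketch only - the minor version must be optional, since CentOS Stream
# release strings carry no minor version at all.
use strict;
use warnings;

for my $line (
    'CentOS Linux release 8.2.2004 (Core)',
    'CentOS Stream release 8',
) {
    if ($line =~ m/release\s+(\d+)(?:\.(\d+))?/) {
        my ($major, $minor) = ($1, $2 // 0);
        print "major=$major minor=$minor\n";
    } else {
        print "unparsable: $line\n";
    }
}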

[pve-devel] [PATCH manager] partially fix #3056: namespace vzdump tmpdir with vmid

2020-10-19 Thread Dominik Csapak
this fixes an issue where a rogue running backup would upload the vm config of a later backup in a backup job. instead, now that directory gets deleted and the config is not available anymore. we cannot really keep those directories around until the end of the backup job, since we temporarily save c
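
To illustrate the idea, a hypothetical sketch of a per-guest tmpdir name; the real layout and cleanup in PVE/VZDump.pm may well differ:

#!/usr/bin/perl
# Hypothetical naming sketch, not the actual PVE/VZDump.pm code.
use strict;
use warnings;
use File::Path qw(make_path remove_tree);

sub vzdump_tmpdir {
    my ($base, $vmid) = @_;
    # include PID and VMID, so a leftover directory from another (rogue)
    # backup run can never be mistaken for the current guest's one
    return "$base/vzdumptmp" . $$ . "_$vmid";
}

my $dir = vzdump_tmpdir('/tmp', 100);
make_path($dir);
print "using $dir\n";
remove_tree($dir);    # removed again once this guest's backup is done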

[pve-devel] [PATCH qemu-server] partially fix #3056: try to cancel backup without uuid

2020-10-19 Thread Dominik Csapak
if the 'backup' qmp call itself times out or fails, we still want to try to cancel the backup, else it can happen that there is still a backup running even when vzdump thinks it was canceled. qapi docs say that backup cancel always returns success, even if no backup is running. since we hold a glo
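
A control-flow sketch of what is described here, with qmp_cmd() as a stand-in for the real QMP helper in qemu-server (not its actual API):

#!/usr/bin/perl
# Control-flow sketch; qmp_cmd() is a placeholder, not the real API.
use strict;
use warnings;

sub qmp_cmd {    # pretend the 'backup' call timed out
    my ($vmid, $cmd, %args) = @_;
    die "VM $vmid qmp command '$cmd' failed - got timeout\n" if $cmd eq 'backup';
    return {};
}

my $vmid = 100;
eval { qmp_cmd($vmid, 'backup', timeout => 60) };
if (my $err = $@) {
    warn "backup command failed: $err";
    # still try to cancel - per the qapi docs this returns success even
    # if no backup is actually running
    eval { qmp_cmd($vmid, 'backup-cancel') };
    warn "cancel failed: $@" if $@;
}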

Re: [pve-devel] [PATCH manager] get_included_guests: handle non-existing guests

2020-10-19 Thread Thomas Lamprecht
(FYI: forgot to hit reply-all, so resending this for the list) On 19.10.20 12:53, Fabian Ebner wrote: > If a guest is removed without purge, the ID will remain in the backup configuration. Avoid using the variable $node when it is potentially undefined. Instead, skip non-existing guests and

[pve-devel] [PATCH manager 3/3] fix #2745: backup GUI: allow users to specify remove=1

2020-10-19 Thread Fabian Ebner
A user with Datastore.AllocateSpace, VM.Audit, VM.Backup privileges can already remove backups from the GUI manually, so it shouldn't be a problem if they can set the remove flag when starting a manual vzdump job in the GUI. Signed-off-by: Fabian Ebner --- www/manager6/window/Backup.js | 17

[pve-devel] [PATCH manager 2/3] partially fix #2745: use default for vzdump remove parameter

2020-10-19 Thread Fabian Ebner
The initial default from the $confdesc is 1 anyways, and like this, changing the default in /etc/vzdump.conf to 0 actually works. Signed-off-by: Fabian Ebner --- PVE/VZDump.pm | 2 -- 1 file changed, 2 deletions(-) diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm index 542228d6..3ccb8269 100644 --- a
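
A tiny sketch of the intended precedence (command line over /etc/vzdump.conf over the schema default), which dropping the hard-coded default enables; variable names are illustrative only:

#!/usr/bin/perl
# Precedence sketch; variable names are illustrative, not VZDump.pm's.
use strict;
use warnings;

my $schema_default = 1;                  # default from $confdesc
my $vzdump_conf    = { remove => 0 };    # as parsed from /etc/vzdump.conf
my $cli_opts       = {};                 # nothing given on the command line

# only fall back to the schema default if neither CLI nor vzdump.conf set it
my $remove = $cli_opts->{remove} // $vzdump_conf->{remove} // $schema_default;
print "remove=$remove\n";                # prints 0, honoring vzdump.conf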

[pve-devel] [PATCH guest-common 1/3] mention prune behavior for the vzdump remove parameter

2020-10-19 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- PVE/VZDump/Common.pm | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/PVE/VZDump/Common.pm b/PVE/VZDump/Common.pm index 63a4689..6ae35e6 100644 --- a/PVE/VZDump/Common.pm +++ b/PVE/VZDump/Common.pm @@ -221,7 +221,8 @@ my $confdesc = { }),

Re: [pve-devel] SPAM: [PATCH v2 container 1/1] Fix numbering scheme detection for CentOS Stream releases.

2020-10-19 Thread Thomas Lamprecht
On 17.10.20 15:45, Achim Dreyer wrote: > Signed-off-by: Achim Dreyer > --- > src/PVE/LXC/Setup/CentOS.pm | 2 +- > 1 file changed, 1 insertion(+), 1 deletion(-) > > diff --git a/src/PVE/LXC/Setup/CentOS.pm b/src/PVE/LXC/Setup/CentOS.pm > index 0825273..77eb6f7 100644 > --- a/src/PVE/LXC/Setup/Ce

[pve-devel] [PATCH v2 qemu-server 5/7] config_to_command: use -no-shutdown option

2020-10-19 Thread Stefan Reiter
Ignore shutdowns triggered from within the guest in favor of detecting them via qmeventd and stopping the QEMU process that way. Signed-off-by: Stefan Reiter --- v2: * as part of rebase: include newer tests (bootorder) in update PVE/QemuServer.pm | 2 ++ tes
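
In essence the QEMU command line just gains one more switch; a simplified sketch, leaving out everything else config_to_command assembles:

#!/usr/bin/perl
# Simplified sketch of the command-line addition.
use strict;
use warnings;

my $cmd = ['/usr/bin/kvm', '-id', '100'];

# keep the QEMU process around after a guest-triggered shutdown; qmeventd
# sees the SHUTDOWN event and decides when to actually terminate it
push @$cmd, '-no-shutdown';

print join(' ', @$cmd), "\n";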

[pve-devel] [PATCH v2 qemu-server 2/7] qmeventd: add last-ditch effort SIGKILL cleanup

2020-10-19 Thread Stefan Reiter
'alarm' is used to schedule an additional cleanup round 5 seconds after sending SIGTERM via terminate_client. This then sends SIGKILL via a pidfd (if supported by the kernel) or directly via kill, making sure that the QEMU process is *really* dead and won't be left behind in an undetermined state.

[pve-devel] [PATCH v2 qemu-server 1/7] qmeventd: add handling for -no-shutdown QEMU instances

2020-10-19 Thread Stefan Reiter
We take care of killing QEMU processes when a guest shuts down manually. QEMU will not exit itself, if started with -no-shutdown, but it will still emit a "SHUTDOWN" event, which we await and then send SIGTERM. This additionally allows us to handle backups in such situations. A vzdump instance wil

[pve-devel] [PATCH v2 manager 7/7] ui: qemu: set correct disabled state for start button

2020-10-19 Thread Stefan Reiter
If a guest's QEMU process is 'running', but QMP says 'shutdown' or 'prelaunch', the VM is ready to be booted anew, so we can show the button. The 'shutdown' button is intentionally not touched, as we always want to give the user the ability to 'stop' a VM (and thus kill any potentially leftover pr

[pve-devel] [PATCH v2 0/7] Handle guest shutdown during backups

2020-10-19 Thread Stefan Reiter
Use QEMU's -no-shutdown argument so the QEMU instance stays alive even if the guest shuts down. This allows running backups to continue. To handle cleanup of QEMU processes, this series extends the qmeventd to handle SHUTDOWN events not just for detecting guest triggered shutdowns, but also to cle

[pve-devel] [PATCH v2 qemu-server 6/7] fix vm_resume and allow vm_start with QMP status 'shutdown'

2020-10-19 Thread Stefan Reiter
When the VM is in status 'shutdown', i.e. after the guest issues a powerdown while a backup is running, QEMU requires a 'system_reset' to be issued before 'cont' can boot the guest again. Additionally, when the VM has been powered down during a backup, the logically correct call would be a 'vm_sta
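
A control-flow sketch of the described sequence, again with qmp_cmd()/qmp_status() as placeholders rather than the real qemu-server helpers:

#!/usr/bin/perl
# Control-flow sketch; qmp_cmd()/qmp_status() are placeholders.
use strict;
use warnings;

sub qmp_cmd    { my ($vmid, $cmd) = @_; print "VM $vmid: qmp '$cmd'\n"; return {} }
sub qmp_status { return 'shutdown' }    # guest powered down during a backup

my $vmid = 100;

if (qmp_status($vmid) eq 'shutdown') {
    # after a guest powerdown under -no-shutdown, QEMU needs a reset
    # before 'cont' will boot the guest again
    qmp_cmd($vmid, 'system_reset');
}
qmp_cmd($vmid, 'cont');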

[pve-devel] [PATCH v2 qemu-server 4/7] vzdump: use dirty bitmap for not running VMs too

2020-10-19 Thread Stefan Reiter
Now that VMs can be started during a backup, it makes sense to create a dirty bitmap in these cases too, since the VM might be resumed and thus continue running normally even after the backup is done. Signed-off-by: Stefan Reiter --- PVE/VZDump/QemuServer.pm | 5 ++--- 1 file changed, 2 insertio

[pve-devel] [PATCH v2 qemu-server 3/7] vzdump: connect to qmeventd for duration of backup

2020-10-19 Thread Stefan Reiter
Connect and send the vmid of the VM being backed up. This prevents qmeventd from SIGTERMing the underlying QEMU instance, even if the guest shuts itself down, until we close the socket connection (in cleanup, which happens on success and abort; if we crash, the file handle will be closed as well)
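
A rough sketch of such a connection from the vzdump side; the socket path and the wire format are assumptions here, not the actual qmeventd protocol:

#!/usr/bin/perl
# Sketch only: socket path and message format are assumptions.
use strict;
use warnings;
use Socket qw(SOCK_STREAM);
use IO::Socket::UNIX;

my $vmid = 100;
my $sock = IO::Socket::UNIX->new(
    Type => SOCK_STREAM,
    Peer => '/var/run/qmeventd.sock',    # assumed path
) or die "cannot connect to qmeventd: $!\n";

print $sock "$vmid\n";    # tell qmeventd this VM is being backed up

# ... run the backup while keeping $sock open ...

close($sock);    # closing (also implicitly on a crash) lifts the protection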

Re: [pve-devel] [PATCH qemu-server] copy conntrack information on migration

2020-10-19 Thread Alexandre Derumier
>> There are no filters implemented yet, and there does not seem to be a way to filter by interface. So if we want to limit the conntracks to certain VMs, we could use zones and add a filter for them. We would have to map them somehow though as the zone parameter is only 16 bits and VMIDs mi

[pve-devel] [PATCH manager] get_included_guests: handle non-existing guests

2020-10-19 Thread Fabian Ebner
If a guest is removed without purge, the ID will remain in the backup configuration. Avoid using the variable $node when it is potentially undefined. Instead, skip non-existing guests and warn the user. Reported here: https://forum.proxmox.com/threads/purge-backup-does-not-remove-vm-from-datacente
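
A minimal sketch of the skip-and-warn pattern described above; the cluster vmlist is reduced to a plain hash for illustration:

#!/usr/bin/perl
# Sketch of the skip-and-warn pattern; the cluster vmlist is a plain hash here.
use strict;
use warnings;

my $vmlist = { 100 => { node => 'pve1' } };    # 200 was removed without --purge
my @backup_ids = (100, 200);

my $included = {};
for my $vmid (@backup_ids) {
    my $guest = $vmlist->{$vmid};
    if (!defined($guest)) {
        warn "skipping non-existing guest $vmid from backup configuration\n";
        next;
    }
    push @{$included->{$guest->{node}}}, $vmid;    # $node is always defined here
}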

[pve-devel] [PATCH manager v2 2/5] ceph: allow to alter pool settings

2020-10-19 Thread Alwin Antreich
after creation, so that users don't need to go the ceph tooling route. Separate common pool options to reuse them in other places. Signed-off-by: Alwin Antreich --- PVE/API2/Ceph.pm | 98 ++ PVE/CLI/pveceph.pm | 1 + 2 files changed, 99 insertions(+

[pve-devel] [PATCH manager v2 1/5] ceph: split out pool set into own method

2020-10-19 Thread Alwin Antreich
to reduce code duplication and make it easier to add more options for pool commands. Use a new rados object for each 'osd pool set', as each command can set an option independent of the previous command's success/failure. On failure a new rados object would need to be created and that will confuse
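
A structural sketch of the per-command loop; new_rados() and pool_set() are placeholders, not the PVE::RADOS API:

#!/usr/bin/perl
# Structure sketch; new_rados() and pool_set() are placeholders.
use strict;
use warnings;

sub new_rados { return { conn => rand() } }    # stand-in for a fresh connection
sub pool_set  { my ($conn, $pool, $opt, $val) = @_; return $opt ne 'broken_opt' }

my $pool = 'rbd';
my $settings = { size => 3, min_size => 2, broken_opt => 1 };

my @errors;
for my $opt (sort keys %$settings) {
    # a fresh connection per 'osd pool set', so one failed command cannot
    # confuse the commands that follow it
    my $rados = new_rados();
    push @errors, "could not set $opt" if !pool_set($rados, $pool, $opt, $settings->{$opt});
}
die join("\n", @errors) . "\n" if @errors;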

[pve-devel] [PATCH manager v2 4/5] ceph: add pg_autoscale_mode to pool create

2020-10-19 Thread Alwin Antreich
Signed-off-by: Alwin Antreich --- PVE/API2/Ceph.pm | 8 1 file changed, 8 insertions(+) diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm index 0aeb5075..69fe3d6d 100644 --- a/PVE/API2/Ceph.pm +++ b/PVE/API2/Ceph.pm @@ -718,6 +718,13 @@ my $ceph_pool_common_options = sub { en
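
A sketch of what such a schema entry could look like (the exact description and default in PVE/API2/Ceph.pm may differ); 'on', 'off' and 'warn' are Ceph's autoscaler modes:

#!/usr/bin/perl
# Schema sketch; wording and default in the real $ceph_pool_common_options
# may differ.
use strict;
use warnings;

my $pg_autoscale_mode = {
    description => "The automatic PG scaling mode of the pool.",
    type        => 'string',
    enum        => ['on', 'off', 'warn'],    # Ceph's autoscaler modes
    optional    => 1,
};

print "allowed modes: @{$pg_autoscale_mode->{enum}}\n";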

[pve-devel] [PATCH manager v2 3/5] ceph: use pool common options pool create

2020-10-19 Thread Alwin Antreich
to keep the pool create & set in sync. Signed-off-by: Alwin Antreich --- PVE/API2/Ceph.pm | 40 +--- 1 file changed, 1 insertion(+), 39 deletions(-) diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm index 7cdbdccd..0aeb5075 100644 --- a/PVE/API2/Ceph.pm +++ b/

[pve-devel] [PATCH manager v2 5/5] ceph: gui: add autoscale mode to pool create

2020-10-19 Thread Alwin Antreich
Signed-off-by: Alwin Antreich --- www/manager6/ceph/Pool.js | 13 + 1 file changed, 13 insertions(+) diff --git a/www/manager6/ceph/Pool.js b/www/manager6/ceph/Pool.js index 19eb01e9..11bcf9d5 100644 --- a/www/manager6/ceph/Pool.js +++ b/www/manager6/ceph/Pool.js @@ -39,6 +39,19 @@ E

Re: [pve-devel] [PATCH qemu-server] copy conntrack information on migration

2020-10-19 Thread Mira Limbeck
I haven't done any performance tests yet. But currently we query all conntracks (same as conntrack -L), print them one by one as JSON to STDOUT. When importing we do it line-by-line, which means one conntrack at a time. But if necessary we could batch them, as mentioned in the bugtracker, by usi
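
A sketch of a line-by-line import loop as described; the field names and the way an entry is actually applied are assumptions:

#!/usr/bin/perl
# Sketch of a per-line import; field names are assumptions.
use strict;
use warnings;
use JSON::PP;

my $json = JSON::PP->new;

# one JSON object per line, as dumped on the source node (conntrack -L style)
while (my $line = <STDIN>) {
    chomp $line;
    next if $line eq '';
    my $entry = eval { $json->decode($line) };
    if (!$entry) {
        warn "skipping unparsable conntrack line: $@";
        next;
    }
    # apply one conntrack entry at a time; batching could be added here
    print "importing entry $entry->{src} -> $entry->{dst}\n";
}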