Re: [pve-devel] [PATCH proxmox-backup-qemu 2/2] invalidate bitmap when crypto key changes

2020-10-22 Thread Fabian Grünbichler
On October 21, 2020 5:17 pm, Stefan Reiter wrote: > On 10/21/20 1:49 PM, Fabian Grünbichler wrote: >> by computing and remembering the ID digest of a static string, we can >> detect when the passed-in key has changed without keeping a copy of it >> around in between backup jobs. >> >> this is a fol
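The patch itself is Rust in proxmox-backup-qemu; below is a minimal standalone C sketch of the detection idea, with FNV-1a as a stand-in for a real cryptographic digest and all names hypothetical. A changed fingerprint between jobs signals that the dirty bitmap can no longer be trusted:

#include <stdint.h>
#include <stddef.h>

/* Stand-in digest (FNV-1a over a static probe string mixed with the
 * key); the real code would use a proper cryptographic digest. */
static uint64_t key_fingerprint(const uint8_t *key, size_t key_len) {
    static const char probe[] = "static probe string";
    uint64_t h = 0xcbf29ce484222325ULL;
    for (size_t i = 0; i < key_len; i++)
        h = (h ^ key[i]) * 0x100000001b3ULL;
    for (size_t i = 0; probe[i]; i++)
        h = (h ^ (uint8_t)probe[i]) * 0x100000001b3ULL;
    return h;
}

/* Only the fingerprint survives between jobs, never the key itself. */
static uint64_t last_fp;
static int have_last_fp;

static int key_changed(const uint8_t *key, size_t key_len) {
    uint64_t fp = key_fingerprint(key, key_len);
    int changed = have_last_fp && fp != last_fp;
    last_fp = fp;
    have_last_fp = 1;
    return changed; /* caller invalidates the bitmap when this is true */
}

int main(void) {
    key_changed((const uint8_t *)"keyA", 4); /* first job: nothing to compare */
    return key_changed((const uint8_t *)"keyB", 4) ? 0 : 1; /* key changed */
}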

[pve-devel] [PATCH v2 manager 7/8] simplify get_included_vmids function

2020-10-22 Thread Fabian Ebner
by collecting all the guest IDs first. Signed-off-by: Fabian Ebner --- PVE/API2/BackupInfo.pm | 18 +++--- 1 file changed, 3 insertions(+), 15 deletions(-) diff --git a/PVE/API2/BackupInfo.pm b/PVE/API2/BackupInfo.pm index 909a5de1..4c461e59 100644 --- a/PVE/API2/BackupInfo.pm +++ b

[pve-devel] [PATCH v2 manager 4/8] backup: include IDs for non-existent guests

2020-10-22 Thread Fabian Ebner
Like this, there will be a backup task (within the big worker task) for such IDs, which will then visibly (i.e. also visible in the notification mail) fail with, e.g.: unable to find VM '123' In get_included_guests, the key '' was chosen for the orphaned IDs, because it cannot possibly denote a no

[pve-devel] [PATCH v2 manager 1/8] remove unused variable

2020-10-22 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- PVE/VZDump.pm | 1 - 1 file changed, 1 deletion(-) diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm index 542228d6..ee4e68b5 100644 --- a/PVE/VZDump.pm +++ b/PVE/VZDump.pm @@ -1192,7 +1192,6 @@ sub stop_running_backups { sub get_included_guests { my ($job) = @_;

[pve-devel] [PATCH-SERIES v2 manager] Make backup with IDs for non-existent guests visibly fail

2020-10-22 Thread Fabian Ebner
#1 and #2 are just cleanups. #3 and #4 make the necessary changes for the improved behavior, by ensuring that exec_backup_task will cleanly fail when there is no plugin specified, and then including the orphaned IDs without assigning them a plugin. This is closer to the behavior of PVE 6.0 and ensur

[pve-devel] [PATCH v2 manager 2/8] remove outdated comment

2020-10-22 Thread Fabian Ebner
Commit be30864709752195926f0a06c8f0d4d11c3c3302 moved the all/exclude logic into the single method Signed-off-by: Fabian Ebner --- test/vzdump_guest_included_test.pl | 4 1 file changed, 4 deletions(-) diff --git a/test/vzdump_guest_included_test.pl b/test/vzdump_guest_included_test.pl in

[pve-devel] [PATCH v2 manager 6/8] sort the skip list numerically

2020-10-22 Thread Fabian Ebner
The skip list was not always sorted if there were external IDs for multiple external nodes. Signed-off-by: Fabian Ebner --- PVE/VZDump.pm | 7 +-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm index e1c26b42..2f31c534 100644 --- a/PVE/VZDump.pm +
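The fix is Perl (presumably a numeric sort along the lines of sort { $a <=> $b }); for illustration, a standalone C analogue of the lexicographic-vs-numeric pitfall:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* strcmp() orders "110" before "20"; a numeric comparator restores
 * the order a human expects for guest IDs. */
static int cmp_lex(const void *a, const void *b) {
    return strcmp(*(char *const *)a, *(char *const *)b);
}

static int cmp_num(const void *a, const void *b) {
    long x = strtol(*(char *const *)a, NULL, 10);
    long y = strtol(*(char *const *)b, NULL, 10);
    return (x > y) - (x < y);
}

int main(void) {
    char *ids[] = { "110", "20", "9", "100" };
    qsort(ids, 4, sizeof(ids[0]), cmp_lex);
    printf("lexicographic: %s %s %s %s\n", ids[0], ids[1], ids[2], ids[3]);
    qsort(ids, 4, sizeof(ids[0]), cmp_num);
    printf("numeric: %s %s %s %s\n", ids[0], ids[1], ids[2], ids[3]);
    return 0;
}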

[pve-devel] [RFC/PATCH v2 manager 8/8] don't group by node in get_included_guests

2020-10-22 Thread Fabian Ebner
Seems to simplify the handling for two of the callers and not complicate it for the remaining one. Including the type as well avoids the need to use the vmlist again in the included_volumes API call. In get_included_volumes, returning {} in the else branch was made explicit. Signed-off-by: Fabia

[pve-devel] [PATCH v2 manager 3/8] only use plugin after truthiness check

2020-10-22 Thread Fabian Ebner
Commit 62fc2aa9fa2eb82596f98aa014d3b0ccfc0ec542 introduced a usage of plugin before the truthiness check for plugin. At the moment it might not be possible to trigger this anymore, because of the guest inclusion rework that happened later on. But to make tasks for non-existent guest IDs visibly fail

[pve-devel] [PATCH v2 manager 5/8] order guest IDs numerically in exec_backup

2020-10-22 Thread Fabian Ebner
The assumption that they already are sorted is no longer valid, because of the IDs for non-existent guests. Signed-off-by: Fabian Ebner --- Should also be more future-proof to do it locally. This could be squashed into either the previous or the following patch. PVE/VZDump.pm | 3 ++- 1 file

[pve-devel] [PATCH qemu 2/2] PVE: Don't call job_cancel in coroutines

2020-10-22 Thread Stefan Reiter
...because it hangs on cancelling other jobs in the txn if you do. Signed-off-by: Stefan Reiter --- pve-backup.c | 26 +- 1 file changed, 25 insertions(+), 1 deletion(-) diff --git a/pve-backup.c b/pve-backup.c index 9179754dcb..af2db0d4b9 100644 --- a/pve-backup.c +++ b
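QEMU's usual remedy for "must not run this from a coroutine" is to defer the call out of coroutine context (e.g. via a bottom half); whether this patch does exactly that is not visible from the preview. A hypothetical standalone analogue of the defer-instead-of-call-inline pattern:

#include <stdio.h>

typedef void (*DeferredFn)(void *opaque);

#define MAX_DEFERRED 16
static struct { DeferredFn fn; void *opaque; } queue[MAX_DEFERRED];
static int queued;

static int in_coroutine; /* stand-in for qemu_in_coroutine() */

static void cancel_job(void *opaque) {
    printf("cancelling job %s\n", (const char *)opaque);
}

static void request_cancel(void *job) {
    if (in_coroutine && queued < MAX_DEFERRED) {
        /* running it inline here could hang on other jobs in the txn */
        queue[queued].fn = cancel_job;
        queue[queued].opaque = job;
        queued++;
    } else {
        cancel_job(job);
    }
}

static void main_loop_drain(void) {
    for (int i = 0; i < queued; i++)
        queue[i].fn(queue[i].opaque);
    queued = 0;
}

int main(void) {
    in_coroutine = 1;
    request_cancel("backup-vm-100");
    in_coroutine = 0;
    main_loop_drain(); /* the cancel actually runs here */
    return 0;
}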

[pve-devel] [PATCH qemu 1/2] PVE: Don't expect complete_cb to be called outside coroutine

2020-10-22 Thread Stefan Reiter
We're at the mercy of the rest of QEMU here, and it sometimes decides to call pvebackup_complete_cb from a coroutine. This really doesn't matter to us, so don't assert and crash on it. Signed-off-by: Stefan Reiter --- pve-backup.c | 7 +++ 1 file changed, 3 insertions(+), 4 deletions(-) dif

[pve-devel] [PATCH 0/2] QEMU backup cancellation fixes

2020-10-22 Thread Stefan Reiter
Two smaller bugfixes for qmp_backup_cancel that would lead to VM hangs or wrongly aborted backups. Sent as separate patches to highlight the changes, but they can probably be squashed into some of our other patches as well (lmk if I should do that). I also got dirty bitmap migrate working, but still n

Re: [pve-devel] [PATCH 0/2] QEMU backup cancellation fixes

2020-10-22 Thread Dominik Csapak
No code review, as I am not very qemu-coroutine savvy, but I tested it and it solves my original problem. Short summary of it: starting a backup that runs into a timeout and then trying to cancel it resulted in a hanging qemu process and an open backup task (on the PBS side) that finished only when killi

Re: [pve-devel] [PATCH] disk management: Add support for additional Crucial SSDs

2020-10-22 Thread Dominik Csapak
Hi, sorry for the late answer and thanks for your contribution :) First, if you want to contribute, please sign the Harmony CLA and send it to us (see https://pve.proxmox.com/wiki/Developer_Documentation for details). Secondly, we generally do not want to start an exhaustive list of vendor/models,

[pve-devel] Bug 2350: storage_migrate does not work if zfs feature@encryption=enabled

2020-10-22 Thread Andreas Palm
Hi! About a month ago I posted a comment on bug #2350 but have not gotten any response yet. I would really like to help here, as this problem is affecting us on a daily basis. Can someone look into this and share his/her thoughts? Kind regards, Andreas

[pve-devel] [PATCH] PVE: fix and clean up error handling for create_backup_jobs

2020-10-22 Thread Stefan Reiter
No more weird bool returns, just the standard "errp" format used everywhere else too. With this, if backup_job_create fails, the error message is actually returned over QMP and can be shown to the user. Also add a job_cancel_sync before job_unref, since a job must be in STATUS_NULL to be deleted b
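The "errp format" refers to QEMU's convention of an Error **errp out-parameter filled via error_setg(). A simplified self-contained imitation of the calling convention (the Error type and setter below are stand-ins, not QEMU's real machinery):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct Error { char msg[256]; } Error;

static void set_error(Error **errp, const char *msg) {
    if (!errp)
        return; /* caller chose to ignore errors */
    *errp = malloc(sizeof(Error));
    snprintf((*errp)->msg, sizeof((*errp)->msg), "%s", msg);
}

/* errp-style function: no bool/int return convention needed; the
 * caller just checks whether *errp was set. */
static void create_backup_job(const char *dev, Error **errp) {
    if (strcmp(dev, "bad") == 0) {
        set_error(errp, "backup job creation failed for device 'bad'");
        return;
    }
    /* ... set up the job ... */
}

int main(void) {
    Error *err = NULL;
    create_backup_job("bad", &err);
    if (err) {
        fprintf(stderr, "%s\n", err->msg); /* message can travel over QMP */
        free(err);
    }
    return 0;
}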

[pve-devel] [PATCH qemu 1/4] migration/block-dirty-bitmap: fix larger granularity bitmaps

2020-10-22 Thread Stefan Reiter
sectors_per_chunk is a 64 bit integer, but the calculation would be done in 32 bits, leading to an overflow for coarse bitmap granularities. If that results in the value 0, it leads to a hang where no progress is made but send_bitmap_bits is constantly called with nr_sectors being 0. Reviewed-by:
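The underlying C pitfall: when every operand of an expression is 32-bit, the arithmetic is performed in 32 bits even if the result is assigned to a 64-bit variable. A minimal reproduction with illustrative values (the exact expression in the patch may differ):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t chunk_size = 1U << 16;
    uint32_t granularity = 1U << 20; /* coarse bitmap granularity */

    /* done entirely in 32 bits, wraps to 0 before the widening: */
    uint64_t bad = chunk_size * 8 * granularity;
    /* widening one operand first keeps the whole computation 64-bit: */
    uint64_t good = (uint64_t)chunk_size * 8 * granularity;

    printf("bad=%llu good=%llu\n",
           (unsigned long long)bad, (unsigned long long)good);
    /* if the result truncates to 0, send_bitmap_bits makes no progress */
    return 0;
}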

[pve-devel] [PATCH 0/4] Keep dirty-bitmaps for PBS during migration

2020-10-22 Thread Stefan Reiter
Allow dirty bitmaps to persist over a live migration, allowing incremental backups even after a VM has been moved to another node. Migrating the dirty-bitmaps themselves is supported natively by QEMU, only requiring a fix for a bug leading to hangs when migrating bitmaps with a granularity as low

[pve-devel] [PATCH qemu-server 4/4] migrate: enable dirty-bitmap migration

2020-10-22 Thread Stefan Reiter
We query QEMU whether it's safe before enabling it, as on versions without the necessary patches it would not only be useless, but could actually lead to hangs. PBS state is always migrated, as it's a small amount of data anyway, so we don't need to set a specific flag for it. Signed-off-by: Stefan Reit

[pve-devel] [PATCH qemu 2/4] PVE: Migrate dirty bitmap state via savevm

2020-10-22 Thread Stefan Reiter
QEMU provides 'savevm' registrations as a mechanism for arbitrary state to be migrated along with a VM. Use this to send a serialized version of dirty bitmap state data from proxmox-backup-qemu, and restore it on the target node. Also add a flag to query-proxmox-support so qemu-server can determin

[pve-devel] [PATCH proxmox-backup-qemu 3/4] add state serializing and loading functions

2020-10-22 Thread Stefan Reiter
For dirty-bitmap migration, QEMU also needs to move the static state of the library to the target. proxmox_{import,export}_state provide a means of accessing said data in a serialized fashion. QEMU treats the state as some unknown quantity of bytes and the result does not need to be human-readable
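The preview names proxmox_{import,export}_state but not their signatures, so the shapes below are assumptions; the essential contract is an opaque byte buffer that QEMU streams through savevm without interpreting. Stubbed here so the sketch is self-contained:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Assumed interface shape -- the real prototypes live in
 * proxmox-backup-qemu's C header and may differ. */
static uint8_t *export_state(size_t *len) {
    static const char state[] = "opaque-bitmap-state"; /* stub payload */
    *len = sizeof(state);
    uint8_t *buf = malloc(*len);
    memcpy(buf, state, *len);
    return buf;
}

static void import_state(const uint8_t *data, size_t len) {
    (void)data;
    printf("restored %zu bytes of PBS state on the target\n", len);
}

int main(void) {
    /* source side exports, the bytes ride along with the migration
     * stream, the target side imports */
    size_t len;
    uint8_t *buf = export_state(&len);
    import_state(buf, len); /* in reality this runs on the target node */
    free(buf);
    return 0;
}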

[pve-devel] applied: [PATCH manager] ui: boot order: handle cloudinit correctly

2020-10-22 Thread Thomas Lamprecht
heuristically.. Signed-off-by: Thomas Lamprecht --- and it's biting us once again to handle the cloudinit temp build like a cdrom drive... www/manager6/qemu/BootOrderEdit.js | 6 -- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/www/manager6/qemu/BootOrderEdit.js b/www/mana

[pve-devel] applied-series: [PATCH manager v2 1/5] ceph: split out pool set into own method

2020-10-22 Thread Thomas Lamprecht
On 19.10.20 12:39, Alwin Antreich wrote: > to reduce code duplication and make it easier to add more options for > pool commands. > > Use a new rados object for each 'osd pool set', as each command can set > an option independent of the previous commands success/failure. On > failure a new rados o

[pve-devel] applied-partially: [PATCH-SERIES v2 manager] Make backup with IDs for non-existent guests visibly fail

2020-10-22 Thread Thomas Lamprecht
On 22.10.20 12:30, Fabian Ebner wrote: > #1 and #2 are just cleanups > > #3 and #4 make the necessary changes for the improved behavior > by ensuring that exec_backup_task will cleanly fail when there > is no plugin specified, and then including the orphaned IDs > without assigning them a plugin.

[pve-devel] applied: [PATCH manager] ui: Fix #2827: Add verify SSL cert checkbox for ldap

2020-10-22 Thread Thomas Lamprecht
On 15.10.20 12:00, Dominic Jäger wrote: > Because the option is too important to be hidden in CLI. > > Signed-off-by: Dominic Jäger > --- > I haven't managed to test this against an LDAP server yet, but the GUI > elements > go on and off as I had it in mind and the options in /etc/pve/domains.cf

[pve-devel] applied: [PATCH manager] partially fix #3056: namespace vzdump tmpdir with vmid

2020-10-22 Thread Thomas Lamprecht
On 19.10.20 16:15, Dominik Csapak wrote: > this fixes an issue where a rogue running backup would upload the vm > config of a later backup in a backup job > > instead now that directory gets deleted and the config is not > available anymore > > we cannot really keep those directories around until

[pve-devel] applied: [PATCH manager] ceph: gui: add device class select on OSD create

2020-10-22 Thread Thomas Lamprecht
On 15.10.20 10:12, Alwin Antreich wrote: > Signed-off-by: Alwin Antreich > --- > www/manager6/ceph/OSD.js | 17 + > 1 file changed, 17 insertions(+) > > applied, thanks!

[pve-devel] applied: SPAM: [PATCH docs] pveum: Add information about realm certificates

2020-10-22 Thread Thomas Lamprecht
On 15.10.20 12:00, Dominic Jäger wrote: > As explained by Dominik and Fabian [0]. > > [0] https://bugzilla.proxmox.com/show_bug.cgi?id=2827 > > Signed-off-by: Dominic Jäger > --- > pveum.adoc | 5 + > 1 file changed, 5 insertions(+) > > applied, thanks! I dropped some trailing white space

[pve-devel] applied: [PATCH installer] fix #3057: remove ext3 option from installer

2020-10-22 Thread Thomas Lamprecht
On 05.10.20 14:13, Oguz Bektas wrote: > we can safely remove this from the fs options > > nobody uses this anymore, and it just ends up causing problems like in > [0] > > [0]: https://forum.proxmox.com/threads/emlink-too-many-links.73108/ > > Signed-off-by: Oguz Bektas > --- > proxinstall | 10

Re: [pve-devel] [PATCH pve-manager] pvestatd: stream host pressure counters

2020-10-22 Thread Alexandre Derumier
Hi Dietmar, I would also like to add some improvements for VM memory/cpu stats. For cpu, currently we only monitor the qemu process cpu usage, but with virtio-net + vhost-net, we are missing vhost-* process cpu usage. (For VMs with a lot of traffic, this is really significant.) I would like to a
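vhost-net does its packet work in kernel threads (named vhost-<qemu_pid>) that live outside the QEMU process, so per-process accounting misses them. A Linux-only C sketch that sums utime+stime, fields 14 and 15 of /proc/<pid>/stat; the pids below are hypothetical:

#include <stdio.h>

/* Return utime+stime (clock ticks) for one pid, or -1 on error. The
 * comm field may contain spaces, so skip past its closing ')'. */
static long proc_cpu_ticks(int pid) {
    char path[64];
    snprintf(path, sizeof(path), "/proc/%d/stat", pid);
    FILE *f = fopen(path, "r");
    if (!f)
        return -1;
    long utime = 0, stime = 0;
    int ok = fscanf(f,
        "%*d (%*[^)]) %*c %*d %*d %*d %*d %*d %*u %*u %*u %*u %*u %ld %ld",
        &utime, &stime) == 2;
    fclose(f);
    return ok ? utime + stime : -1;
}

int main(void) {
    int qemu_pid = 1234;  /* hypothetical QEMU main process */
    int vhost_pid = 1240; /* hypothetical vhost-1234 kernel thread */
    long q = proc_cpu_ticks(qemu_pid);
    long v = proc_cpu_ticks(vhost_pid);
    if (q >= 0 && v >= 0)
        printf("combined cpu ticks: %ld\n", q + v);
    return 0;
}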