On October 21, 2020 5:17 pm, Stefan Reiter wrote:
> On 10/21/20 1:49 PM, Fabian Grünbichler wrote:
>> by computing and remembering the ID digest of a static string, we can
>> detect when the passed-in key has changed without keeping a copy of it
>> around in between backup jobs.
>>
>> this is a fol
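A standalone sketch of the mechanism quoted above; the fixed input string and
the use of OpenSSL's HMAC-SHA256 are illustrative assumptions, not what the
actual patch does. The point is only that a digest derived from the key can be
remembered instead of the key itself (compiles with -lcrypto):

#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <stdbool.h>
#include <string.h>

/* digest remembered between backup jobs instead of the key itself */
static unsigned char remembered[32];
static bool have_digest = false;

/* returns true if the passed-in key differs from the one seen last time */
static bool key_changed(const unsigned char *key, int key_len)
{
    static const unsigned char fixed[] = "static fingerprint input"; /* assumed */
    unsigned char digest[32];
    unsigned int len = 0;

    /* derive a fingerprint of the key without storing the key */
    HMAC(EVP_sha256(), key, key_len, fixed, sizeof(fixed), digest, &len);

    bool changed = have_digest && memcmp(digest, remembered, sizeof(digest)) != 0;
    memcpy(remembered, digest, sizeof(digest));
    have_digest = true;
    return changed;
}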
by collecting all the guest IDs first.
Signed-off-by: Fabian Ebner
---
PVE/API2/BackupInfo.pm | 18 +++---
1 file changed, 3 insertions(+), 15 deletions(-)
diff --git a/PVE/API2/BackupInfo.pm b/PVE/API2/BackupInfo.pm
index 909a5de1..4c461e59 100644
--- a/PVE/API2/BackupInfo.pm
+++ b/PVE/API2/BackupInfo.pm
This way, there will be a backup task (within the big worker task)
for such IDs, which will then visibly fail (i.e. the failure is also
visible in the notification mail) with, e.g.:
unable to find VM '123'
In get_included_guests, the key '' was chosen for the orphaned IDs,
because it cannot possibly denote a node.
Signed-off-by: Fabian Ebner
---
PVE/VZDump.pm | 1 -
1 file changed, 1 deletion(-)
diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
index 542228d6..ee4e68b5 100644
--- a/PVE/VZDump.pm
+++ b/PVE/VZDump.pm
@@ -1192,7 +1192,6 @@ sub stop_running_backups {
sub get_included_guests {
my ($job) = @_;
#1 and #2 are just cleanups
#3 and #4 make the necessary changes for the improved behavior
by ensuring that exec_backup_task will cleanly fail when there
is no plugin specified, and then including the orphaned IDs
without assigning them a plugin. This is closer to the behavior
of PVE 6.0 and ensur
Commit be30864709752195926f0a06c8f0d4d11c3c3302 moved the
all/exclude logic into the single method
Signed-off-by: Fabian Ebner
---
test/vzdump_guest_included_test.pl | 4 ----
1 file changed, 4 deletions(-)
diff --git a/test/vzdump_guest_included_test.pl b/test/vzdump_guest_included_test.pl
in
The skip list was not always sorted if there were external IDs for multiple
external nodes.
Signed-off-by: Fabian Ebner
---
PVE/VZDump.pm | 7 +--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
index e1c26b42..2f31c534 100644
--- a/PVE/VZDump.pm
+++ b/PVE/VZDump.pm
This seems to simplify the handling for two of the callers without
complicating it for the remaining one.
Including the type as well avoids the need to use the vmlist again in the
included_volumes API call.
In get_included_volumes, returning {} in the else branch was made explicit.
Signed-off-by: Fabia
Commit 62fc2aa9fa2eb82596f98aa014d3b0ccfc0ec542 introduced
a usage of plugin before the truthiness check for plugin.
At the moment it might not be possible to trigger this anymore,
because of the guest inclusion rework that happened later on.
But to make tasks for non-existent guest IDs visibly fail
The assumption that they are already sorted is no longer valid,
because of the IDs for non-existent guests.
Signed-off-by: Fabian Ebner
---
Should also be more future-proof to do it locally.
This could be squashed into either the previous or
the following patch.
PVE/VZDump.pm | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
...because it hangs on cancelling other jobs in the txn if you do.
Signed-off-by: Stefan Reiter
---
pve-backup.c | 26 +-
1 file changed, 25 insertions(+), 1 deletion(-)
diff --git a/pve-backup.c b/pve-backup.c
index 9179754dcb..af2db0d4b9 100644
--- a/pve-backup.c
+++ b/pve-backup.c
We're at the mercy of the rest of QEMU here, and it sometimes decides to
call pvebackup_complete_cb from a coroutine. This really doesn't matter
to us, so don't assert and crash on it.
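A minimal sketch of that change, assuming the QEMU tree around it; this is not
the actual pve-backup.c hunk, just the shape of it:

/*
 * The completion callback no longer insists on running outside coroutine
 * context. qemu_in_coroutine() (from "qemu/coroutine.h") may be true or
 * false here; the completion handling below never blocks or yields, so
 * both cases are fine.
 */
static void pvebackup_complete_cb(void *opaque, int ret)
{
    /* was: assert(!qemu_in_coroutine()); */

    /* ... unchanged completion handling ... */
}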
Signed-off-by: Stefan Reiter
---
pve-backup.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
dif
Two smaller bugfixes for qmp_backup_cancel that would lead to VM hangs or
wrongly aborted backups. Sent as separate patches to highlight the changes, but
can probably be squashed into some of our other patches as well (lmk if I should
do that).
I also got dirty bitmap migrate working, but still n
No code review, as I am not very QEMU-coroutine savvy,
but I tested it and it solves my original problem.
Short summary of it:
Starting a backup that runs into a timeout and then trying to cancel
it resulted in a hanging QEMU process and an open backup task
(on the PBS side) that finished only when killi
Hi,
sorry for the late answer and thanks for your contribution :)
First, if you want to contribute, please sign the Harmony CLA and send it
to us (https://pve.proxmox.com/wiki/Developer_Documentation for details).
Secondly, we generally do not want to start an exhaustive list of
vendor/models,
Hi!
About a month ago I posted a comment on bug #2350 but did not get any
response yet. I would really like to help here as this problem is
affecting us on a daily basis.
Can someone look into this and share his/her thoughts?
Kind regards
Andreas
No more weird bool returns, just the standard "errp" format used
everywhere else too. With this, if backup_job_create fails, the error
message is actually returned over QMP and can be shown to the user.
Also add a job_cancel_sync before job_unref, since a job must be in
STATUS_NULL to be deleted b
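A rough sketch of both points using QEMU's usual Error and job APIs; the
helper names (start_one_backup, setup_one_backup_job) are hypothetical,
AioContext locking is omitted, and this only compiles inside the QEMU tree:

#include "qapi/error.h"
#include "qemu/job.h"
#include "block/blockjob.h"

BlockJob *setup_one_backup_job(void *di, Error **errp); /* hypothetical */

/* caller side: propagate errors via errp instead of returning a bare bool */
static BlockJob *start_one_backup(void *di, Error **errp)
{
    Error *local_err = NULL;
    BlockJob *job = setup_one_backup_job(di, &local_err);
    if (!job) {
        error_propagate(errp, local_err); /* the message reaches the QMP client */
        return NULL;
    }
    return job;
}

/* teardown side: a job has to reach STATUS_NULL before it can be deleted */
static void drop_backup_job(BlockJob *job)
{
    job_cancel_sync(&job->job); /* drives the job into STATUS_NULL */
    job_unref(&job->job);       /* only now may the last reference go away */
}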
sectors_per_chunk is a 64 bit integer, but the calculation would be done
in 32 bits, leading to an overflow for coarse bitmap granularities.
If that results in the value 0, it leads to a hang where no progress is
made but send_bitmap_bits is constantly called with nr_sectors being 0.
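A standalone illustration of that overflow; the constants and names below are
assumptions rather than the exact QEMU code, but the arithmetic is the point:
with a coarse (here 4 MiB) granularity the 32-bit product wraps to 0, which is
exactly the no-progress case, while widening to 64 bits first gives the
intended value.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t chunk_size = 1 << 10;    /* bytes sent per chunk (assumed) */
    uint32_t granularity = 4u << 20;  /* 4 MiB dirty-bitmap granularity */
    int sector_bits = 9;              /* 512-byte sectors */

    /* buggy: the product is computed in 32 bits and wraps to 0 here */
    uint64_t bad = (uint64_t)(chunk_size * 8 * granularity) >> sector_bits;

    /* fixed: widen before multiplying so the product stays in 64 bits */
    uint64_t good = ((uint64_t)chunk_size * 8 * granularity) >> sector_bits;

    printf("32-bit sectors_per_chunk: %llu (no progress)\n",
           (unsigned long long)bad);
    printf("64-bit sectors_per_chunk: %llu\n",
           (unsigned long long)good);
    return 0;
}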
Reviewed-by:
Allow dirty bitmaps to persist over a live migration, allowing incremental
backups even after a VM has been moved to another node.
Migrating the dirty-bitmaps themselves is supported natively by QEMU, only
requiring a fix for a bug leading to hangs when migrating bitmaps with a
granularity as low
We query QEMU whether it's safe before enabling it, as on versions without
the necessary patches it would not only be useless, but could actually
lead to hangs.
PBS state is always migrated, as it's a small amount of data anyway, so
we don't need to set a specific flag for it.
Signed-off-by: Stefan Reit
QEMU provides 'savevm' registrations as a mechanism for arbitrary state
to be migrated along with a VM. Use this to send a serialized version of
dirty bitmap state data from proxmox-backup-qemu, and restore it on the
target node.
Also add a flag to query-proxmox-support so qemu-server can determin
For dirty-bitmap migration, QEMU also needs to move the static state of
the library to the target. proxmox_{import,export}_state provide a means
of accessing said data in a serialized fashion.
QEMU treats the state as some unknown quantity of bytes and the result
does not need to be human-readable
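A sketch of how the two pieces fit together, using QEMU's savevm handler API;
the "pbs-state" section name and the pbs_state_export/import helpers are
stand-ins here, not the actual proxmox-backup-qemu entry points, and this only
compiles inside the QEMU tree:

#include "qemu/osdep.h"
#include "migration/register.h"
#include "migration/qemu-file.h"

const uint8_t *pbs_state_export(size_t *len);         /* hypothetical */
int pbs_state_import(const uint8_t *buf, size_t len); /* hypothetical */

/* outgoing side: dump the opaque library state into the migration stream */
static void pbs_state_save(QEMUFile *f, void *opaque)
{
    size_t len = 0;
    const uint8_t *buf = pbs_state_export(&len);

    qemu_put_be64(f, len);
    qemu_put_buffer(f, buf, len);
}

/* incoming side: hand the same bytes back to the library, unparsed */
static int pbs_state_load(QEMUFile *f, void *opaque, int version_id)
{
    uint64_t len = qemu_get_be64(f);
    g_autofree uint8_t *buf = g_malloc(len);

    qemu_get_buffer(f, buf, len);
    return pbs_state_import(buf, len);
}

static SaveVMHandlers pbs_state_handlers = {
    .save_state = pbs_state_save,
    .load_state = pbs_state_load,
};

void pbs_state_register(void)
{
    register_savevm_live("pbs-state", 0, 1, &pbs_state_handlers, NULL);
}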
heuristically..
Signed-off-by: Thomas Lamprecht
---
and it's biting us once again to handle the cloudinit temp build like a cdrom
drive...
www/manager6/qemu/BootOrderEdit.js | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/www/manager6/qemu/BootOrderEdit.js b/www/manager6/qemu/BootOrderEdit.js
On 19.10.20 12:39, Alwin Antreich wrote:
> to reduce code duplication and make it easier to add more options for
> pool commands.
>
> Use a new rados object for each 'osd pool set', as each command can set
> an option independent of the previous command's success/failure. On
> failure a new rados o
On 22.10.20 12:30, Fabian Ebner wrote:
> #1 and #2 are just cleanups
>
> #3 and #4 make the necessary changes for the improved behavior
> by ensuring that exec_backup_task will cleanly fail when there
> is no plugin specified, and then including the orphaned IDs
> without assigning them a plugin.
On 15.10.20 12:00, Dominic Jäger wrote:
> Because the option is too important to be hidden in the CLI.
>
> Signed-off-by: Dominic Jäger
> ---
> I haven't managed to test this against an LDAP server yet, but the GUI
> elements
> go on and off as I had in mind and the options in /etc/pve/domains.cf
On 19.10.20 16:15, Dominik Csapak wrote:
> this fixes an issue where a rogue running backup would upload the vm
> config of a later backup in a backup job
>
> instead now that directory gets deleted and the config is not
> available anymore
>
> we cannot really keep those directories around until
On 15.10.20 10:12, Alwin Antreich wrote:
> Signed-off-by: Alwin Antreich
> ---
> www/manager6/ceph/OSD.js | 17 +
> 1 file changed, 17 insertions(+)
>
>
applied, thanks!
On 15.10.20 12:00, Dominic Jäger wrote:
> As explained by Dominik and Fabian [0].
>
> [0] https://bugzilla.proxmox.com/show_bug.cgi?id=2827
>
> Signed-off-by: Dominic Jäger
> ---
> pveum.adoc | 5 +
> 1 file changed, 5 insertions(+)
>
>
applied, thanks! I dropped some trailing white space
On 05.10.20 14:13, Oguz Bektas wrote:
> we can safely remove this from the fs options
>
> nobody uses this anymore, and it just ends up causing problems like in
> [0]
>
> [0]: https://forum.proxmox.com/threads/emlink-too-many-links.73108/
>
> Signed-off-by: Oguz Bektas
> ---
> proxinstall | 10
Hi Dietmar,
I would also like to add some improvements for VM memory/CPU stats.
For CPU, currently we only monitor the QEMU process CPU usage, but
with virtio-net + vhost-net, we are missing the vhost-* process CPU
usage. (For VMs with a lot of traffic, this is really significant.)
I would like to a
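One possible way to account for the missing time, sketched standalone (not
existing PVE code): vhost worker kthreads show up as processes named
"vhost-<qemu pid>", so their utime+stime from /proc can be added to the
guest's CPU usage. Values are in clock ticks (divide by sysconf(_SC_CLK_TCK)
for seconds).

#include <dirent.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

static unsigned long long vhost_cpu_ticks(pid_t qemu_pid)
{
    char wanted[64];
    snprintf(wanted, sizeof(wanted), "vhost-%d", (int)qemu_pid);

    unsigned long long total = 0;
    DIR *proc = opendir("/proc");
    if (!proc)
        return 0;

    struct dirent *de;
    while ((de = readdir(proc))) {
        if (de->d_name[0] < '0' || de->d_name[0] > '9')
            continue;

        /* match the thread name against "vhost-<qemu pid>" */
        char path[256], comm[64] = "";
        snprintf(path, sizeof(path), "/proc/%s/comm", de->d_name);
        FILE *f = fopen(path, "r");
        if (!f)
            continue;
        if (fgets(comm, sizeof(comm), f))
            comm[strcspn(comm, "\n")] = '\0';
        fclose(f);
        if (strcmp(comm, wanted) != 0)
            continue;

        /* utime and stime are fields 14 and 15 of /proc/<pid>/stat */
        snprintf(path, sizeof(path), "/proc/%s/stat", de->d_name);
        f = fopen(path, "r");
        if (!f)
            continue;
        char line[512];
        if (fgets(line, sizeof(line), f)) {
            char *p = strrchr(line, ')'); /* comm ends at the last ')' */
            unsigned long long utime = 0, stime = 0;
            if (p && sscanf(p + 2,
                            "%*c %*d %*d %*d %*d %*d %*u %*u %*u %*u %*u %llu %llu",
                            &utime, &stime) == 2)
                total += utime + stime;
        }
        fclose(f);
    }
    closedir(proc);
    return total;
}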