The first call to job_cancel_sync() will cancel and free all jobs in the transaction, so ensure that it's called only once and get rid of the job_unref() that would operate on freed memory.
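To make that ownership rule concrete, here is a minimal standalone sketch; Txn, Job, toy_cancel_sync() and devs[] are invented stand-ins for QEMU's job/transaction API, not the real code, so this only illustrates the hazard the paragraph above describes:

    /*
     * Toy model (not QEMU code): cancelling one member of a transaction
     * tears down and frees every member, so the caller may cancel at most
     * once and must not drop another reference afterwards.
     */
    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct Txn Txn;
    typedef struct Job { Txn *txn; } Job;
    struct Txn { Job *jobs[8]; int n; };

    /* Stands in for job_cancel_sync(): the whole transaction goes away. */
    static void toy_cancel_sync(Job *job)
    {
        Txn *txn = job->txn;
        for (int i = 0; i < txn->n; i++) {
            free(txn->jobs[i]);
        }
        free(txn);
    }

    int main(void)
    {
        Txn *txn = calloc(1, sizeof(*txn));
        Job *devs[3]; /* stands in for the di->job pointers in di_list */

        for (int i = 0; i < 3; i++) {
            devs[i] = calloc(1, sizeof(Job));
            devs[i]->txn = txn;
            txn->jobs[txn->n++] = devs[i];
        }

        /* One cancel is enough; a second cancel, or freeing devs[i] here,
         * would operate on memory already freed by toy_cancel_sync(). */
        bool canceled = false;
        for (int i = 0; i < 3; i++) {
            if (!canceled && devs[i]) {
                toy_cancel_sync(devs[i]);
                canceled = true;
            }
        }
        printf("cancelled once, no use-after-free\n");
        return 0;
    }

The `canceled` flag here plays the same role as the one introduced by the first hunk of the patch below.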
It's also necessary to NULL backup_state.pbs in the error scenario, because a
subsequent backup_cancel QMP call (as happens in PVE when the backup QMP
command fails) would try to call proxmox_backup_abort() and run into a
segfault.

Signed-off-by: Fabian Ebner <f.eb...@proxmox.com>
---

All patches are intended to be ordered after
0038-PVE-Backup-Don-t-block-on-finishing-and-cleanup-crea.patch or could also
be squashed into that (while lifting the commit message to the main repo). Of
course I can also send that directly if this is ACKed.

Would be glad if someone could confirm that the cleanup for PBS is correct
like this. I didn't have the time to look into all the details there yet.

New in v2.

 pve-backup.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/pve-backup.c b/pve-backup.c
index 6f05796fad..dfaf4c93f8 100644
--- a/pve-backup.c
+++ b/pve-backup.c
@@ -509,6 +509,11 @@ static void create_backup_jobs_bh(void *opaque) {
     }
 
     if (*errp) {
+        /*
+         * It's enough to cancel one job in the transaction, the rest will
+         * follow automatically.
+         */
+        bool canceled = false;
         l = backup_state.di_list;
         while (l) {
             PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
@@ -519,12 +524,12 @@ static void create_backup_jobs_bh(void *opaque) {
                 di->target = NULL;
             }
 
-            if (di->job) {
+            if (!canceled && di->job) {
                 AioContext *ctx = di->job->job.aio_context;
                 aio_context_acquire(ctx);
                 job_cancel_sync(&di->job->job, true);
-                job_unref(&di->job->job);
                 aio_context_release(ctx);
+                canceled = true;
             }
         }
     }
@@ -974,6 +979,7 @@ err:
 
     if (pbs) {
         proxmox_backup_disconnect(pbs);
+        backup_state.pbs = NULL;
     }
 
     if (backup_dir) {
-- 
2.30.2
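P.S.: a similarly simplified sketch of the backup_state.pbs issue; Conn, conn_disconnect() and conn_abort() are invented stand-ins for the proxmox-backup-qemu handle and calls, so this only illustrates the stale-pointer hazard the commit message describes, not the actual API:

    #include <stdio.h>
    #include <stdlib.h>

    /* Stand-in for the PBS connection handle (backup_state.pbs). */
    typedef struct Conn { int id; } Conn;

    static Conn *state_pbs; /* models backup_state.pbs */

    /* Models proxmox_backup_disconnect(): the handle is freed. */
    static void conn_disconnect(Conn *c) { free(c); }

    /* Models proxmox_backup_abort(): dereferences the handle. */
    static void conn_abort(Conn *c) { printf("abort conn %d\n", c->id); }

    /* Error path of the backup setup (cf. the last hunk above). */
    static void backup_setup_failed(void)
    {
        if (state_pbs) {
            conn_disconnect(state_pbs);
            state_pbs = NULL; /* without this, backup_cancel() below hits freed memory */
        }
    }

    /* Roughly what a subsequent backup_cancel QMP call boils down to. */
    static void backup_cancel(void)
    {
        if (state_pbs) {
            conn_abort(state_pbs);
        }
    }

    int main(void)
    {
        state_pbs = calloc(1, sizeof(*state_pbs));
        state_pbs->id = 1;
        backup_setup_failed(); /* the QMP backup command failed */
        backup_cancel();       /* safe: the handle was cleared */
        return 0;
    }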