On 7/2/20 8:28 AM, Fabian Grünbichler wrote:
it should also be possible to keep the old bitmap (and associated backup
checksum) in this case? this is what bitmap-mode on-success is supposed
to do, but maybe errors are not triggering the right code paths?
The problem with on-success mode is that we have a backup_job per drive,
so if one drive fails and another succeeds, one bitmap will be applied
and the other won't, while PBS marks the whole backup as failed.
It is true, though, that in the case of an abort (as opposed to an
error) this patch could keep the last bitmap intact. Does 'ret'
differentiate between abort and error?
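
For illustration, a minimal self-contained sketch of the distinction I
have in mind, assuming (as I believe is the QEMU convention) that a
cancelled block job completes its callback with ret == -ECANCELED. The
helper name is made up and this is not the actual patch, just the
decision logic on its own; whether keeping the bitmap on abort is safe
still depends on the server side:

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical helper: drop the dirty bitmap only on real errors,
 * keep it on success and on a plain user abort (-ECANCELED). */
static bool should_release_bitmap(int ret)
{
    if (ret >= 0) {
        return false;   /* success: bitmap is in sync with the server */
    }
    if (ret == -ECANCELED) {
        return false;   /* abort: keep bitmap, retry incremental later */
    }
    return true;        /* real error: force a full backup next time */
}

int main(void)
{
    printf("success:   release=%d\n", should_release_bitmap(0));
    printf("cancelled: release=%d\n", should_release_bitmap(-ECANCELED));
    printf("EIO:       release=%d\n", should_release_bitmap(-EIO));
    return 0;
}
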
On July 1, 2020 2:17 pm, Dietmar Maurer wrote:
Note: We remove the device from di_list, so pvebackup_co_cleanup does
not handle this case.
---
  pve-backup.c | 6 ++++++
  1 file changed, 6 insertions(+)

diff --git a/pve-backup.c b/pve-backup.c
index 61a8b4d2a4..1c4f6cf9e0 100644
--- a/pve-backup.c
+++ b/pve-backup.c
@@ -318,6 +318,12 @@ static void pvebackup_complete_cb(void *opaque, int ret)
     // remove self from job queue
     backup_state.di_list = g_list_remove(backup_state.di_list, di);
 
+    if (di->bitmap && ret < 0) {
+        // on error or cancel we cannot ensure synchronization of dirty
+        // bitmaps with backup server, so remove all and do full backup next
+        bdrv_release_dirty_bitmap(di->bitmap);
+    }
+
     g_free(di);
 
     qemu_mutex_unlock(&backup_state.backup_mutex);
--
2.20.1

_______________________________________________
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

