[add migration maintainers]
On 24.09.24 15:56, Andrey Drobyshev wrote:
Instead of triggering an assert, let's just ignore the fact that the flag
is already set and return. We assume that this is safe to ignore;
otherwise the assert fails when migrating a paused VM back and forth.
Ideally we'd like a more sophisticated solution, e.g. not even scanning
the nodes which should be inactive at this point.
Signed-off-by: Andrey Drobyshev <andrey.drobys...@virtuozzo.com>
---
block.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/block.c b/block.c
index 7d90007cae..c1dcf906d1 100644
--- a/block.c
+++ b/block.c
@@ -6973,7 +6973,15 @@ static int GRAPH_RDLOCK bdrv_inactivate_recurse(BlockDriverState *bs)
         return 0;
     }
 
-    assert(!(bs->open_flags & BDRV_O_INACTIVE));
+    if (bs->open_flags & BDRV_O_INACTIVE) {
+        /*
+         * Return here instead of triggering the assert, as a workaround
+         * to prevent a failure when migrating a paused VM.
+         * Here we assume that if we're trying to inactivate a BDS that's
+         * already inactive, it's safe to just ignore it.
+         */
+        return 0;
+    }
 
     /* Inactivate this node */
     if (bs->drv->bdrv_inactivate) {
I doubt that this is the correct way to go.
As far as I understand, "inactive" actually means that "the storage does not
belong to qemu, but to someone else (another qemu process, for example), and
may be changed transparently". In turn this means that QEMU should do nothing
with inactive disks. So the problem is that nobody called bdrv_activate_all()
on the target, and we shouldn't ignore that.
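To illustrate what I mean, the ownership handoff I have in mind looks roughly
like this (a simplified C sketch of the idea, not the exact code paths;
local_err is just a placeholder):

    /* Source side, at migration completion: give the images away. */
    bdrv_inactivate_all();          /* sets BDRV_O_INACTIVE on the nodes */

    /* Destination side, once it takes over: reclaim the images. */
    bdrv_activate_all(&local_err);  /* clears BDRV_O_INACTIVE again */

If the destination never calls bdrv_activate_all() (e.g. because the VM stays
paused), the nodes keep BDRV_O_INACTIVE set, and a later outgoing migration
from that side runs into the very assert this patch removes.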
Hmm, I see that in process_incoming_migration_bh() we do call
bdrv_activate_all(), but only in some scenarios. Maybe the condition should
be less strict here.
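Quoting the guard roughly from memory (the exact condition may differ between
QEMU versions, so treat this as an approximation):

    /* process_incoming_migration_bh(), approximate shape of the guard: */
    if (!migrate_late_block_activate() ||
        (autostart && (!global_state_received() ||
                       global_state_get_runstate() == RUN_STATE_RUNNING))) {
        /* Make the images writable again before the guest can run. */
        bdrv_activate_all(&local_err);
    }

If I read it right, a VM that arrives paused with the late-block-activate
capability set never gets its nodes re-activated, which would match the
symptom described in the commit message.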
Why do we need any condition here at all? Don't we want to activate the block
layer on the target after migration anyway?
--
Best regards,
Vladimir