Hi! I need the community's help with savevm/loadvm.
I run QEMU like this:

./qemu-system-ppc64 \
    -drive file=virtimg/fc19_16GB.qcow2 \
    -nodefaults \
    -m "2048" \
    -machine "pseries" \
    -nographic \
    -vga "none" \
    -enable-kvm

The disk image is a 16 GB qcow2 image. Now I start the guest and do "savevm 1" and "loadvm 1" from the QEMU monitor. Everything works. Then I exit QEMU, make sure that the snapshot is there, and run QEMU as above plus "-loadvm 1". It fails with:

qemu-system-ppc64: qcow2: Loading snapshots with different disk size is not implemented
qemu-system-ppc64: Error -95 while activating snapshot '2' on 'scsi0-hd0'

The check was added by commit 90b277593df873d3a2480f002e2eb5fe1f8e5277 "qcow2: Save disk size in snapshot header". As I could not fully grasp the idea of that patch, I looked a bit deeper. This is the check:

int qcow2_snapshot_goto(BlockDriverState *bs, const char *snapshot_id)
{
    [...]
    if (sn->disk_size != bs->total_sectors * BDRV_SECTOR_SIZE) {
        error_report("qcow2: Loading snapshots with different disk "
                     "size is not implemented");
        ret = -ENOTSUP;
        goto fail;
    }

My understanding of the patch was that disk_size should remain 16 GB (0x4.0000.0000), as it uses bs->total_sectors and never changes it, and bs->growable is 0 for a qcow2 image because such an image is not really growable. At least the total_sectors value from the qcow2 file header does not change between QEMU starts.

However, qcow2_save_vmstate() sets bs->growable to 1 for a short time (commit 178e08a58f40dd5aef2ce774fe0850f5d0e56918 from 2009), and this triggers a branch in bdrv_co_do_writev() which changes bs->total_sectors. So when QEMU writes snapshots to the file, the disk_size field of a snapshot gets a bigger value (for example 0x4.007b.8180), and the check above fails. It does not fail if I do "loadvm" _in_the_same_run_ after "savevm", because QEMU then operates with the updated bs->total_sectors.

What would the proper fix be? Or is it not a bug at all, and should I be using something else instead of "-loadvm"? Thanks.

-- Alexey