--- Begin Message ---
> +* Introduce volume_support_qemu_snapshot() plugin method
> +  This method is used to know if a snapshot needs to be done by qemu
> +  or by the storage API.
> +  Returned values are:
> +    'internal' : supports snapshots with qemu internal snapshots
> +    'external' : supports snapshots with qemu external snapshots
> +    undef      : doesn't support qemu snapshots
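To illustrate the contract (the real method is a Perl storage-plugin method; the per-storage values below are made-up examples, not actual plugin behavior), a sketch could look like:

```python
# Hypothetical sketch of the volume_support_qemu_snapshot() contract.
# The per-storage mapping here is illustrative only.

def volume_support_qemu_snapshot(storage_type, volume_format):
    """Return how qemu can snapshot this volume:
    'internal' - qemu internal snapshot (driver snapshots the image),
    'external' - qemu external snapshot (backing-file chain),
    None       - qemu can't snapshot it; fall back to the storage API."""
    if storage_type == 'rbd':
        return 'internal'   # example: the qemu rbd driver can snapshot
    if volume_format == 'qcow2':
        return 'external'   # example: qcow2 supports backing chains
    return None             # example: raw volume on plain LVM
```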
>>The naming, description and returned values seem a bit of a mixed bag
>>here.
>>Also because "internal", from the POV of the storage, happens
>>"outside" of the storage.
Yes, indeed, I was a bit out of ideas for the naming, sorry :/
>>Also consider that technically the *beginning* of a snapshot chain
>>can even have a raw format, so I also wouldn't specifically pin the
>>ability to create a new volume using a backing volume on the current
>>*volume*.
Yes, Fabian wanted to use only qcow2 for the base image for now, to
avoid mixing both formats in the chain.
>>As a side note: Looking at the LVM plugin, it now reports "qcow2"
>>support, but refuses it unless an option is set. The list of which
>>formats are supported is currently not storage-config dependent,
>>which is a bit unfortunate there.
Do you mean my last follow-up about the "external-snapshots" option
for LVM? Because Fabian && Thomas wanted it as a safeguard while it's
still experimental.
>>How about one of these (a bit long but bear with me):
>> supports_snapshots_as_backed_volumes($scfg)
>> The storage:
>> - Can create volumes with backing volumes (required for NOT-running
>>   VMs)
>> - Allows renaming a volume into a snapshot such that calling
>>   `volume_snapshot` then recreates the original name... (BUT! See
>>   note [1] below for why I think this should not be a requirement!)
>>   (for running VMs)
>>combined with
>>
>> is_format_available($scfg, "qcow2")
>>
>>(to directly check for the optional support early, as in the LVM
>>plugin)
>>qemu-server's do_snapshot_type() then does:
>>
>> "storage" if
>> is tpmstate
>> OR is not running
>> OR format isn't qcow2 (this case would be new in qemu-server)
>> "external" if
>> supports_snapshots_as_backed_volumes() is true
>> AND is_format_available($scfg, "qcow2")
>> "internal" otherwise
You also have the case of rbd, where krbd snapshots need to be done at
the storage level, but rbd snapshots need to be done by qemu when the
VM is running.
So I think it's more something like:
-- OR format isn't qcow2 (this case would be new in qemu-server)
++ OR the qemu block driver supports .bdrv_snapshot_create
(currently it's only qcow2 && rbd:
https://github.com/search?q=repo%3Aqemu%2Fqemu%20%20.bdrv_snapshot_create%20&type=code)
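Putting that amendment together with the proposal above, the decision could be sketched as follows (Python for brevity; the parameter names and helper are assumptions, the real logic would live in qemu-server's Perl):

```python
# Sketch of the do_snapshot_type() decision being discussed, with the
# rbd amendment: the qemu-side path requires a block driver that
# implements .bdrv_snapshot_create (currently qcow2 and rbd).
# All names here are illustrative, not the real qemu-server API.

QEMU_SNAPSHOT_DRIVERS = {'qcow2', 'rbd'}  # have .bdrv_snapshot_create

def do_snapshot_type(is_tpmstate, running, block_driver,
                     supports_backed_volumes, qcow2_available):
    if is_tpmstate or not running \
            or block_driver not in QEMU_SNAPSHOT_DRIVERS:
        return 'storage'
    if supports_backed_volumes and qcow2_available:
        return 'external'
    return 'internal'
```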
>>Notes:
>>[1] Do we really need to rename the volume *outside* of the storage?
>>Why does eg. the LVM plugin need to distinguish between running and
>>not running at all? All it does is shove some `/dev/` names around,
>>after all. We should be able to go through the change-file/reopen
>>code in qemu-server regardless of when/who/where renames the files,
>>no?
>>
>>Taking a snapshot of a running VM does:
>> 1. rename volume "into" snapshot from out of qemu-server
>> 2. tell qemu to reopen the files under the new names
>> 3. call volume_snapshot which:
>>    - creates volume with the original name and the snapshot as
>>      backing
>> 4. have qemu open the disk by the original name for blockdev-snapshot
>>Can't we make this:
>> 1. tell qemu to reopen the original files with different *node ids*
>>    (since the clashing of those is the only issue we run into when
>>    simply dropping the $running case and skipping the early
>>    rename...)
>> 2. call volume_snapshot without a $running parameter
>> 3. continue as before (point 4 above)
OK, so we could try to:
1) reopen the current volume (vm-disk-...) with nodename=volname+snap
   before renaming it,
2) do the volume_snapshot in storage (rename the current volume to the
   snap, create a new current volume with the snap as backing),
3) add the new current volume blockdev + reopen it with
   blockdev-snapshot.
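For reference, the tail end of that sequence in raw QMP might look roughly like this (node names, the qcow2-on-LVM path, and the driver options are invented for illustration; blockdev-snapshot expects the overlay to be opened without a backing node attached, hence "backing": null):

```json
{ "execute": "blockdev-add", "arguments": {
    "driver": "qcow2", "node-name": "fmt-scsi0-current",
    "backing": null,
    "file": { "driver": "host_device",
              "filename": "/dev/vg0/vm-100-disk-0.qcow2" } } }

{ "execute": "blockdev-snapshot", "arguments": {
    "node": "fmt-scsi0-snap1", "overlay": "fmt-scsi0-current" } }
```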
Here, I'm not sure this works, because AFAIR, when you call
blockdev-snapshot,

    mon_cmd(
        $vmid, 'blockdev-snapshot',
        node => $snap_fmt_blockdev->{'node-name'},
        overlay => $new_fmt_blockdev->{'node-name'},
    );

it reads the filename from the snap blockdev node (and in this case,
it's still vm-disk-...) and writes it into the backing_file info of
the new overlay node.
I think I tried this in the past, but I'm not 100% sure. I'll try to
test it again this week.
I'm going on holiday for 3 weeks this Friday, so I'll not have time to
send a patch before then, but feel free to change|improve|rewrite my
code with something better :)
--- End Message ---
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel