On 1/20/21 4:44 AM, Daniel P. Berrangé wrote:
> savevm, loadvm and delvm are some of the few HMP commands that have never
> been converted to use QMP. The reasons for the lack of conversion are
> that they blocked execution of the event thread, and the semantics
> around choice of disks were ill-defined.
>
> Despite this downside, however, libvirt and applications using libvirt
> have used these commands for as long as QMP has existed, via the
> "human-monitor-command" passthrough command. IOW, while it is clearly
> desirable to be able to fix the problems, they are not a blocker to
> all real world usage.
>
> Meanwhile there is a need for other features which involve adding new
> parameters to the commands. This is possible with HMP passthrough, but
> it provides no reliable way for apps to introspect features, so using
> QAPI modelling is highly desirable.
>
> This patch thus introduces new snapshot-{load,save,delete} commands to
> QMP that are intended to replace the old HMP counterparts. The new
> commands are given different names, because they will be using the new
> QEMU job framework and thus will have diverging behaviour from the HMP
> originals. It would thus be misleading to keep the same name.
>
> While this design uses the generic job framework, the current impl is
> still blocking. The intention that the blocking problem is fixed later.
> None the less applications using these new commands should assume that
> they are asynchronous and thus wait for the job status change event to
> indicate completion.
>
> In addition to using the job framework, the new commands require the
> caller to be explicit about all the block device nodes used in the
> snapshot operations, with no built-in default heuristics in use.
>
> Note that the existing "query-named-block-nodes" can be used to query
> what snapshots currently exist for block nodes.
>
> Acked-by: Markus Armbruster <arm...@redhat.com>
> Signed-off-by: Daniel P. Berrangé <berra...@redhat.com>
> ---
> +++ b/migration/savevm.c
> @@ -2968,7 +2968,7 @@ bool load_snapshot(const char *name, const char *vmstate,
>      if (ret == 0) {
>          error_setg(errp, "Snapshot '%s' does not exist in one or more devices",
>                     name);
> -        return -1;
> +        return false;

This hunk belongs in 6/11.

> +++ b/qapi/job.json
> @@ -22,10 +22,17 @@
>  #
>  # @amend: image options amend job type, see "x-blockdev-amend" (since 5.1)
>  #
> +# @snapshot-load: snapshot load job type, see "snapshot-load" (since 5.2)
> +#
> +# @snapshot-save: snapshot save job type, see "snapshot-save" (since 5.2)
> +#
> +# @snapshot-delete: snapshot delete job type, see "snapshot-delete" (since 5.2)

s/5.2/6.0/g

> +++ b/qapi/migration.json
> @@ -1843,3 +1843,124 @@
>  # Since: 5.2
>  ##
>  { 'command': 'query-dirty-rate', 'returns': 'DirtyRateInfo' }
> +
> +##
> +# @snapshot-save:
> +#
> +# Save a VM snapshot
> +#
> +# @job-id: identifier for the newly created job
> +# @tag: name of the snapshot to create
> +# @vmstate: block device node name to save vmstate to
> +# @devices: list of block device node names to save a snapshot to
> +#
> +# Applications should not assume that the snapshot save is complete
> +# when this command returns. The job commands / events must be used
> +# to determine completion and to fetch details of any errors that arise.
> +#
> +# Note that execution of the guest CPUs may be stopped during the
> +# time it takes to save the snapshot. A future version of QEMU
> +# may ensure CPUs are executing continuously.
> +#
> +# It is strongly recommended that @devices contain all writable
> +# block device nodes if a consistent snapshot is required.
> +#
> +# If @tag already exists, an error will be reported
> +#
> +# Returns: nothing
> +#
> +# Example:
> +#
> +# -> { "execute": "snapshot-save",
> +#      "data": {
> +#         "job-id": "snapsave0",
> +#         "tag": "my-snap",
> +#         "vmstate": "disk0",
> +#         "devices": ["disk0", "disk1"]
> +#      }
> +#    }
> +# <- { "return": { } }
> +#
> +# Since: 6.0

The example would be wise to further show waiting for the job
completion event.

> +##
> +{ 'command': 'snapshot-save',
> +  'data': { 'job-id': 'str',
> +            'tag': 'str',
> +            'vmstate': 'str',
> +            'devices': ['str'] } }
> +
> +##
> +# @snapshot-load:
> +#
> +# Load a VM snapshot
> +#
> +# @job-id: identifier for the newly created job
> +# @tag: name of the snapshot to load.
> +# @vmstate: block device node name to load vmstate from
> +# @devices: list of block device node names to load a snapshot from
> +#
> +# Applications should not assume that the snapshot load is complete
> +# when this command returns. The job commands / events must be used
> +# to determine completion and to fetch details of any errors that arise.
> +#
> +# Note that execution of the guest CPUs will be stopped during the
> +# time it takes to load the snapshot.
> +#
> +# It is strongly recommended that @devices contain all writable
> +# block device nodes that can have changed since the original
> +# @snapshot-save command execution.
> +#
> +# Returns: nothing
> +#
> +# Example:
> +#
> +# -> { "execute": "snapshot-load",
> +#      "data": {
> +#         "job-id": "snapload0",
> +#         "tag": "my-snap",
> +#         "vmstate": "disk0",
> +#         "devices": ["disk0", "disk1"]
> +#      }
> +#    }
> +# <- { "return": { } }
> +#

Here as well.

> +# Since: 6.0
> +##
> +{ 'command': 'snapshot-load',
> +  'data': { 'job-id': 'str',
> +            'tag': 'str',
> +            'vmstate': 'str',
> +            'devices': ['str'] } }
> +
> +##
> +# @snapshot-delete:
> +#
> +# Delete a VM snapshot
> +#
> +# @job-id: identifier for the newly created job
> +# @tag: name of the snapshot to delete.
> +# @devices: list of block device node names to delete a snapshot from
> +#
> +# Applications should not assume that the snapshot save is complete
> +# when this command returns. The job commands / events must be used
> +# to determine completion and to fetch details of any errors that arise.
> +#
> +# Returns: nothing
> +#
> +# Example:
> +#
> +# -> { "execute": "snapshot-delete",
> +#      "data": {
> +#         "job-id": "snapdelete0",
> +#         "tag": "my-snap",
> +#         "devices": ["disk0", "disk1"]
> +#      }
> +#    }
> +# <- { "return": { } }
> +#
> +# Since: 6.0
> +##
> +{ 'command': 'snapshot-delete',
> +  'data': { 'job-id': 'str',
> +            'tag': 'str',
> +            'devices': ['str'] } }
> diff --git a/slirp b/slirp
> index 8f43a99191..ce94eba204 160000
> --- a/slirp
> +++ b/slirp
> @@ -1 +1 @@
> -Subproject commit 8f43a99191afb47ca3f3c6972f6306209f367ece
> +Subproject commit ce94eba2042d52a0ba3d9e252ebce86715e94275

Oops. This shouldn't be here.

> diff --git a/tests/qemu-iotests/310 b/tests/qemu-iotests/310
> new file mode 100755
> index 0000000000..41cec9ea8d
> --- /dev/null
> +++ b/tests/qemu-iotests/310
> @@ -0,0 +1,385 @@
> +#!/usr/bin/env bash
> +#
> +# Test which nodes are involved in internal snapshots
> +#
> +# Copyright (C) 2020 Red Hat, Inc.

Worth also mentioning 2021?

> +    # Next events vary depending on job type and
> +    # whether it succeeds or not.
> +    for evname in $@
> +    do
> +        _wait_event $QEMU_HANDLE $evname

TAB damage throughout this file.

> +echo
> +echo "===== Snapshot dual qcow2 image ====="
> +echo
> +
> +# We can snapshot multiple qcow2 disks at the same time 

extra space

> +++ b/tests/qemu-iotests/group
> @@ -318,4 +318,5 @@
>  307 rw quick export
>  308 rw
>  309 rw auto quick
> +310 rw quick
>  312 rw quick

Vladimir's work to get rid of the 'group' file will be a trivial merge
conflict, if it lands first.

Nearly there!

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org
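
P.S. By "showing waiting for the job completion event" I mean roughly the
following exchange (a sketch from memory of the job framework's
JOB_STATUS_CHANGE flow, not taken from this patch; intermediate statuses
and the query-jobs fields shown are approximate, and note I'm writing
"arguments" here, which is what QMP actually expects on the wire):

  -> { "execute": "snapshot-save",
       "arguments": { "job-id": "snapsave0",
                      "tag": "my-snap",
                      "vmstate": "disk0",
                      "devices": ["disk0", "disk1"] } }
  <- { "return": { } }
  <- { "event": "JOB_STATUS_CHANGE",
       "data": { "id": "snapsave0", "status": "created" } }
  <- { "event": "JOB_STATUS_CHANGE",
       "data": { "id": "snapsave0", "status": "running" } }
  <- { "event": "JOB_STATUS_CHANGE",
       "data": { "id": "snapsave0", "status": "concluded" } }
  -> { "execute": "query-jobs" }
  <- { "return": [ { "id": "snapsave0", "type": "snapshot-save",
                     "status": "concluded" } ] }

On failure the final query-jobs entry would carry an "error" string, which
is how callers are meant to fetch the details mentioned in the doc text.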