On 14/04/2012 00:32, Eric Blake wrote:
> But if the world conspires against me, such as libvirt going down, then
> qemu completing the reopen, then the guest VM halting itself so that the
> qemu process goes away, all before libvirt restarts, then I'm stuck
> figuring out whether qemu finished the job (so that when I restart the
> guest, I want to pivot the filename) or failed the job (so that when I
> restart the guest, I want to revert to the source).  To do this, I now
> have to create a new file on disk (not a pipe), pass in the fd in
> advance, and then call drive-reopen, as well as record that filename as
> the location where I will look as part of trying to re-establish
> connections with the guest when libvirtd restarts.

Yes.
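To put the whole dance in one place, the monitor conversation would look
more or less like this (the device name and paths are made up for the
example, and the arguments just follow the schema quoted below):

   -> { "execute": "getfd", "arguments": { "fdname": "witness0" } }
      (the descriptor of the just-created witness file travels along
       with this command, via SCM_RIGHTS)
   <- { "return": {} }

   -> { "execute": "drive-reopen",
        "arguments": { "device": "ide0-hd0",
                       "new-image-file": "/mnt/target/disk.qcow2",
                       "format": "qcow2",
                       "witness": "witness0" } }
   <- { "return": {} }

If libvirtd is not around when the reply comes, it can look at the
witness file after reconnecting: the witness is only written when the
switch to the destination succeeds, so a non-empty file tells libvirt
to pivot and an empty one to revert to the source.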
> I'm not quite sure how to expose this to upper-layer management
> applications when they are using libvirt transient guests, but that's
> not qemu's problem.

Do transient guests have persistent storage in /var while they are
running?

>> diff --git a/qapi-schema.json b/qapi-schema.json
>> index 0bf3a25..2e5a925 100644
>> --- a/qapi-schema.json
>> +++ b/qapi-schema.json
>> @@ -1228,6 +1228,13 @@
>>  #
>>  # @format: #optional the format of the new image, default is 'qcow2'.
>>  #
>> +# @witness: A file descriptor name that was passed via getfd. QEMU will
>> write
>
> Mark this #optional

Yes.

> Question - I know that 'drive-reopen' forces a block_job_cancel_sync()
> call before closing the source; how long can that take?

Not much; the mirroring job polls the dirty bitmap for new I/O every
100 ms, so it should take about as long as the bdrv_drain_all that is
also performed by drive-reopen and blockdev-snapshot-sync.

> So that does mean that a call to 'drive-reopen' could indeed
> take a very long time from initially sending the monitor command before
> I finally get a response of success or failure, and that while the
> response will be accurate, the whole intent of this patch is that
> libvirt might not be around to get the response, so we want something a
> bit more persistent.  Does this mean that if we add 'drive-reopen' to
> 'transaction', that transaction will be forced to wait for
> block_job_cancel_sync?

Transactions already wait for pending I/O to complete (bdrv_drain_all),
and mirroring that I/O to the target (block_job_cancel_sync) will not
take much longer.
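For completeness, if drive-reopen were added to 'transaction' I would
expect it to mirror the existing blockdev-snapshot-sync action; purely
hypothetical syntax:

   -> { "execute": "transaction",
        "arguments": { "actions": [
            { "type": "drive-reopen",
              "data": { "device": "ide0-hd0",
                        "new-image-file": "/mnt/target/disk.qcow2",
                        "witness": "witness0" } } ] } }
   <- { "return": {} }

The reply would simply arrive after both bdrv_drain_all and
block_job_cancel_sync have finished, exactly as for the standalone
command.

Paolo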