On 28.05.2025 at 17:34, Peter Xu wrote:
> Copy Kevin.
> 
> On Wed, May 28, 2025 at 07:21:12PM +0530, Anushree Mathur wrote:
> > Hi all,
> > 
> > 
> > When I try to migrate the guest from host1 to host2 with the following
> > command line:
> > 
> > date;virsh migrate --live --domain guest1 qemu+ssh://dest/system --verbose
> > --undefinesource --persistent --auto-converge --postcopy
> > --copy-storage-all;date
> > 
> > it fails with the following error message:
> > 
> > error: internal error: unable to execute QEMU command 'block-export-add':
> > Block node is read-only
> > 
> > HOST ENV:
> > 
> > qemu : QEMU emulator version 9.2.2
> > libvirt : libvirtd (libvirt) 11.1.0
> > Also seen with upstream QEMU
> > 
> > Steps to reproduce:
> > 1) Start the guest1
> > 2) Migrate it with the command as
> > 
> > date;virsh migrate --live --domain guest1 qemu+ssh://dest/system --verbose
> > --undefinesource --persistent --auto-converge --postcopy
> > --copy-storage-all;date
> > 
> > 3) It fails as follows:
> > error: internal error: unable to execute QEMU command 'block-export-add':
> > Block node is read-only

I assume this is about an inactive block node. Probably on the
destination, but that's not clear to me from the error message.
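
For reference, with --copy-storage-all libvirt starts an NBD server on the
destination and exports the disk there so that the source can mirror the
storage into it. I'd expect the failing call to be roughly something like
the following sketch (the actual export id and node-name depend on the
libvirt version and configuration, so take them as placeholders):

  { "execute": "block-export-add",
    "arguments": { "type": "nbd",
                   "id": "libvirt-1-format",
                   "node-name": "libvirt-1-format",
                   "writable": true } }

A writable export needs a writable (and, if I remember the check correctly,
active) node, which is why I'd suspect the destination side, where the image
stays inactive until the migration completes.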

> > Things I analyzed:
> > 1) This issue does not happen if I pass the --unsafe option to the virsh
> > migrate command

What does this translate to on the QEMU command line?

> > 2) The output of the qemu-monitor command also shows "ro" as false:
> > 
> > virsh qemu-monitor-command guest1 --pretty --cmd '{ "execute": "query-block"
> > }'
> > {
> >   "return": [
> >     {
> >       "io-status": "ok",
> >       "device": "",
> >       "locked": false,
> >       "removable": false,
> >       "inserted": {
> >         "iops_rd": 0,
> >         "detect_zeroes": "off",
> >         "image": {
> >           "virtual-size": 21474836480,
> >           "filename": "/home/Anu/guest_anu.qcow2",
> >           "cluster-size": 65536,
> >           "format": "qcow2",
> >           "actual-size": 5226561536,
> >           "format-specific": {
> >             "type": "qcow2",
> >             "data": {
> >               "compat": "1.1",
> >               "compression-type": "zlib",
> >               "lazy-refcounts": false,
> >               "refcount-bits": 16,
> >               "corrupt": false,
> >               "extended-l2": false
> >             }
> >           },
> >           "dirty-flag": false
> >         },
> >         "iops_wr": 0,
> >         "ro": false,
> >         "node-name": "libvirt-1-format",
> >         "backing_file_depth": 0,
> >         "drv": "qcow2",
> >         "iops": 0,
> >         "bps_wr": 0,
> >         "write_threshold": 0,
> >         "encrypted": false,
> >         "bps": 0,
> >         "bps_rd": 0,
> >         "cache": {
> >           "no-flush": false,
> >           "direct": false,
> >           "writeback": true
> >         },
> >         "file": "/home/Anu/guest_anu.qcow2"
> >       },
> >       "qdev": "/machine/peripheral/virtio-disk0/virtio-backend",
> >       "type": "unknown"
> >     }
> >   ],
> >   "id": "libvirt-26"
> > }

I assume this is still from the source, where the image is still active.

Also, it doesn't contain the recently introduced "active" field yet, which
could tell us something here. I believe you would still get "ro": false for
an inactive image if it's supposed to be read-write after the migration
completes.
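
With a new enough QEMU you should be able to see the state explicitly; I
believe something like this would show it per node (assuming libvirt lets
the command through and your build already has the field):

  virsh qemu-monitor-command guest1 --pretty \
      --cmd '{ "execute": "query-named-block-nodes", "arguments": { "flat": true } }'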

> > 
> > 3) The guest doesn't have a readonly element in its XML:
> > 
> > virsh dumpxml guest1 | grep readonly
> > 
> > 4) Tried setting proper permissions on the image as well:
> > 
> > -rwxrwxrwx. 1 qemu qemu 4.9G Apr 28 15:06 guest_anu.qcow2
> > 
> > 5) Checked the permissions of the storage pool as well; they are also correct.
> > 
> > 6) Found an older bug similar to this; pasting the link for reference:
> > 
> > 
> > https://patchwork.kernel.org/project/qemu-devel/patch/20170811164854.GG4162@localhost.localdomain/

What's happening in detail is more of a virsh/libvirt question. CCing
Peter Krempa; he might have an idea.

Kevin

