CC: libvirt devel list

Hi Kevin/Peter,

Thank you so much for addressing this issue. I tried out a few more things,
and here is my analysis:

Even when I removed the readonly option from the guest XML, I was still
seeing the migration failure; in the QEMU command line, auto-read-only was
still being set to true by default.

Then I set auto-read-only to false in the guest XML via a
qemu:commandline passthrough; this time it was actually applied, and the
migration worked!


Steps I tried:

1) Started the guest after adding the following snippet to the guest XML
(see the namespace note after these steps):

  <qemu:commandline>
    <qemu:arg value='-blockdev'/>
    <qemu:arg value='driver=file,filename=/disk_nfs/nfs/migrate_root.qcow2,node-name=drivefile,auto-read-only=false'/>
    <qemu:arg value='-blockdev'/>
    <qemu:arg value='driver=qcow2,file=drivefile,node-name=drive0'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='virtio-blk-pci,drive=drive0,id=virtio-disk0,bus=pci.0,addr=0x5'/>
  </qemu:commandline>

2) Started the migration and it worked.
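
Note for step 1: for the <qemu:commandline> passthrough to be accepted,
the domain element has to declare the QEMU XML namespace:

  <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>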

Could anyone please clarify, from the libvirt side, what change is required?

Thanks,
Anushree-Mathur


On 04/06/25 6:57 PM, Peter Krempa wrote:
On Wed, Jun 04, 2025 at 14:41:54 +0200, Kevin Wolf wrote:
On 28.05.2025 at 17:34, Peter Xu wrote:
Copy Kevin.

On Wed, May 28, 2025 at 07:21:12PM +0530, Anushree Mathur wrote:
Hi all,


When I try to migrate the guest from host1 to host2 with the following
command line:

date;virsh migrate --live --domain guest1 qemu+ssh://dest/system --verbose
--undefinesource --persistent --auto-converge --postcopy
--copy-storage-all;date

it fails with the following error message:

error: internal error: unable to execute QEMU command 'block-export-add':
Block node is read-only

HOST ENV:

qemu : QEMU emulator version 9.2.2
libvirt : libvirtd (libvirt) 11.1.0
Also seen with upstream QEMU

Steps to reproduce:
1) Start guest1
2) Migrate it with the following command:

date;virsh migrate --live --domain guest1 qemu+ssh://dest/system --verbose
--undefinesource --persistent --auto-converge --postcopy
--copy-storage-all;date

3) It fails as follows:
error: internal error: unable to execute QEMU command 'block-export-add':
Block node is read-only
I assume this is about an inactive block node. Probably on the
destination, but that's not clear to me from the error message.
Yes, this would be on the destination. Libvirt exports the nodes on the
destination; the source connects and does the blockjob.

The destination side is configured the same way as the source side, so
if the source disk is configured as read-write, the destination should
be as well.
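
For reference, the failing step is an NBD export added via QMP on the
destination; a rough sketch of it (the export id here is illustrative,
the node-name is the one from the query-block output below):

  {"execute": "block-export-add",
   "arguments": {"type": "nbd", "id": "migration-disk0",
                 "node-name": "libvirt-1-format", "writable": true}}

A writable export can only be added on top of a node that is read-write,
which is why the command fails with "Block node is read-only".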

Things I analyzed:
1) This issue does not happen if I give the --unsafe option in the virsh
migrate command.
This is weird; this shouldn't have any impact.

What does this translate to on the QEMU command line?
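
One way to check that, assuming libvirt's default log location, is the
per-domain log, which records the full QEMU command line:

  grep -- -blockdev /var/log/libvirt/qemu/guest1.log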

2) The output of qemu-monitor-command also shows ro as false:

virsh qemu-monitor-command guest1 --pretty --cmd '{ "execute": "query-block" }'
It'd be impossible to execute this on the guest due to timing; you'll
need to collect libvirt debug logs to do that:

https://www.libvirt.org/kbase/debuglogs.html#tl-dr-enable-debug-logs-for-most-common-scenario
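
The tl;dr from that page is roughly the following (a commonly quoted
recipe; the exact filter string may differ between libvirt versions):

  # /etc/libvirt/libvirtd.conf
  log_filters="3:remote 4:event 3:util.json 3:rpc 1:*"
  log_outputs="1:file:/var/log/libvirt/libvirtd.log"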

I also think this should eventually be filed in a

{
   "return": [
     {
       "io-status": "ok",
       "device": "",
       "locked": false,
       "removable": false,
       "inserted": {
         "iops_rd": 0,
         "detect_zeroes": "off",
         "image": {
           "virtual-size": 21474836480,
           "filename": "/home/Anu/guest_anu.qcow2",
           "cluster-size": 65536,
           "format": "qcow2",
           "actual-size": 5226561536,
           "format-specific": {
             "type": "qcow2",
             "data": {
               "compat": "1.1",
               "compression-type": "zlib",
               "lazy-refcounts": false,
               "refcount-bits": 16,
               "corrupt": false,
               "extended-l2": false
             }
           },
           "dirty-flag": false
         },
         "iops_wr": 0,
         "ro": false,
         "node-name": "libvirt-1-format",
         "backing_file_depth": 0,
         "drv": "qcow2",
         "iops": 0,
         "bps_wr": 0,
         "write_threshold": 0,
         "encrypted": false,
         "bps": 0,
         "bps_rd": 0,
         "cache": {
           "no-flush": false,
           "direct": false,
           "writeback": true
         },
         "file": "/home/Anu/guest_anu.qcow2"
       },
       "qdev": "/machine/peripheral/virtio-disk0/virtio-backend",
       "type": "unknown"
     }
   ],
   "id": "libvirt-26"
}
I assume this is still from the source where the image is still active.
Yes; on the destination the process wouldn't be around long enough to
call 'virsh qemu-monitor-command'.

Also, it doesn't yet contain the "active" field that was recently
introduced, which could show something about this. I believe you would
still get "read-only": false for an inactive image if it's supposed to
be read-write after the migration completes.
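
For reference, on a QEMU new enough to report it, the "inserted" object
in the query-block output above would carry that flag as well, along the
lines of (illustrative snippet):

         "ro": false,
         "active": true,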

3) The guest doesn't have any readonly element in its XML:

virsh dumpxml guest1 | grep readonly

4) Tried giving wide-open permissions as well:

-rwxrwxrwx. 1 qemu qemu 4.9G Apr 28 15:06 guest_anu.qcow
Is this on the destination? Did you pre-create it yourself? Otherwise
libvirt is pre-creating that image for non-shared-storage migration
(--copy-storage-all), and it should have proper permissions when it's
created.
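
If you did pre-create it manually on the destination, it has to match
the source image's format and virtual size; a sketch using the
virtual-size from the query-block output above (21474836480 bytes = 20G):

  qemu-img create -f qcow2 /home/Anu/guest_anu.qcow2 20G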

5) Checked the permissions of the storage pool as well; they are also proper.

6) Found an older bug similar to this; pasting the link for reference:


https://patchwork.kernel.org/project/qemu-devel/patch/20170811164854.GG4162@localhost.localdomain/
What's happening in detail is more of a virsh/libvirt question. CCing
Peter Krempa, he might have an idea.
Please collect the debug log, at least from the destination side of the
migration. That should show how the VM is prepared and QEMU invoked.


