Like the last comment on the bug says, the message about block migration (drive 
mirroring) indicates that nova is telling libvirt to copy the virtual disks, 
which is not what should happen for ceph or other shared storage.

For ceph just plain live migration should be used, not block migration. It's 
either a configuration issue or a bug in nova.
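If it is configuration, one place to look (a hedged sketch, assuming an older nova release where migration flags were set in nova.conf; the option name and valid flags may differ in your version) is the libvirt migration flags, which should not request non-shared-disk copying when the disks live on ceph:

```ini
# /etc/nova/nova.conf -- illustrative only; check your nova release's docs.
[libvirt]
# Plain live migration for shared storage: the flag list must NOT include
# VIR_MIGRATE_NON_SHARED_INC or VIR_MIGRATE_NON_SHARED_DISK, which tell
# libvirt to block-migrate (drive-mirror) the disks.
live_migration_flag = VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST
```

Also make sure the migration is triggered without the block-migration option (e.g. `nova live-migration <instance>` with no `--block-migrate`), since that can request disk copying regardless of the config.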

Josh


From: "Yuming Ma (yumima)" <yum...@cisco.com>
Sent: Apr 3, 2015 1:27 PM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] live migration fails with image on ceph


Problem: when live-migrating a VM, the migration completes but leaves the VM 
unstable. The VM may become unreachable on the network, or go through a cycle 
where it hangs for ~10 minutes at a time. A hard reboot is the only way to 
recover it.

Related libvirt logs:

2015-03-30 01:18:23.429+0000: 244411: warning : 
qemuMigrationCancelDriveMirror:1383 : Unable to stop block job on 
drive-virtio-disk0

2015-03-30 01:17:41.899+0000: 244408: warning : 
qemuDomainObjEnterMonitorInternal:1175 : This thread seems to be the async job 
owner; entering monitor without asking for a nested job is dangerous


Nova env:
Kernel      : 3.11.0-26-generic
libvirt-bin : 1.1.1-0ubuntu11
ceph-common : 0.67.9-1precise

Ceph:
Kernel      : 3.13.0-36-generic
ceph        : 0.80.7-1precise
ceph-common : 0.80.7-1precise


Saw a post here (https://bugs.dogfood.paddev.net/mos/+bug/1371130) suggesting 
this might have something to do with libvirt migration of RBD-backed images, 
but it's not clear exactly how Ceph is involved or how to resolve it. Has 
anyone run into this before?


Thanks.


— Yuming