> > Andrey,
> >
> > Can you give instructions on how to reproduce please?
Please find answers inline:

> - qemu.git codebase (if you have any patches relative to a
>   given commit id, please provide the patches).

I rolled back to the bare 2.1 release to reproduce; on kernel 3.10 I hit the issue both with and without the patches from my previous message. With kernel 3.16.0 I am no longer able to reproduce the issue on bare 2.1. Both virtio-blk dataplane and regular virtio-blk are affected, though the dataplane variant hits the issue every time after a single migration when the HV timer is enabled, so I would suggest testing against that configuration (as in my argument string below). With the hvapic option set, the emulator tends to hit the issue more frequently than without it.

> - qemu command line.

qemu-system-x86_64 -enable-kvm -name vm29107 -S \
  -machine pc-i440fx-2.1,accel=kvm,usb=off \
  -cpu qemu64,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1000 \
  -bios /usr/share/seabios/bios.bin-1.7.4 \
  -m 512 -realtime mlock=off \
  -smp 12,sockets=1,cores=12,threads=12 \
  -numa node,nodeid=0,cpus=0-11,mem=512 \
  -uuid 53646494-fe6c-4b5d-b6d0-c333b4f20582 \
  -no-user-config -nodefaults -device sga \
  -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/vm29107.monitor,server,nowait \
  -mon chardev=charmonitor,id=monitor,mode=control \
  -rtc base=utc -no-hpet -no-shutdown \
  -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 \
  -boot strict=on \
  -device usb-ehci,id=usb,bus=pci.0,addr=0x5 \
  -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x6 \
  -drive file=rbd:dev-rack2/vm29107-WfV:id=qemukvm:key=secret:auth_supported=cephx\;none:mon_host=10.6.0.1\:6789\;10.6.0.3\:6789\;10.6.0.4\:6789,if=none,id=drive-virtio-disk0,format=raw,cache=writeback,aio=native \
  -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
  -netdev tap,fds=26:27:28:29,id=hostnet0,vhost=on,vhostfds=30:31:32:33 \
  -device virtio-net-pci,mq=on,vectors=10,netdev=hostnet0,id=net0,mac=52:54:00:10:06:9a,bus=pci.0,addr=0x3 \
  -netdev tap,fds=34:35:36:37,id=hostnet1,vhost=on,vhostfds=38:39:40:41 \
  -device virtio-net-pci,mq=on,vectors=10,netdev=hostnet1,id=net1,mac=52:54:00:10:06:9b,bus=pci.0,addr=0x4 \
  -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 \
  -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/vm29107.sock,server,nowait \
  -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.1 \
  -vnc 0.0.0.0:0 -k en-us -device VGA,id=video0,bus=pci.0,addr=0x2 \
  -object iothread,id=vm29107blk0 \
  -set device.virtio-disk0.config-wce=off \
  -set device.virtio-disk0.scsi=off \
  -set device.virtio-disk0.iothread=vm29107blk0 \
  -m 512,slots=62,maxmem=16384M \
  -object memory-backend-ram,id=mem0,size=256M -device pc-dimm,id=dimm0,node=0,memdev=mem0 \
  -object memory-backend-ram,id=mem1,size=256M -device pc-dimm,id=dimm1,node=0,memdev=mem1 \
  -object memory-backend-ram,id=mem2,size=256M -device pc-dimm,id=dimm2,node=0,memdev=mem2 \
  -object memory-backend-ram,id=mem3,size=256M -device pc-dimm,id=dimm3,node=0,memdev=mem3 \
  -object memory-backend-ram,id=mem4,size=256M -device pc-dimm,id=dimm4,node=0,memdev=mem4 \
  -object memory-backend-ram,id=mem5,size=256M -device pc-dimm,id=dimm5,node=0,memdev=mem5 \
  -object memory-backend-ram,id=mem6,size=256M -device pc-dimm,id=dimm6,node=0,memdev=mem6 \
  -object memory-backend-ram,id=mem7,size=256M -device pc-dimm,id=dimm7,node=0,memdev=mem7 \
  -object memory-backend-ram,id=mem8,size=256M -device pc-dimm,id=dimm8,node=0,memdev=mem8 \
  -object memory-backend-ram,id=mem9,size=256M -device pc-dimm,id=dimm9,node=0,memdev=mem9 \
  -object memory-backend-ram,id=mem10,size=256M -device pc-dimm,id=dimm10,node=0,memdev=mem10 \
  -object memory-backend-ram,id=mem11,size=256M -device pc-dimm,id=dimm11,node=0,memdev=mem11 \
  -object memory-backend-ram,id=mem12,size=256M -device pc-dimm,id=dimm12,node=0,memdev=mem12 \
  -object memory-backend-ram,id=mem13,size=256M -device pc-dimm,id=dimm13,node=0,memdev=mem13

> - how to recreate guest disk contents.
In my case it is just a bare installation of W2008R2 x64; I can share the image off-list if necessary.

> - how to recreate workload at which point migration
>   fails.

Migration does not fail by itself; the VM's disk seemingly does. Reproduction is quite simple: just boot the VM, migrate it N times, and try to log in. In the failing case the login hangs on the progress screen (or you may log in beforehand and check disk availability by any other convenient method).

> - migration command relative to last item.

It is p2p libvirt live migration:

for i in $(seq 1 6) ; do virsh migrate --live --persistent --undefinesource vm29107 qemu+tcp://twin2/system ; sleep 5; ssh twin2 "virsh migrate --live --persistent --undefinesource vm29107 qemu+tcp://twin0/system" ; done

> > Thanks
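Not part of the original report, but the ping-pong loop above can be wrapped so each round also probes guest liveness through the guest agent (the org.qemu.guest_agent.1 channel is already configured in the command line above). This is only a sketch: the probe_guest helper and the RUN_MIGRATION guard are my additions, and the VM name, URIs, and round count are taken from the reproduction commands — adjust to your environment.

```shell
#!/bin/sh
# Sketch: ping-pong live migration with a guest-agent liveness probe
# after each hop. Values below come from the original reproduction
# commands; set RUN_MIGRATION=1 to actually perform the migrations.
VM=vm29107
SRC=qemu+tcp://twin0/system
DST=qemu+tcp://twin2/system
ROUNDS=6

probe_guest() {
    # guest-ping returns {"return":{}} when the agent answers; a guest
    # whose disk has hung typically stops responding to the agent.
    virsh -c "$1" qemu-agent-command "$VM" --timeout 30 \
        '{"execute":"guest-ping"}'
}

run_rounds() {
    i=1
    while [ "$i" -le "$ROUNDS" ]; do
        virsh -c "$SRC" migrate --live --persistent --undefinesource \
            "$VM" "$DST" || return 1
        probe_guest "$DST" || { echo "guest unresponsive after round $i (forward)"; return 1; }
        sleep 5
        virsh -c "$DST" migrate --live --persistent --undefinesource \
            "$VM" "$SRC" || return 1
        probe_guest "$SRC" || { echo "guest unresponsive after round $i (back)"; return 1; }
        i=$((i + 1))
    done
    echo "guest still responsive after $ROUNDS round trips"
}

if [ "${RUN_MIGRATION:-0}" = "1" ]; then
    run_rounds
fi
```

Probing right after each hop narrows down which migration broke the disk, instead of only discovering the hang at the final login attempt.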