Hi Wei,

Thanks for sharing. I discovered the issue due to failing test_volumes tests which were previously passing on Trillian's CentOS7-based KVM environment: https://github.com/apache/cloudstack/pull/1837

Trillian-deployed environments are all automated, and unless libvirt/qemu packages were changed/updated from the upstream repos, no change was made to Trillian/CentOS7. I'll have a look at the actual test as well.

One thing we can do is that after detaching the disk, CloudStack checks the domain's XML to see whether the disk was actually detached. This would reflect and notify the admin/user whether the operation actually succeeded; a rough sketch of the idea is below.
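For illustration, here is a minimal sketch of such a post-detach check against the libvirt Java bindings (org.libvirt) that the agent already uses. The class name, method name, and retry values are hypothetical, not existing CloudStack code:

import org.libvirt.Connect;
import org.libvirt.Domain;
import org.libvirt.LibvirtException;

public class DetachVerifier {
    // Poll the live domain XML until the target device (e.g. "vdb") is gone,
    // since the detach call can be acknowledged before the guest releases it.
    public static boolean verifyDetach(Connect conn, String vmName, String targetDev)
            throws LibvirtException, InterruptedException {
        Domain dm = conn.domainLookupByName(vmName);
        for (int i = 0; i < 10; i++) {
            String xml = dm.getXMLDesc(0);
            if (!xml.contains("<target dev='" + targetDev + "'")) {
                return true; // disk entry removed from the domain XML
            }
            Thread.sleep(1000); // give the guest time to release the device
        }
        return false; // detach silently failed; answer with an error, not success
    }
}

If the device is still present after the retries, the agent could answer the detach command with a failure so the management server and the user are not told the volume was detached when it was not.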
Thanks Wido, I think it could be related to libvirt/OS as well.

Regards.

________________________________
From: Wei ZHOU <ustcweiz...@gmail.com>
Sent: 21 December 2016 17:26:19
To: dev@cloudstack.apache.org
Subject: Re: [DISCUSS][KVM][BUG] Detaching of volume fails on KVM

Hi Rohit,

I do not think it is an issue in CloudStack. We have had this issue for a long time, and it still exists now.

I did a test just now, using virsh commands, not CloudStack.

================this is the working one===================
root@KVM015:~# virsh domblklist 39
Target     Source
------------------------------------------------
vda        /mnt/1dcbc42c-99bc-3276-9d86-4ad81ef1ad8e/75d35578-ed6d-4019-8239-c2d3ff87af25
hdc        -

root@KVM015:~# virsh attach-disk 39 /mnt/f773b66d-fd8c-3576-aa37-e3f0e685b183/ba11047c-ce69-4982-892a-38b343b4f841 vdb
Disk attached successfully

root@KVM015:~# virsh domblklist 39
Target     Source
------------------------------------------------
vda        /mnt/1dcbc42c-99bc-3276-9d86-4ad81ef1ad8e/75d35578-ed6d-4019-8239-c2d3ff87af25
vdb        /mnt/f773b66d-fd8c-3576-aa37-e3f0e685b183/ba11047c-ce69-4982-892a-38b343b4f841
hdc        -

root@KVM015:~# virsh detach-disk 39 vdb
Disk detached successfully

root@KVM015:~# virsh domblklist 39
Target     Source
------------------------------------------------
vda        /mnt/1dcbc42c-99bc-3276-9d86-4ad81ef1ad8e/75d35578-ed6d-4019-8239-c2d3ff87af25
hdc        -

============this is not working================
root@KVM015:~# virsh domblklist 26
Target     Source
------------------------------------------------
vda        /mnt/f773b66d-fd8c-3576-aa37-e3f0e685b183/2311416f-b778-4490-8365-cfbad2214842
vdb        /mnt/f773b66d-fd8c-3576-aa37-e3f0e685b183/ba11047c-ce69-4982-892a-38b343b4f841
hdc        -

root@KVM015:~# virsh detach-disk i-2-7585-VM vdb
Disk detached successfully

root@KVM015:~# virsh domblklist 26
Target     Source
------------------------------------------------
vda        /mnt/f773b66d-fd8c-3576-aa37-e3f0e685b183/2311416f-b778-4490-8365-cfbad2214842
vdb        /mnt/f773b66d-fd8c-3576-aa37-e3f0e685b183/ba11047c-ce69-4982-892a-38b343b4f841
hdc        -

root@KVM015:~# virsh detach-disk i-2-7585-VM vdb
Disk detached successfully

root@KVM015:~# virsh domblklist 26
Target     Source
------------------------------------------------
vda        /mnt/f773b66d-fd8c-3576-aa37-e3f0e685b183/2311416f-b778-4490-8365-cfbad2214842
vdb        /mnt/f773b66d-fd8c-3576-aa37-e3f0e685b183/ba11047c-ce69-4982-892a-38b343b4f841
hdc        -

root@KVM015:~# virsh attach-disk 26 /mnt/f773b66d-fd8c-3576-aa37-e3f0e685b183/ba11047c-ce69-4982-892a-38b343b4f841 vdb
error: Failed to attach disk
error: operation failed: target vdb already exists
================end==================

I believe this is highly related to the OS and configuration in the VM, not the hypervisor or CloudStack. In my testing, I used Ubuntu 12.04 as the hypervisor; it works if the VM OS is CentOS7/CentOS6/Ubuntu 16.04, but it does not work if the VM OS is Ubuntu 12.04.
-Wei

2016-12-21 12:00 GMT+01:00 Rohit Yadav <rohit.ya...@shapeblue.com>:
> All,
>
> Based on results from recent Trillian test runs [1], I've discovered that
> on KVM (CentOS7) based environments, detaching a volume fails to update
> the virt/domain XML, i.e. the disk entry is not removed. So, while the
> agent and cloudstack-mgmt server report success, the entry in the XML is
> not removed. When the volume is attached again, we see an error like:
>
> Failed to attach volume xxx to VM VM-yyyy; org.libvirt.LibvirtException:
> XML error: target 'vdb' duplicated for disk sources
> '/mnt/8a70be4e-4c3c-38e5-aea2-4b38fef83fd5/af85ff7e-a452-43de-8c6b-948dc44aae21'
> and
> '/mnt/8a70be4e-4c3c-38e5-aea2-4b38fef83fd5/af85ff7e-a452-43de-8c6b-948dc44aae21'
>
> This is seen in the agent logs:
>
> Dec 21 10:46:35 pr1837-t692-kvm-centos7-kvm2 sh[27400]: DEBUG
> [kvm.storage.KVMStorageProcessor] (agentRequest-Handler-2:)
> (logid:0648ae70) Detaching device: <disk device='disk' type='file'>
> Dec 21 10:46:35 pr1837-t692-kvm-centos7-kvm2 sh[27400]: <driver
> name='qemu' type='qcow2' cache='none' />
> Dec 21 10:46:35 pr1837-t692-kvm-centos7-kvm2 sh[27400]: <source
> file='/mnt/8a70be4e-4c3c-38e5-aea2-4b38fef83fd5/af85ff7e-a452-43de-8c6b-948dc44aae21'/>
> Dec 21 10:46:35 pr1837-t692-kvm-centos7-kvm2 sh[27400]: <target dev='vdb'
> bus='virtio'/>
> Dec 21 10:46:35 pr1837-t692-kvm-centos7-kvm2 sh[27400]: </disk>
>
> While the above completes successfully, this is still seen in the VM's
> dumped XML:
>
> <disk type='file' device='disk'>
>   <driver name='qemu' type='qcow2' cache='none'/>
>   <source file='/mnt/8a70be4e-4c3c-38e5-aea2-4b38fef83fd5/af85ff7e-a452-43de-8c6b-948dc44aae21'/>
>   <backingStore/>
>   <target dev='vdb' bus='virtio'/>
>   <serial>af85ff7ea45243de8c6b</serial>
>   <alias name='virtio-disk1'/>
>   <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
> </disk>
>
> Steps to reproduce:
> 1. Deploy a VM, create a data volume disk and attach it to the VM.
> 2. Detach the volume.
> 3. Attach the volume to the same VM again; the exception is caught.
>
> Thoughts, comments?
>
> [1] https://github.com/apache/cloudstack/pull/1837
>
> Regards.
>
> rohit.ya...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London WC2N 4HS, UK
> @shapeblue
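For completeness, the reproduction can also be driven at the libvirt Java API level rather than through virsh. Below is a rough, untested sketch; the connection URI is an assumption, while the VM name and disk path are taken from the sessions above:

import org.libvirt.Connect;
import org.libvirt.Domain;
import org.libvirt.LibvirtException;

public class DetachRepro {
    public static void main(String[] args) throws LibvirtException {
        // Connect to the local hypervisor (URI assumed, adjust as needed).
        Connect conn = new Connect("qemu:///system");
        Domain dm = conn.domainLookupByName("i-2-7585-VM");

        // Minimal disk definition matching the entry in the domain XML;
        // libvirt matches the device to detach by its target dev.
        String diskXml =
            "<disk type='file' device='disk'>" +
            "<driver name='qemu' type='qcow2' cache='none'/>" +
            "<source file='/mnt/f773b66d-fd8c-3576-aa37-e3f0e685b183/ba11047c-ce69-4982-892a-38b343b4f841'/>" +
            "<target dev='vdb' bus='virtio'/>" +
            "</disk>";

        dm.detachDevice(diskXml); // returns without error on the affected guests...
        boolean stillPresent = dm.getXMLDesc(0).contains("dev='vdb'");
        System.out.println(stillPresent
            ? "detach silently failed: vdb still in the domain XML"
            : "detach succeeded: vdb removed from the domain XML");
    }
}

On an affected guest the first branch is printed, and a subsequent attach of the same disk fails with the "target vdb already exists" error shown above.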