Hello, I have a Grizzly setup in which Cinder runs against an IBM Storwize V3700. Cinder shows weird behavior: every time I create a volume of a given size and attach it to a VM, the VM sees a different size.
For example, I create a 4 GB volume and attach it to a VM, and it shows up as 15 GB. The reported size is different every time; sometimes the volume appears smaller than the size it was created with. While attaching a volume to a VM, I also sometimes get this error on the compute nodes:

[instance: b9f128a9-d3e3-42a1-9511-74868b625b1b] Failed to attach volume 676ef5b1-129b-4d42-b38d-df2005a3d634 at /dev/vdc
2013-09-30 15:17:46.562 31953 TRACE nova.compute.manager [instance: b9f128a9-d3e3-42a1-9511-74868b625b1b]
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2878, in _attach_volume
    mountpoint)
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 981, in attach_volume
    disk_dev)
  File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
    self.gen.next()
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 968, in attach_volume
    virt_dom.attachDeviceFlags(conf.to_xml(), flags)
  File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 187, in doit
    result = proxy_call(self._autowrap, f, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 147, in proxy_call
    rv = execute(f, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 76, in tworker
    rv = meth(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/libvirt.py", line 422, in attachDeviceFlags
    if ret == -1: raise libvirtError('virDomainAttachDeviceFlags() failed', dom=self)
libvirtError: internal error unable to execute QEMU command 'device_add': Duplicate ID 'virtio-disk2' for device

If I then change the mount point from /dev/vdc to some other random mount point, the disk attaches, but the size-mismatch problem remains. Restarting the open-iscsi service and reattaching the volume to the VM resolves the issue. I am attaching my cinder.conf.

Has anyone encountered this problem? Any help would be really appreciated.

--
With Regards,
Ritesh Nanda
<http://www.ericsson.com/>
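As a sanity check while debugging the mismatch, the size the guest sees (e.g. from `blockdev --getsize64 /dev/vdc` inside the VM) can be compared against the size Cinder reports. A minimal sketch; the helper name and tolerance are my own, not part of any OpenStack API:

```python
def sizes_match(cinder_size_gb, guest_bytes, tolerance=0.01):
    """Compare Cinder's reported size (in GB, i.e. 1024**3 bytes) with
    the byte count the guest actually sees for the block device.

    A genuine mismatch -- as opposed to partition/rounding noise --
    should fall far outside the tolerance.
    """
    expected = cinder_size_gb * 1024 ** 3
    return abs(guest_bytes - expected) <= expected * tolerance

# A 4 GB volume that the guest reports as ~15 GB is a real mismatch:
print(sizes_match(4, 4 * 1024 ** 3))   # True  (sizes agree)
print(sizes_match(4, 15 * 1024 ** 3))  # False (the mismatch described above)
```

If the check fails right after attach but passes after an open-iscsi restart, that points at a stale device mapping on the compute node rather than at the Storwize backend itself.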
cinder.conf
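Regarding the "Duplicate ID 'virtio-disk2'" error in the traceback: libvirt derives that alias from the target dev (vdc -> virtio-disk2), so a rough way to spot a stale attachment is to look for repeated target devices in the domain XML. In practice the XML would come from `virsh dumpxml <instance>`; the sample below is made up for illustration:

```shell
# Stand-in for `virsh dumpxml <instance>` output (sample data, not real):
xml='<devices>
  <disk><target dev="vda" bus="virtio"/></disk>
  <disk><target dev="vdc" bus="virtio"/></disk>
  <disk><target dev="vdc" bus="virtio"/></disk>
</devices>'

# List any target dev that appears more than once; a duplicate here
# corresponds to a duplicate virtio-disk ID on the QEMU side.
printf '%s\n' "$xml" | grep -o 'dev="vd[a-z]"' | sort | uniq -d
# prints: dev="vdc"
```

If a dev shows up twice, detaching the volume and cleaning up the stale iSCSI session (which is what restarting open-iscsi effectively does) before reattaching should avoid the QEMU `device_add` failure.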
_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to    : openstack@lists.openstack.org
Unsubscribe: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack