Hi,

Cinder with a GlusterFS backend is hitting the error below as part of the test_volume_boot_pattern tempest test case (at the end of the test case, when it deletes the snapshot):

"/usr/local/lib/python2.7/dist-packages/libvirt.py", line 792, in blockRebase
2015-04-08 07:22:44.376 32701 TRACE nova.virt.libvirt.driver     if ret == -1: raise libvirtError ('virDomainBlockRebase() failed', dom=self)
2015-04-08 07:22:44.376 32701 TRACE nova.virt.libvirt.driver libvirtError: *Requested operation is not valid: domain is not running*
2015-04-08 07:22:44.376 32701 TRACE nova.virt.libvirt.driver

More details are in the LP bug [1].

Looking closely at the test case: it waits for the instance to turn OFF, after which cleanup starts and tries to delete the snapshot. Since the cinder volume is in the attached (in-use) state, cinder lets nova take control of the snapshot delete operation, and nova fails because it cannot do a blockRebase while the domain is offline.

Questions:

1) Is this a valid scenario to test? Some say yes; I am not sure, since the test makes sure the instance is OFF before the snapshot is deleted, and that does not work for fs-backed drivers (such as GlusterFS), which use hypervisor-assisted snapshots that need the domain to be active.

2) If this is a valid scenario, does it mean libvirt.py in nova should be modified NOT to raise an error, but to continue with the snapshot delete (as if the volume were not attached) and take care of the domain XML (so that the domain is still bootable after the snapshot deletion)? Is that the way to go? (A rough sketch of what I mean is at the end of this mail.)

Appreciate suggestions/comments.

thanks,
deepak

[1]: https://bugs.launchpad.net/cinder/+bug/1441050
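To make question 2 a bit more concrete, here is a rough, untested sketch of what I have in mind. This is not actual nova code: the function and argument names (delete_assisted_snapshot, disk_target, active_file, new_base) are made up, and the offline branch uses qemu-img rebase as a stand-in for whatever the real fix in nova's libvirt driver would need to do (including rewriting the persistent domain XML, which is only noted in a comment here):

    import subprocess

    def delete_assisted_snapshot(dom, disk_target, active_file, new_base):
        """Collapse the snapshot overlay for one disk of a domain.

        'dom' is a libvirt.virDomain. If the domain is running, use the
        hypervisor-assisted path (virDomainBlockRebase); if it is shut
        off, rewrite the backing chain on disk instead of raising
        'domain is not running'.
        """
        if dom.isActive():
            # Online path (what nova does today): qemu pulls data from the
            # backing chain so the active image no longer depends on the
            # snapshot file. The caller would then wait for the block job
            # to complete before removing the snapshot file.
            dom.blockRebase(disk_target, new_base, 0, 0)
        else:
            # Offline path (the idea in question 2): re-point the active
            # image's backing file directly with qemu-img, roughly the
            # offline counterpart of blockRebase.
            subprocess.check_call(
                ['qemu-img', 'rebase', '-b', new_base, active_file])
            # The persistent domain XML would also need to be updated so
            # the <disk> source no longer references the snapshot file and
            # the domain is still bootable afterwards (not shown here).

Whether nova should do something like this itself, or simply refuse and let cinder handle the delete when the instance is off, is exactly what I am asking about.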