Yeah, indeed it looks like the raise was used improperly. Probably a better thing to do here would be to use excutils.save_and_reraise_exception(), which provides a context manager that lets you control whether the exception should be re-raised or not.
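For illustration, a minimal sketch of how that could look. The attach_device() helper, guest, conf and SEARCHED_PATTERN are hypothetical placeholders and not the actual nova code; the point is only that the exception is always re-raised when the with block exits, unless reraise is explicitly disabled inside it:

    # Minimal sketch, not the actual nova handler. SEARCHED_PATTERN stands
    # in for whatever message the original commit greps for in the error.
    import libvirt

    from oslo_log import log as logging
    from oslo_utils import excutils

    LOG = logging.getLogger(__name__)

    SEARCHED_PATTERN = '...'  # hypothetical placeholder


    def attach_device(guest, conf):
        try:
            guest.attach_device(conf)
        except libvirt.libvirtError as ex:
            # save_and_reraise_exception() re-raises when the with block
            # exits, so the failure always propagates to the caller; the
            # block body is only used to add context for the special case.
            with excutils.save_and_reraise_exception():
                if SEARCHED_PATTERN in str(ex):
                    LOG.warning('Device attach failed with a known '
                                'error: %s', ex)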
** Changed in: nova
   Status: New => Triaged
** Changed in: nova
   Importance: Undecided => Medium
** Also affects: nova/stein
   Importance: Undecided
   Status: New
** Also affects: nova/queens
   Importance: Undecided
   Status: New
** Also affects: nova/rocky
   Importance: Undecided
   Status: New
** Changed in: nova/queens
   Status: New => Triaged
** Changed in: nova/rocky
   Status: New => Triaged
** Changed in: nova/queens
   Importance: Undecided => Medium
** Changed in: nova/stein
   Status: New => Triaged
** Changed in: nova/stein
   Importance: Undecided => Medium
** Changed in: nova/rocky
   Importance: Undecided => Medium

https://bugs.launchpad.net/bugs/1825882

Title:
  Virsh disk attach errors silently ignored

Status in OpenStack Compute (nova):
  Triaged
Status in OpenStack Compute (nova) queens series:
  Triaged
Status in OpenStack Compute (nova) rocky series:
  Triaged
Status in OpenStack Compute (nova) stein series:
  Triaged

Bug description:

  Description
  ===========
  The following commit (1) causes volume attachments that fail due to
  libvirt device attach errors to be silently ignored, with Nova
  reporting the attachment as successful.

  It seems that the original intention of the commit was to log a
  specific condition and re-raise the exception, but if the exception is
  of type libvirt.libvirtError and does not contain the searched
  pattern, the exception is swallowed. If you un-indent the raise
  statement, errors are reported again.

  In our case, ceph/apparmor configuration problems on the compute nodes
  prevented virsh from attaching the device; volumes appeared as
  successfully attached, but the corresponding block device was missing
  in the guest VMs. Other libvirt attach error conditions are ignored as
  well, e.g. already occupied device names ('Target vdb already
  exists'), device busy, etc.

  (1) https://github.com/openstack/nova/commit/78891c2305bff6e16706339a9c5eca99a84e409c

  Steps to reproduce
  ==================
  This is somewhat hacky, but it is a quick way to provoke a virsh
  attach error:
  - virsh detach-disk <domain> vdb
  - update the nova & cinder DBs as if the volume were detached
  - re-attach the volume
  - the volume is marked as attached, but the VM block device is missing

  Expected result
  ===============
  - The error 'libvirtError: Requested operation is not valid: target
    vdb already exists' should be raised, and the volume not attached.

  Actual result
  =============
  - The attach is reported as successful, but the virsh block device is
    not created.

  Environment
  ===========
  - OpenStack version: Queens
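To make the control-flow problem described in the bug concrete, here is a small self-contained illustration. The names (FakeLibvirtError, SEARCHED_PATTERN, the attach_* functions) are hypothetical stand-ins, not the actual nova source; the point is that a raise indented under the pattern check only fires for matching errors, while an un-indented raise propagates every failure:

    # Illustrative reconstruction of the bug, not the actual nova code.
    class FakeLibvirtError(Exception):
        """Stands in for libvirt.libvirtError in this demo."""

    SEARCHED_PATTERN = 'some known error text'  # hypothetical


    def attach_broken(fail_msg):
        try:
            raise FakeLibvirtError(fail_msg)
        except FakeLibvirtError as ex:
            if SEARCHED_PATTERN in str(ex):
                print('known failure: %s' % ex)
                raise  # only reached when the pattern matches
            # every other libvirtError falls through here and is ignored


    def attach_fixed(fail_msg):
        try:
            raise FakeLibvirtError(fail_msg)
        except FakeLibvirtError as ex:
            if SEARCHED_PATTERN in str(ex):
                print('known failure: %s' % ex)
            raise  # unconditional re-raise, as suggested in the bug


    msg = 'Requested operation is not valid: target vdb already exists'
    attach_broken(msg)
    print('broken handler: no exception seen, volume looks attached')
    try:
        attach_fixed(msg)
    except FakeLibvirtError as ex:
        print('fixed handler: error propagated: %s' % ex)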