Not sure if it's something others have seen. I hit this when running tempest.scenario.test_network_basic_ops.TestNetworkBasicOps against master:
2015-01-10 17:45:13.227 5350 DEBUG neutron.plugins.ml2.plugin [req-2ab4b380-cf3a-4663-90c3-a05ef5f4da0f None] Deleting port e5deb014-0063-4d55-8ee3-5ba3524fee14 delete_port /opt/stack/new/neutron/neutron/plugins/ml2/plugin.py:995
2015-01-10 17:45:13.228 5350 DEBUG neutron.openstack.common.lockutils [req-2ab4b380-cf3a-4663-90c3-a05ef5f4da0f ] Created new semaphore "db-access" internal_lock /opt/stack/new/neutron/neutron/openstack/common/lockutils.py:206
2015-01-10 17:45:13.228 5350 DEBUG neutron.openstack.common.lockutils [req-2ab4b380-cf3a-4663-90c3-a05ef5f4da0f ] Acquired semaphore "db-access" lock /opt/stack/new/neutron/neutron/openstack/common/lockutils.py:229
2015-01-10 17:45:13.252 5350 DEBUG neutron.plugins.ml2.plugin [req-2ab4b380-cf3a-4663-90c3-a05ef5f4da0f None] Calling delete_port for e5deb014-0063-4d55-8ee3-5ba3524fee14 owned by network:floatingip delete_port /opt/stack/new/neutron/neutron/plugins/ml2/plugin.py:1043
2015-01-10 17:45:13.254 5350 DEBUG neutron.openstack.common.lockutils [req-2ab4b380-cf3a-4663-90c3-a05ef5f4da0f ] Releasing semaphore "db-access" lock /opt/stack/new/neutron/neutron/openstack/common/lockutils.py:238
2015-01-10 17:45:13.282 5350 ERROR neutron.api.v2.resource [req-2ab4b380-cf3a-4663-90c3-a05ef5f4da0f None] delete failed
2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource Traceback (most recent call last):
2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource   File "/opt/stack/new/neutron/neutron/api/v2/resource.py", line 83, in resource
2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource     result = method(request=request, **args)
2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource   File "/opt/stack/new/neutron/neutron/api/v2/base.py", line 479, in delete
2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource     obj_deleter(request.context, id, **kwargs)
2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource   File "/opt/stack/new/neutron/neutron/db/l3_dvr_db.py", line 198, in delete_floatingip
2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource     self).delete_floatingip(context, id)
2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource   File "/opt/stack/new/neutron/neutron/db/l3_db.py", line 1237, in delete_floatingip
2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource     router_id = self._delete_floatingip(context, id)
2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource   File "/opt/stack/new/neutron/neutron/db/l3_db.py", line 902, in _delete_floatingip
2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource     l3_port_check=False)
2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource   File "/opt/stack/new/neutron/neutron/plugins/ml2/plugin.py", line 1050, in delete_port
2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource     l3plugin.notify_routers_updated(context, router_ids)
2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource   File "/opt/stack/new/neutron/neutron/db/l3_db.py", line 1260, in notify_routers_updated
2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource     context, list(router_ids), 'disassociate_floatingips', {})
2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource TypeError: 'NoneType' object is not iterable

It looks like the code assumes router_ids can never be None, which is clearly what is happening here. Is that a bug? Looking elsewhere in l3_db.py, L3RpcNotifierMixin.notify_routers_updated() does check for an empty router_ids (so that function does expect it to be empty at times), but the list(router_ids) call blows up before execution ever reaches that check.
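For reference, here is a minimal, self-contained sketch of what I think is going wrong. The _notify() helper is just a stand-in I made up; only the list(router_ids) call and the empty-check behaviour come from the trace and from l3_db.py, and the guarded variant is just one possible fix, not a reviewed patch:

    # Stand-in for the RPC notifier; in neutron the empty-check lives in
    # L3RpcNotifierMixin.notify_routers_updated().
    def _notify(context, router_ids, operation, data):
        if router_ids:                      # this empty-check never gets a chance...
            print('notifying routers:', router_ids)

    def notify_routers_updated(context, router_ids):
        # ...because the wrapper converts router_ids with list() first, which
        # raises TypeError: 'NoneType' object is not iterable when it is None.
        _notify(context, list(router_ids), 'disassociate_floatingips', {})

    def notify_routers_updated_guarded(context, router_ids):
        # One possible guard (my assumption): skip the notification entirely
        # when there is nothing to notify. "not router_ids" covers both None
        # and an empty list.
        if not router_ids:
            return
        _notify(context, list(router_ids), 'disassociate_floatingips', {})

    notify_routers_updated_guarded(None, None)   # returns quietly
    notify_routers_updated(None, None)           # reproduces the TypeError above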
This backtrace repeats many, many times in the neutron logs. Thanks for your help. -Sunil