Hey, thanks for the answer. It may be that I did not fully understand the networking concept here. The OVS on host node-3 is also controlled by OpenDaylight, and OpenDaylight sets up the external network as well, but it is still a flat network without segmentation. As far as I understand it, this is the port which connects node-3 to the external network, yet networking-odl has refused to bind this port from the very beginning.
That is my understanding, but I may not be entirely correct.

BR Nikolas

> -----Original Message-----
> From: Rui Zang [mailto:rui.z...@foxmail.com]
> Sent: Thursday, August 18, 2016 9:23 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: Vishal Thapar; Nikolas Hermanns; Michal Skalski; neutron-d...@lists.opendaylight.org
> Subject: Re: [openstack-dev] [Neutron][networking-odl] New error Mitaka together with ODL-Beryllium
>
> Hi Nikolas,
>
> First of all, neutron-...@lists.opendaylight.org (copied) might be a better
> place to ask networking-odl questions.
>
> It seems that the external network you described is not managed by
> OpenDaylight, so it failed port binding.
>
> You probably want to configure multiple mechanism drivers: say, if
> physnet1 is connected by OVS br-xxx on node-3.domain.tld, you could run
> the OVS agent on that host and configure bridge_mappings correctly. The
> openvswitch mechanism driver would then succeed at binding the port.
>
> Thanks,
> Zang, Rui
>
> On 8/17/2016 7:38 PM, Nikolas Hermanns wrote:
> > Hey Networking-ODL folks,
> >
> > I just set up a Mirantis 9.0 release together with OpenDaylight Beryllium.
> > Using networking-odl v2 I constantly see the error:
> >
> > 2016-08-17 11:28:07.927 4040 ERROR neutron.plugins.ml2.managers
> > [req-7e834676-81b4-479b-ad45-fa39f0fabed3 - - - - -] Failed to bind
> > port faeaa465-6f08-4097-b173-48636cc71539 on host node-3.domain.tld
> > for vnic_type normal using segments [{'segmentation_id': None,
> > 'physical_network': u'physnet1', 'id':
> > u'58d9518c-5664-4099-bcd1-b7818bea853b', 'network_type': u'flat'}]
> > 2016-08-17 11:28:07.937 4040 ERROR networking_odl.ml2.network_topology
> > [req-7e834676-81b4-479b-ad45-fa39f0fabed3 - - - - -] Network topology
> > element has failed binding port:
> > 2016-08-17 11:28:07.937 4040 ERROR networking_odl.ml2.network_topology
> > Traceback (most recent call last):
> > 2016-08-17 11:28:07.937 4040 ERROR networking_odl.ml2.network_topology
> >   File "/usr/local/lib/python2.7/dist-packages/networking_odl/ml2/network_topology.py", line 117, in bind_port
> > 2016-08-17 11:28:07.937 4040 ERROR networking_odl.ml2.network_topology
> >     port_context, vif_type, self._vif_details)
> > 2016-08-17 11:28:07.937 4040 ERROR networking_odl.ml2.network_topology
> >   File "/usr/local/lib/python2.7/dist-packages/networking_odl/ml2/ovsdb_topology.py", line 172, in bind_port
> > 2016-08-17 11:28:07.937 4040 ERROR networking_odl.ml2.network_topology
> >     raise ValueError('Unable to find any valid segment in given context.')
> > 2016-08-17 11:28:07.937 4040 ERROR networking_odl.ml2.network_topology
> > ValueError: Unable to find any valid segment in given context.
> > 2016-08-17 11:28:07.937 4040 ERROR networking_odl.ml2.network_topology
> > 2016-08-17 11:28:07.938 4040 ERROR networking_odl.ml2.network_topology
> > [req-7e834676-81b4-479b-ad45-fa39f0fabed3 - - - - -] Unable to bind
> > port element for given host and valid VIF types:
> > 2016-08-17 11:28:07.939 4040 ERROR neutron.plugins.ml2.managers
> > [req-7e834676-81b4-479b-ad45-fa39f0fabed3 - - - - -] Failed to bind
> > port faeaa465-6f08-4097-b173-48636cc71539 on host node-3.domain.tld
> > for vnic_type normal using segments [{'segmentation_id': None,
> > 'physical_network': u'physnet1', 'id':
> > u'58d9518c-5664-4099-bcd1-b7818bea853b', 'network_type': u'flat'}]
> >
> > Looking at the code I saw that you can only bind ports which have a
> > valid segmentation:
> > /usr/local/lib/python2.7/dist-packages/networking_odl/ml2/ovsdb_topology.py(151) bind_port()
> >
> >     def bind_port(self, port_context, vif_type, vif_details):
> >
> >         port_context_id = port_context.current['id']
> >         network_context_id = port_context.network.current['id']
> >         # Bind port to the first valid segment
> >         for segment in port_context.segments_to_bind:
> >             if self._is_valid_segment(segment):  # <-------
> >                 # Guest best VIF type for given host
> >                 vif_details = self._get_vif_details(
> >                     vif_details=vif_details,
> >                     port_context_id=port_context_id,
> >                     vif_type=vif_type)
> >                 LOG.debug(
> >                     'Bind port with valid segment:\n'
> >                     '\tport: %(port)r\n'
> >                     '\tnetwork: %(network)r\n'
> >                     '\tsegment: %(segment)r\n'
> >                     '\tVIF type: %(vif_type)r\n'
> >                     '\tVIF details: %(vif_details)r',
> >                     {'port': port_context_id,
> >                      'network': network_context_id,
> >                      'segment': segment, 'vif_type': vif_type,
> >                      'vif_details': vif_details})
> >                 port_context.set_binding(
> >                     segment[driver_api.ID], vif_type, vif_details,
> >                     status=n_const.PORT_STATUS_ACTIVE)
> >                 return
> >
> >         raise ValueError('Unable to find any valid segment in given context.')
> >
> > A valid segmentation is defined by:
> >
> >     [constants.TYPE_LOCAL,
> >      constants.TYPE_GRE, constants.TYPE_VXLAN,
> >      constants.TYPE_VLAN]
> >
> > The port which I am trying to bind here is a port on an external
> > network, which is flat since we do not have segmentation for external
> > networks. Any idea why this was changed so that I can no longer bind
> > this port?
> >
> > BR Nikolas
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
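[Editorial note] Rui's suggestion (run the OVS agent on node-3 and let the openvswitch mechanism driver bind what ODL refuses) would look roughly like the following configuration sketch. The mechanism-driver entry-point name, file paths, and the br-ex bridge name are assumptions that depend on the networking-odl version and the deployment; adjust them to match yours:

```ini
# Sketch only -- names below are illustrative, not taken from the thread.

# /etc/neutron/plugins/ml2/ml2_conf.ini (neutron server)
[ml2]
# ODL is tried first; openvswitch acts as a fallback for ports it
# cannot bind, e.g. on the flat external network.
mechanism_drivers = opendaylight,openvswitch
type_drivers = flat,vlan,vxlan

[ml2_type_flat]
flat_networks = physnet1

# OVS agent configuration on node-3.domain.tld
[ovs]
# Map the flat provider network onto the local external bridge.
bridge_mappings = physnet1:br-ex
```

With a mapping for physnet1 in place, the openvswitch driver can bind the flat segment that the ODL driver rejects.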
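[Editorial note] To make the failure above concrete, here is a minimal, standalone sketch of the segment check that the `<-------` arrow in the quoted `bind_port` points at. It is not the actual networking-odl module; the constant strings mirror neutron's network-type constants, and `is_valid_segment` stands in for the driver's `_is_valid_segment` method:

```python
# Standalone sketch of the validity check in ovsdb_topology.py (not the
# real networking-odl code). Constant values mirror neutron's
# plugin constants ('local', 'gre', 'vxlan', 'vlan', 'flat').
TYPE_LOCAL = 'local'
TYPE_GRE = 'gre'
TYPE_VXLAN = 'vxlan'
TYPE_VLAN = 'vlan'
TYPE_FLAT = 'flat'

# The list quoted in the mail: note that TYPE_FLAT is absent.
VALID_TYPES = [TYPE_LOCAL, TYPE_GRE, TYPE_VXLAN, TYPE_VLAN]


def is_valid_segment(segment):
    """Stand-in for _is_valid_segment: accept only listed network types."""
    return segment.get('network_type') in VALID_TYPES


# The exact segment from the error log above:
external_segment = {
    'segmentation_id': None,
    'physical_network': 'physnet1',
    'id': '58d9518c-5664-4099-bcd1-b7818bea853b',
    'network_type': TYPE_FLAT,
}

# 'flat' is not in VALID_TYPES, so the loop in bind_port never reaches
# set_binding and falls through to the ValueError in the traceback.
print(is_valid_segment(external_segment))  # → False
```

This is why every flat external network fails in this code path: the segment itself is well-formed, but its `network_type` is simply not in the accepted list.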