As far as I know, bridge_mappings is used to let neutron know which bridge should be used for the (possibly more than one) external network(s).
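A minimal sketch of how that mapping is usually written, reusing the
labels from the snippets further down in this thread (illustrative only,
not taken from the actual deployment):

    [ovs]
    # format: <physical_network_label>:<ovs_bridge>
    # the label on the left is what a provider or external network refers
    # to via provider:physical_network; the bridge on the right is where
    # the OVS agent expects the physical interface for that network to be
    # attached
    bridge_mappings = default:br0,extnet1:br-ex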
If someone else could jump in and give a bit more detail, I'd appreciate
that, too.

Regards,

	Uwe

On 14.12.2014 at 20:55, Gonzalo Aguilar Delgado wrote:
> Hi Uwe,
>
> I actually started directly with neutron and never ran the legacy
> networking.
>
> But I suppose there is old config lying around. I still think that
> bridge_mappings is needed for the VLAN config I use. Every paper
> describes a GRE config, but I still feel more comfortable using VLANs.
>
> Any other help with this issue? Can someone confirm whether I can get
> rid of the directives in both configs?
>
> I suppose I cannot, because they take effect when the OVS plugin is
> started.
>
> Thank you in advance.
>
>
> On Sun, 14 Dec 2014 at 8:40, Uwe Sauter <uwe.sauter...@gmail.com> wrote:
>> Hi,
>>
>> I presume that you upgraded from an older version that used
>> nova-network (now called legacy networking). Using neutron means that
>> VMs aren't connected to br0 directly any more, as there is a whole
>> virtual networking infrastructure in place. To give a small overview:
>> on a compute node a VM connects to br-int (the integration bridge).
>> This bridge is itself connected through a virtual cable to br-tun
>> (the tunneling bridge), which also has a physical interface assigned
>> that allows traffic to flow to the network node. On the network node
>> there is likewise a br-tun with a physical interface attached, through
>> which traffic enters the node. br-tun is virtually connected to br-ex,
>> which has a separate physical interface attached that connects to "the
>> outside", meaning the networking infrastructure outside your cloud. I
>> cannot help you with the configuration issue but recommend that you
>> familiarize yourself with neutron.
>>
>> Regards,
>>
>> 	Uwe
>>
>> On 14.12.2014 at 19:36, Gonzalo Aguilar Delgado wrote:
>>
>> Hi all,
>>
>> I'm installing a new compute node from scratch and reviewing all the
>> old config. I've found two settings that seem identical, one in the
>> ml2 plugin and one in openvswitch, but I don't really understand why
>> they are there.
>>
>> ovs_neutron_plugin.ini:
>>     bridge_mappings = default:br0,extnet1:br-ex
>>
>> ml2/ml2_conf.ini:
>>     [ovs]
>>     bridge_mappings = default:br0,extnet1:br-ex
>>
>> It seems strange to me that the setting is in both places. I think
>> this is the result of upgrading without taking much care to remove
>> old config. But it is also strange that everything works with the
>> bridges br0 and br-ex having no physical interface: they seem to do
>> nothing, yet they need to be there. I would also expect VMs to be
>> attached to br0 (default), but they are not; they are attached to
>> br-int (the integration bridge), which seems correct to me, since
>> that is how it is described here:
>> https://openstack.redhat.com/Networking_in_too_much_detail
>> and it works fine. So what is the purpose of these bridges?
>>
>> Here are the versions: neutron 2.3.4, nova 2.17.0
>>
>> Best regards,
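For the VLAN setup mentioned in the thread, a minimal ML2 plus Open
vSwitch agent configuration might look roughly like the sketch below.
The physical network labels are reused from the snippets above; the VLAN
range and the flat network assignment are made-up placeholders, not
values from the actual deployment:

    ml2_conf.ini (read by neutron-server):
        [ml2]
        type_drivers = flat,vlan
        tenant_network_types = vlan
        mechanism_drivers = openvswitch

        [ml2_type_vlan]
        # placeholder range for tenant VLANs on physical network "default"
        network_vlan_ranges = default:100:199

        [ml2_type_flat]
        # allow a flat external network on physical network "extnet1"
        flat_networks = extnet1

    OVS agent config (ovs_neutron_plugin.ini, or whichever file the agent
    is started with on a given distribution):
        [ovs]
        bridge_mappings = default:br0,extnet1:br-ex

The [ovs] section is only read by neutron-openvswitch-agent, so which
file it ends up in depends on which config files the agent is pointed
at; having the same line in both files is redundant rather than harmful,
as far as I can tell.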
_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack