On Wed, May 7, 2014 at 10:40 PM, Dietmar Maurer <diet...@proxmox.com> wrote:
> We use corosync, and multicast cluster communication works fine with the
> following configuration (Debian/Proxmox):
>
> -------config1-(works)---------------
> auto vmbr1
> iface vmbr1 inet static
>     address 10.11.12.1
>     netmask 255.255.255.0
>     ovs_type OVSBridge
>     ovs_ports eth1
> --------------------------------------------
>
> But it does not work when I configure an extra OVSIntPort to assign the IP:
>
> -------config2-(fails)--------------------
> allow-vmbr1 test1
> iface test1 inet static
>     address 10.11.12.1
>     netmask 255.255.255.0
>     ovs_type OVSIntPort
>     ovs_bridge vmbr1
>
> auto vmbr1
> iface vmbr1 inet manual
>     ovs_type OVSBridge
>     ovs_ports eth1 test1
> --------------------------------------------

The above config is a little different from what we recommend in
debian/openvswitch-switch.README.Debian in the repo. It probably works in
your case because Proxmox integrates OVS in a different style, with its
own startup scripts.
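For reference, if I recall README.Debian correctly, the recommended style keys both the bridge and the internal port off "allow-" stanzas (allow-ovs for the bridge, allow-<bridge> for its ports) rather than "auto", so that the OVS ifupdown hooks bring them up in the right order. Roughly (a sketch from memory; please check the copy shipped with your package):

```
allow-ovs vmbr1
iface vmbr1 inet manual
    ovs_type OVSBridge
    ovs_ports eth1 test1

allow-vmbr1 test1
iface test1 inet static
    address 10.11.12.1
    netmask 255.255.255.0
    ovs_type OVSIntPort
    ovs_bridge vmbr1
```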
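To see what the two ifupdown configs actually create, they map onto roughly these ovs-vsctl commands (a sketch; the names vmbr1/eth1/test1 and the /24 prefix are taken from your mail):

```shell
# config1: bridge with eth1 attached; the IP sits on the bridge's
# own internal interface (also named vmbr1).
ovs-vsctl add-br vmbr1
ovs-vsctl add-port vmbr1 eth1
ip addr add 10.11.12.1/24 dev vmbr1

# config2: same bridge, plus an extra internal port test1 that
# carries the IP instead of the bridge interface.
ovs-vsctl add-br vmbr1
ovs-vsctl add-port vmbr1 eth1
ovs-vsctl add-port vmbr1 test1 -- set Interface test1 type=internal
ip addr add 10.11.12.1/24 dev test1
```

Running 'ovs-vsctl show' after each should make the difference in the bridge layout visible.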
> Please can someone explain the difference between those 2 configurations?
> And why does config2 trigger multicast problems?

You can look at the output of 'ovs-vsctl show' in both cases to make sure
you are creating what you intended. In the second case, it looks like you
intend to create a bridge "vmbr1" and then add an internal port called
"test1" to it, with the IP 10.11.12.1 assigned to 'test1'. I am not sure
why you want to do that. In either case, multicast traffic by itself
should not have any negative effect.

> # ovs-vswitchd --version
> ovs-vswitchd (Open vSwitch) 2.0.90
> Compiled Jan 7 2014 09:51:15
> OpenFlow versions 0x1:0x1

_______________________________________________
discuss mailing list
discuss@openvswitch.org
http://openvswitch.org/mailman/listinfo/discuss