Below are the steps I have taken to set up our current testing environment. As of now the environment is up and functioning with multicast working. There are, however, some oddities with multicast: testing with omping produces mixed results, while our own application works without issues. So far I have tried four different ways of configuring the environment, and the current config has been working the best.
I have upgraded the OVS to 2.5 and saw that it fixed the majority of the omping issues, but the MCAST state table aging issue is still present. This appears to be a possible bug in the IGMP snooping implementation within the OVS.

*Server Specs:*
HP SL210 Gen8
- 128 GB RAM
- 2x Intel E5-2690
- 1x Solarflare 10GB fiber
- 1x Solarflare 1GB copper
- 2x 500GB RAID 1

*Installation process:*

OS/KVM Setup
CentOS 7 minimal x64
- yum update -y
- yum install qemu-kvm qemu-img virt-manager libvirt libvirt-python libvirt-client virt-install virt-viewer qemu-system-x86-2.0.0-1.el7.6.x86_64 epel-release net-tools xauth pciutils -y
- sed -i s/SELINUX=enforcing/SELINUX=disabled/g /etc/selinux/config
- systemctl stop NetworkManager
- yum -y erase NetworkManager
- systemctl start libvirtd
- systemctl enable libvirtd
- chkconfig network on
- echo "net.ipv4.ip_forward = 1"|sudo tee -a /etc/sysctl.d/99-kvm.conf
- echo "net.bridge.bridge-nf-call-ip6tables = 0"|sudo tee -a /etc/sysctl.d/99-kvm.conf
- echo "net.bridge.bridge-nf-call-iptables = 0"|sudo tee -a /etc/sysctl.d/99-kvm.conf
- echo "net.bridge.bridge-nf-call-arptables = 0"|sudo tee -a /etc/sysctl.d/99-kvm.conf
- echo "net.ipv4.conf.virbr0-nic.rp_filter = 0"|sudo tee -a /etc/sysctl.d/99-kvm.conf
- echo "net.ipv4.conf.default.force_igmp_version = 2"|sudo tee -a /etc/sysctl.d/99-kvm.conf
- echo "net.ipv4.conf.all.force_igmp_version = 2"|sudo tee -a /etc/sysctl.d/99-kvm.conf
- sysctl -p /etc/sysctl.d/99-kvm.conf
- modprobe 8021q
- cat << 'EOF' >/etc/sysconfig/modules/8021q.modules
#!/bin/sh
exec /sbin/modprobe 8021q >/dev/null 2>&1
EOF
- chmod +x /etc/sysconfig/modules/8021q.modules

*OVS INSTALL*

AS ROOT
- yum -y install wget openssl-devel gcc make python-devel kernel-devel graphviz kernel-debug-devel autoconf automake rpm-build redhat-rpm-config libtool
- adduser ovs
- su - ovs

AS OVS USER
- mkdir -p ~/rpmbuild/SOURCES
- wget http://openvswitch.org/releases/openvswitch-2.4.0.tar.gz
- cp openvswitch-2.4.0.tar.gz ~/rpmbuild/SOURCES/
- tar xfz openvswitch-2.4.0.tar.gz
- sed 's/openvswitch-kmod, //g' openvswitch-2.4.0/rhel/openvswitch.spec > openvswitch-2.4.0/rhel/openvswitch_no_kmod.spec
- rpmbuild -bb --nocheck openvswitch-2.4.0/rhel/openvswitch_no_kmod.spec
- exit

AS ROOT
- mkdir /etc/openvswitch
- yum localinstall /home/ovs/rpmbuild/RPMS/x86_64/openvswitch-2.4.0-1.x86_64.rpm -y
- yum install policycoreutils-python -y
- semanage fcontext -a -t openvswitch_rw_t "/etc/openvswitch(/.*)?"
- systemctl start openvswitch.service
- chkconfig openvswitch on
- verify the install: ovs-vsctl -V
- init 6

*OVS SETUP*
- ovs-vsctl add-br br0
- ovs-vsctl add-port br0 ens1f1d1 vlan_mode=trunk
- ovs-vsctl add-br vlan432 br0 432
- ovs-vsctl add-br vlan436 br0 436
- ovs-vsctl add-br vlan448 br0 448
- ovs-vsctl add-br vlan452 br0 452
- ovs-vsctl add-br vlan464 br0 464
- ovs-vsctl set Bridge br0 mcast_snooping_enable=true

Since this is only set up for basic functionality testing, this server is not patched the way it would be in our PROD environment. A secondary bridge was also created on the OVS to facilitate management traffic (the commands follow the snooping notes below).
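While chasing the aging issue, the snooping table can also be inspected and tuned directly. These are standard OVS snooping knobs (documented in ovs-vswitchd.conf.db(5)) rather than part of the setup above, and the aging value below is only an example:
- ovs-appctl mdb/show br0    # dump the IGMP snooping (MCAST) table, including each entry's age
- ovs-vsctl set Bridge br0 other_config:mcast-snooping-aging-time=600    # seconds; the default is 300
- ovs-vsctl set Bridge br0 other_config:mcast-snooping-disable-flood-unregistered=true    # stop flooding unregistered multicast to all ports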
- ovs-vsctl add-br mgmt
- ovs-vsctl add-port mgmt ovs
- cat << 'EOF' >/etc/sysconfig/network-scripts/ifcfg-mgmt
DEVICE=mgmt
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSIntPort
OVS_BRIDGE=mgmt
BOOTPROTO=static
IPADDR=10.206.0.31
NETMASK=255.255.252.0
GATEWAY=10.206.0.1
HOTPLUG=no
EOF
- echo GATEWAY=10.206.0.1 >> /etc/sysconfig/network
- ovs-vsctl add-br vlan400 mgmt 400
- verify all interfaces are up using ifconfig

*VM CONFIG*
This is just a simple config of one of our VMs; no VM-specific tunings were made, since this is simply to test basic multicast functionality. There is, however, one odd config that would not be standard in a real environment: a macvtap interface is tapped to the physical mgmt NIC on the host, which makes it possible to PXE-install the VM and reach it from outside the network. VLANs 432, 436, 448, 452 and 464 are the VLANs that multicast traffic will pass through.

See attached: vm-config.xml

*OVS-SWITCH*
See attached: ovs-switch.txt

*OMPING TESTS*
omping -m 239.255.1.90 10.206.50.51 10.206.48.33

*Two VMs on Same KVM Host (OVS 2.4)*
On VLAN 448, between two VMs on the KVM host connected to the same Open vSwitch, multicast flows between the VMs cleanly: no duplicate packets and no flooding into other VMs. The only issue is that the OVS does not seem to be interpreting the IGMP reports being sent out, causing the OVS to age out the entry in its MCAST table, but multicast traffic still continues to flow cleanly.

*One VM on KVM Host to Another Server on the Network (OVS 2.4)*
On VLAN 448, between one VM on the KVM host and another server on the same network, multicast traffic flows but has issues. The VM on the KVM host receives the traffic cleanly, but the external server receives duplicate packets. Again the OVS does not seem to be interpreting the IGMP reports being sent out, causing the OVS to age out the entry in its MCAST table, but this time once it ages out the duplicates stop. Running a tcpdump on the receiving host confirms that the packets are truly duplicates.
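For completeness, the duplicate check on the receiving host was nothing more exotic than tcpdump on the multicast group; something along these lines works (the interface name here is an assumption for that host):
- tcpdump -nn -i eth0 udp and host 239.255.1.90
Each omping probe shows up multiple times with identical source, destination and payload, so these are real on-the-wire duplicates rather than omping miscounting.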
omping running on VM on KVM host:

10.206.50.33 : joined (S,G) = (*, 239.255.1.90), pinging
10.206.50.33 : unicast, seq=1, size=69 bytes, dist=0, time=0.466ms
10.206.50.33 : unicast, seq=2, size=69 bytes, dist=0, time=0.582ms
10.206.50.33 : multicast, seq=2, size=69 bytes, dist=0, time=0.587ms
10.206.50.33 : unicast, seq=3, size=69 bytes, dist=0, time=0.643ms
10.206.50.33 : multicast, seq=3, size=69 bytes, dist=0, time=0.638ms
10.206.50.33 : unicast, seq=4, size=69 bytes, dist=0, time=0.535ms
10.206.50.33 : multicast, seq=4, size=69 bytes, dist=0, time=0.539ms
10.206.50.33 : unicast, seq=5, size=69 bytes, dist=0, time=0.461ms
10.206.50.33 : multicast, seq=5, size=69 bytes, dist=0, time=0.536ms

omping running on server on the same network:

10.206.50.51 : joined (S,G) = (*, 239.255.1.90), pinging
10.206.50.51 : unicast, seq=1, size=69 bytes, dist=0, time=0.255ms
10.206.50.51 : multicast, seq=1, size=69 bytes, dist=0, time=0.705ms
10.206.50.51 : multicast, seq=1 (dup), size=69 bytes, dist=0, time=0.738ms
10.206.50.51 : multicast, seq=1 (dup), size=69 bytes, dist=0, time=0.743ms
10.206.50.51 : multicast, seq=1 (dup), size=69 bytes, dist=0, time=0.746ms
10.206.50.51 : multicast, seq=1 (dup), size=69 bytes, dist=0, time=0.756ms
10.206.50.51 : multicast, seq=1 (dup), size=69 bytes, dist=0, time=0.758ms
10.206.50.51 : multicast, seq=1 (dup), size=69 bytes, dist=0, time=0.761ms
10.206.50.51 : multicast, seq=1 (dup), size=69 bytes, dist=0, time=0.764ms
10.206.50.51 : multicast, seq=1 (dup), size=69 bytes, dist=0, time=0.767ms
10.206.50.51 : multicast, seq=1 (dup), size=69 bytes, dist=0, time=0.770ms
10.206.50.51 : multicast, seq=1 (dup), size=69 bytes, dist=0, time=0.772ms
10.206.50.51 : multicast, seq=1 (dup), size=69 bytes, dist=0, time=0.774ms
10.206.50.51 : multicast, seq=1 (dup), size=69 bytes, dist=0, time=0.777ms
10.206.50.51 : multicast, seq=1 (dup), size=69 bytes, dist=0, time=0.779ms
10.206.50.51 : multicast, seq=1 (dup), size=69 bytes, dist=0, time=0.782ms
10.206.50.51 : multicast, seq=1 (dup), size=69 bytes, dist=0, time=0.789ms
10.206.50.51 : multicast, seq=1 (dup), size=69 bytes, dist=0, time=0.792ms
10.206.50.51 : multicast, seq=1 (dup), size=69 bytes, dist=0, time=0.795ms
10.206.50.51 : multicast, seq=1 (dup), size=69 bytes, dist=0, time=0.798ms
10.206.50.51 : multicast, seq=1 (dup), size=69 bytes, dist=0, time=0.803ms

*Running Our Own Application (OVS 2.4 and OVS 2.5)*
Running our own application, which resides in a VM on the KVM host, behaves properly. This VM communicates with other physical and virtual servers on the network without issues: no duplicate packets are received by other hosts, there is no flooding into other VMs, and the OVS is interpreting the IGMP reports being sent and updating the MCAST table properly.
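For anyone trying to reproduce the aging behavior, watching the IGMP reports and the snooping table side by side on the KVM host makes it visible; the interface and bridge names below match this setup (the trunk port could be swapped for a VM's vnet tap), and the watch interval is arbitrary:
- tcpdump -nn -i ens1f1d1 igmp    # the periodic IGMPv2 membership reports from the VMs
- watch -n 5 'ovs-appctl mdb/show br0'    # each report should reset the group entry's Age
With omping, the reports keep arriving but the entry's Age keeps climbing until it expires, which is the aging symptom described above.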