You have been subscribed to a public bug:

== Comment: #0 - HARSHA THYAGARAJA - 2016-11-15 03:42:59 ==

---Problem Description---
Multicast failing for Shiner interface on Ubuntu 16.10 (bnx2x)
Machine Type = S822l1

---Steps to Reproduce---
On Host: enable multicast for the test interface:
1. echo 0 > /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts
2. ip link set enP5p1s0f1 allmulticast on

root@ltciofvtr-s822l1:~# ifconfig enP5p1s0f1
enP5p1s0f1: flags=4675<UP,BROADCAST,RUNNING,ALLMULTI,MULTICAST>  mtu 1500
        inet 170.1.1.20  netmask 255.255.255.0  broadcast 170.1.1.255
        inet6 fe80::9abe:94ff:fe5c:f1c1  prefixlen 64  scopeid 0x20<link>
        ether 98:be:94:5c:f1:c1  txqueuelen 1000  (Ethernet)
        RX packets 488  bytes 50254 (50.2 KB)
        RX errors 0  dropped 442  overruns 0  frame 0
        TX packets 25  bytes 2226 (2.2 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
        device interrupt 226  memory 0x3d4001000000-3d40017fffff
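(Editorial aside, not part of the original report: the allmulticast toggle in step 2 can be verified without parsing ifconfig output, because the kernel exposes each interface's flag word in sysfs. A minimal Python sketch, Linux only, with flag constants copied from <linux/if.h>; note that the decimal 4675 in the flags line above is 0x1243, whose 0x200 bit is ALLMULTI.)

```python
# Check interface flag bits (e.g. IFF_ALLMULTI, as toggled by
# "ip link set <dev> allmulticast on") by reading the flag word the
# kernel exposes in /sys/class/net/<ifname>/flags.
IFF_UP       = 0x1    # interface is up
IFF_LOOPBACK = 0x8    # loopback device
IFF_ALLMULTI = 0x200  # receive all multicast packets

def iface_flags(ifname: str) -> int:
    """Return the interface flag word as an integer (sysfs prints it in hex)."""
    with open(f"/sys/class/net/{ifname}/flags") as f:
        return int(f.read().strip(), 16)

def allmulti_enabled(ifname: str) -> bool:
    return bool(iface_flags(ifname) & IFF_ALLMULTI)
```

After step 2 has run, allmulti_enabled("enP5p1s0f1") would report True on the host above.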
On Peer: catch replies from the test interface in the multicast group:
ping 224.0.0.1 -I enP4p1s0f2 | grep 170.1.1.20

---uname output---
Linux ltciofvtr-s822l1 4.8.0-27-generic #29-Ubuntu SMP Thu Oct 20 21:01:16 UTC 2016 ppc64le ppc64le ppc64le GNU/Linux

== Comment: #9 - Kevin W. Rudd - 2016-11-28 15:40:24 ==

I'm able to replicate this behavior in a KVM environment with Yakkety, so it does not appear to be related to the specific type of interface.

From my testing, it looks like the SO_BINDTODEVICE option is not being honored, so both broadcasts and multicasts are going out the default-route device instead of the device specified by the -I option to ping. I am unable to replicate this on a Xenial host with the same 4.8.0-27 kernel version, so I suspect some other setting is interfering with the ability to bind the output to a specific device.

From my own testing:

Xenial:

# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:15:b7:ba brd ff:ff:ff:ff:ff:ff
    inet 10.168.122.139/24 brd 10.168.122.255 scope global enp0s6
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe15:b7ba/64 scope link
       valid_lft forever preferred_lft forever
3: enp0s1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:e1:35:be brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.139/24 brd 192.168.122.255 scope global enp0s1
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fee1:35be/64 scope link
       valid_lft forever preferred_lft forever

# ping -I enp0s6 -i 30 224.0.0.1
PING 224.0.0.1 (224.0.0.1) from 10.168.122.139 enp0s6: 56(84) bytes of data.
64 bytes from 10.168.122.139: icmp_seq=1 ttl=64 time=0.063 ms
64 bytes from 10.168.122.100: icmp_seq=1 ttl=64 time=0.435 ms (DUP!)
^C
...

Yakkety:

# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:be:97:ab brd ff:ff:ff:ff:ff:ff
    inet 10.168.122.100/24 brd 10.168.122.255 scope global enp0s5
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:febe:97ab/64 scope link
       valid_lft forever preferred_lft forever
3: enp0s1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:30:d0:14 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.100/24 brd 192.168.122.255 scope global enp0s1
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe30:d014/64 scope link
       valid_lft forever preferred_lft forever

# ping -I enp0s5 -i 30 224.0.0.1
PING 224.0.0.1 (224.0.0.1) from 10.168.122.100 enp0s5: 56(84) bytes of data.
64 bytes from 192.168.122.100: icmp_seq=1 ttl=64 time=0.115 ms
64 bytes from 192.168.122.139: icmp_seq=1 ttl=64 time=1.65 ms (DUP!)
^C
...

Note that the Yakkety system received answers from the hosts on the 192.168.122 network (the network associated with the route for 224.0.0.1), not from the network of the interface requested with -I.

== Comment: #10 - Kevin W. Rudd - 2016-11-30 18:45:59 ==

This appears to be due to differences in the version of ping on each release. The ping binary from Xenial works just fine when copied over to my Yakkety test guest.

== Comment: #12 - Kevin W. Rudd - 2016-11-30 19:58:54 ==

It looks like a key change between version 20121221 and version 20150815 is in ping4_send_probe(): the switch from using sendmsg() (which allows passing ancillary data such as ipi_ifindex = if_nametoindex("enp0s2")) to using sendto(), which is not given any interface information.

== Comment: #16 - Kevin W. Rudd - 2016-12-02 14:30:07 ==

I can confirm that this is not PPC64-specific. I can replicate the same broken "-I" support in an x86_64 Yakkety guest.

** Affects: iputils (Ubuntu)
     Importance: Undecided
       Assignee: Taco Screen team (taco-screen-team)
         Status: New

** Tags: architecture-all bugnameltc-148714 severity-medium targetmilestone-inin---

-- 
ping -I support broken in yakkety version 3:20150815-2ubuntu3 (multicast test failure)
https://bugs.launchpad.net/bugs/1646956

You received this bug notification because you are a member of Desktop Packages, which is subscribed to iputils in Ubuntu.

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to     : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp
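(Editorial aside, not part of the original report: the regression in comment #12 comes down to how the egress interface is handed to the kernel. ping itself is C and uses a raw ICMP socket, but the same IP_PKTINFO ancillary-data mechanism can be sketched with an unprivileged UDP socket in Python. This is an illustration of the technique, not ping's actual code.)

```python
import socket
import struct

# IP_PKTINFO is 8 on Linux; fall back to that value in case the
# constant is missing from this Python build.
IP_PKTINFO = getattr(socket, "IP_PKTINFO", 8)

def send_via_ifindex(payload: bytes, dst, ifname: str) -> int:
    """Send a UDP datagram while pinning the egress interface with
    IP_PKTINFO ancillary data via sendmsg(), the way the older
    ping4_send_probe() did; a plain sendto() cannot carry this."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # struct in_pktinfo { int ipi_ifindex;
        #                     struct in_addr ipi_spec_dst, ipi_addr; };
        # Only ipi_ifindex is set here; the addresses are left zeroed.
        pktinfo = struct.pack("=I4s4s",
                              socket.if_nametoindex(ifname),
                              bytes(4), bytes(4))
        return s.sendmsg([payload],
                         [(socket.IPPROTO_IP, IP_PKTINFO, pktinfo)],
                         0, dst)
    finally:
        s.close()
```

A sendto() on the same socket carries no ipi_ifindex, so multicast and broadcast destinations fall back to the default route, which matches the behavior observed above. (setsockopt(SO_BINDTODEVICE) would pin the interface as well, but setting it requires CAP_NET_RAW.)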