@ddstreet here's how I reproduced: I created a VirtualBox VM with Xenial and 3 interfaces (enp0s3 and enp0s8 on an internal network, enp0s9 on a bridge to my LAN). Then I applied each of the 4 configurations below and ran "ifup -a". Try it and you'll see the same behavior.
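For each run I did roughly the following (just a sketch of my test loop; the interfaces-N file names are my own naming, nothing standard):

# repeat for each of the four configs, saved as interfaces-1 .. interfaces-4
ifdown -a || true        # best effort; reboot instead if a previous ifup is stuck
cp interfaces-1 /etc/network/interfaces
ifup -a                  # hangs on the configs that trigger the bug
ip link                  # check which interfaces actually came up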
You are correct: the proper way to bring up the bond is to bring up its slaves. Running ifup on the bond just hangs, as you have already established. That is an entirely different bug, I guess, but not my main concern right now. I've had to use "bond-master" on the enp0sX interfaces and set "bond-slaves none" on the bond to get it to work. I guess we need support for setting bond-master and bond-slaves at the same time, to be able to bring up the bond either by bringing up a slave or by bringing up the bond itself. Just to summarize: it is a duplicate of the other bug, and it is fixed by your patch!

==========================================
auto lo
iface lo inet loopback

auto enp0s9
iface enp0s9 inet static
    mtu 1500
    address 192.168.1.9
    gateway 192.168.1.1
    netmask 255.255.255.0
    dns-nameservers 1.1.1.1

auto enp0s3
iface enp0s3 inet manual
    mtu 1500
    bond-master bo-adm
    bond-primary enp0s3

auto enp0s8
iface enp0s8 inet manual
    mtu 1500
    bond-master bo-adm

auto bo-adm
iface bo-adm inet static
    mtu 1500
    address 10.10.10.3
    netmask 255.255.0.0
    bond-miimon 100
    bond-mode active-backup
    bond-slaves none
    bond-downdelay 200
    bond-updelay 200

auto bo-adm.2
iface bo-adm.2 inet static
    mtu 1500
    address 10.11.10.3
    netmask 255.255.0.0
    vlan-raw-device bo-adm

==========================================
auto lo
iface lo inet loopback

auto enp0s9
iface enp0s9 inet static
    mtu 1500
    address 192.168.1.9
    gateway 192.168.1.1
    netmask 255.255.255.0
    dns-nameservers 1.1.1.1

auto enp0s3
iface enp0s3 inet manual
    mtu 1500
    bond-master bo-adm
    bond-primary enp0s3

auto enp0s8
iface enp0s8 inet manual
    mtu 1500
    bond-master bo-adm

auto bo-adm
iface bo-adm inet manual
    mtu 1500
    bond-miimon 100
    bond-mode active-backup
    bond-slaves none
    bond-downdelay 200
    bond-updelay 200

auto bo-adm.2
iface bo-adm.2 inet static
    mtu 1500
    address 10.11.10.3
    netmask 255.255.0.0
    vlan-raw-device bo-adm

==========================================
auto lo
iface lo inet loopback

auto enp0s9
iface enp0s9 inet static
    mtu 1500
    address 192.168.1.9
    gateway 192.168.1.1
    netmask 255.255.255.0
    dns-nameservers 1.1.1.1

auto enp0s3
iface enp0s3 inet manual
    mtu 1500

auto enp0s3.2
iface enp0s3.2 inet static
    mtu 1500
    address 10.11.10.3
    netmask 255.255.0.0
    vlan-raw-device enp0s3

==========================================
auto lo
iface lo inet loopback

auto enp0s9
iface enp0s9 inet static
    mtu 1500
    address 192.168.1.9
    gateway 192.168.1.1
    netmask 255.255.255.0
    dns-nameservers 1.1.1.1

auto enp0s3
iface enp0s3 inet static
    mtu 1500
    address 10.10.10.3
    netmask 255.255.0.0
    bond-miimon 100
    bond-mode active-backup
    bond-slaves none
    bond-downdelay 200
    bond-updelay 200

auto enp0s3.2
iface enp0s3.2 inet static
    mtu 1500
    address 10.11.10.3
    netmask 255.255.0.0
    vlan-raw-device enp0s3
==========================================
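For what it's worth, by "setting bond-master and bond-slaves at the same time" I mean accepting something like the stanza below (an untested sketch; as far as I know the current ifenslave hooks don't handle listing the slaves on the bond while the slaves also carry bond-master):

auto bo-adm
iface bo-adm inet static
    address 10.10.10.3
    netmask 255.255.0.0
    bond-mode active-backup
    bond-miimon 100
    # hypothetical: name the slaves here instead of "none", so that
    # "ifup bo-adm" can enslave them itself while "ifup enp0s3" still
    # finds its master via bond-master
    bond-slaves enp0s3 enp0s8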
https://bugs.launchpad.net/bugs/1759573

Title:
  vlan on top of untagged network won't start

Status in ifupdown package in Ubuntu:
  New
Status in vlan package in Ubuntu:
  New

Bug description:
  Due to an upgrade (probably of the ifupdown or vlan package), this
  specific network configuration no longer comes up automatically:
  1) Two or more network interfaces bonded
  2) An untagged network configured on that bond
  3) A vlan on top of that untagged network

  What does come up automatically:
  1) A single (e.g. unbonded) network interface with an untagged
     network configured and a vlan on top of that network
  2) Two or more network interfaces bonded with a vlan on top of that
     untagged bond

  An exact example of the configuration that doesn't work is provided
  below. It fails to come up correctly, both during boot and manually.
  The problem seems to be a blocking dependency loop between the bond
  and the vlan. As recommended in
  https://bugs.launchpad.net/ubuntu/+source/ifupdown/+bug/1636708/comments/13
  we added dependency ordering using ifup@.service systemd units for
  all 4 interfaces, but this did not affect the behaviour in any way.

  Perhaps related to LP bug 1573272 or bug 1636708?

==========================================================
Interface configuration
==========================================================
auto eno1
iface eno1 inet manual
    mtu 1500
    bond-master bond1
    bond-primary eno1

auto eno2
iface eno2 inet manual
    mtu 1500
    bond-master bond1

auto bond1
iface bond1 inet static
    mtu 1500
    address 10.10.10.3
    bond-miimon 100
    bond-mode active-backup
    bond-slaves none
    bond-downdelay 0
    bond-updelay 0
    dns-nameservers 10.10.10.1
    gateway 10.10.10.1
    netmask 255.255.0.0

auto bond1.2
iface bond1.2 inet static
    mtu 1500
    address 10.11.10.3
    netmask 255.255.0.0
    vlan-raw-device bond1

==========================================================
When bringing up the bond
==========================================================
# ifup bond1 &
Waiting for a slave to join bond1 (will timeout after 60s)

# ps afx
(...)
ifup bond1
 \_ /bin/sh -c /bin/run-parts --exit-on-error /etc/network/if-pre-up.d
    \_ /bin/run-parts --exit-on-error /etc/network/if-pre-up.d
       \_ /bin/sh /etc/network/if-pre-up.d/ifenslave
(...)
/lib/systemd/systemd-udevd
 \_ /lib/systemd/systemd-udevd
    \_ /bin/sh /lib/udev/vlan-network-interface
       \_ /bin/sh /etc/network/if-pre-up.d/vlan
          \_ ifup bond1
(...)

==> After waiting 60 seconds:
# ip link | grep -E 'eno[1|2]|bond1*'
eno1: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 1000
eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 1000
bond1: <NO-CARRIER,BROADCAST,MULTICAST,MASTER,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
bond1.2@bond1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state LOWERLAYERDOWN mode DEFAULT group default qlen 1000

==========================================================
When bringing up a slave
==========================================================
# ifup eno1
Waiting for bond master bond1 to be ready

# ps afx
(...)
/lib/systemd/systemd-udevd
 \_ /lib/systemd/systemd-udevd
    \_ /bin/sh /lib/udev/vlan-network-interface
       \_ /bin/sh /etc/network/if-pre-up.d/vlan
          \_ ifup bond1
             \_ /bin/sh -c /bin/run-parts --exit-on-error /etc/network/if-pre-up.d
                \_ /bin/run-parts --exit-on-error /etc/network/if-pre-up.d
                   \_ /bin/sh /etc/network/if-pre-up.d/ifenslave
                      \_ /bin/sh /lib/udev/vlan-network-interface
                         \_ /bin/sh /etc/network/if-pre-up.d/vlan
                            \_ ifup bond1
(...)

# ip link | grep -E 'eno[1|2]|bond1*'
eno1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP mode DEFAULT group default qlen 1000
eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 1000
bond1: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
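(Side note on the "dependency ordering using ifup@.service systemd units" attempt mentioned in the description: that kind of ordering is typically expressed as a systemd drop-in along the lines below. The file path and unit names are illustrative only, and as the reporter notes it did not change the behaviour here.)

# /etc/systemd/system/ifup@bond1.2.service.d/order.conf  (illustrative)
[Unit]
# start the vlan's ifup unit only after the bond's
After=ifup@bond1.service
Wants=ifup@bond1.service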
==========================================================
Only workaround that works
==========================================================
# ifup eno1
Waiting for bond master bond1 to be ready
# kill $(ps -ef | grep 'ifup bond1' | sed -n 2p | awk '{ print $2}')
# ifup eno2

# ip link | grep -E 'eno[1|2]|bond1*'
eno1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP mode DEFAULT group default qlen 1000
eno2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP mode DEFAULT group default qlen 1000
bond1: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
bond1.2@bond1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
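If anyone needs to automate that workaround until a fix lands, something like this should do (a rough, untested sketch using the interface names from the report; pkill matching on the command line stands in for the ps/grep/awk pipeline, and the sleep timing is fragile):

#!/bin/sh
# Bring up the first slave; its pre-up hook spawns a nested
# "ifup bond1" that hangs waiting for a slave to join the bond.
ifup eno1 &
sleep 10
# Kill the stuck nested ifup of the bond; this unblocks "ifup eno1".
pkill -f 'ifup bond1'
wait
# The second slave now enslaves normally, and the bond + vlan come up.
ifup eno2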