Hi all,

I thought the operators community might be interested in (and could offer feedback on) an oddity we experienced with Intel 10 GbE NICs and LACP bonding.

We run Ubuntu 14.04.4 with Intel 10 GbE NICs using the ixgbe kernel module. We use VLANs for the ceph-client, ceph-data, openstack-data, and openstack-client networks, all on a single LACP bond of two 10 GbE ports. As the bonding hash policy we chose layer3+4 so that we can use the full 20 Gb even when only two servers communicate with each other. We typically verify this by running iperf against a single server with -P 4 and checking whether we exceed the 10 Gb limit (just a few times as a sanity check).

Because Ubuntu installs the latest kernel by default, our new host got kernel 4.2.0 instead of the kernel 3.16 the other machines have, and we noticed that iperf only reached 10 Gb.

> # cat /proc/net/bonding/bond0
> Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
>
> Bonding Mode: IEEE 802.3ad Dynamic link aggregation
> Transmit Hash Policy: layer3+4 (1)

This output was identical on both kernels, 3.16 and 4.2.0. After downgrading to kernel 3.16 we got the iperf results we expected.

Does anyone have a similar setup? Has anyone noticed the same thing? To us this looks like a kernel bug (in the ixgbe module?), or are we misunderstanding the layer3+4 hash policy? Any feedback is welcome :)

I have not yet posted this to the kernel ML or Ubuntu's ML, so if no one here has a similar setup I'll move over there. I just thought OpenStack ops might be the place where someone is most likely to have a similar setup :)

Greetings
-Sascha-

_______________________________________________
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
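For intuition on why `iperf -P 4` can exceed a single link's 10 Gb: with the layer3+4 policy, the bonding driver hashes the L3/L4 flow tuple (IPs and ports) to pick an egress slave, so parallel streams with distinct source ports can land on both links. Below is a simplified Python sketch of the classic layer3+4 hash from the bonding documentation; it is an illustration only, not the actual kernel code (newer kernels compute this via the flow dissector and jhash, which is exactly the kind of implementation change that could alter behaviour between 3.16 and 4.2). The IPs and ports are hypothetical.

```python
def layer34_hash(src_ip: int, dst_ip: int, src_port: int, dst_port: int,
                 slave_count: int) -> int:
    """Pick an egress slave index from the L3/L4 flow tuple.

    Simplified version of the classic layer3+4 xmit_hash_policy formula:
    ((src_port XOR dst_port) XOR (lower 16 bits of src_ip XOR dst_ip))
    modulo the number of slaves.
    """
    h = (src_port ^ dst_port) ^ ((src_ip ^ dst_ip) & 0xFFFF)
    return h % slave_count

# Hypothetical endpoints: 10.0.0.1 talking to 10.0.0.2 over a 2-slave bond.
src_ip, dst_ip = 0x0A000001, 0x0A000002

# Four parallel iperf streams use four distinct ephemeral source ports,
# so they can hash onto both slaves of the bond:
slaves = {layer34_hash(src_ip, dst_ip, p, 5001, 2)
          for p in (40001, 40002, 40003, 40004)}
print(slaves)  # → {0, 1}: the streams are spread across both links
```

Note that a single TCP stream always hashes to one slave, which is why the policy caps any one flow at 10 Gb regardless of kernel version; only the multi-stream aggregate should reach 20 Gb.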
We have Ubuntu 14.04.4 as OS and Intel 10 GbE NICs with the ixgbe Kernel module. We use VLANS for ceph-client, ceph-data, openstack-data, openstack-client networks all on a single LACP bonding of two 10 GbE ports. As bonding hash policy we chose layer3+4 so we can use the full 20 Gb even if only two servers communicate with each other. Typically we check that by using iperf to a single server with -P 4 and see if we exceed the 10 Gb limit (just a few times to check). Due to Ubuntus default of installing the latest Kernel our new host had Kernel 4.2.0 instead of the Kernel 3.16 the other machines had and we noticed that iperf only used 10 Gb. > # cat /proc/net/bonding/bond0 > Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011) > > Bonding Mode: IEEE 802.3ad Dynamic link aggregation > Transmit Hash Policy: layer3+4 (1) This was shown on both - Kernel 3.16 and 4.2.0 After downgrading to Kernel 3.16 we got the iperf results we expected. Does anyone have a similar setup? Anyone noticed the same things? To us this looks like a bug in the Kernel (ixgbe module?), or are we misunderstanding the hash policy layer3+4? Any feedback is welcome :) I have not yet posted this to the Kernel ML or Ubuntus ML yet, so if no one here is having a similar setup I'll move over there. I just thought OpenStack ops might be the place were it is most likely that someone has a similar setup :) Greetings -Sascha- _______________________________________________ OpenStack-operators mailing list OpenStack-operators@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators