Hi,

Unfortunately, your paste of 'bond0' does not reveal how it was created. It
could have been created in any bonding mode; how sure are you that it was
LACP? I don't use nmcli myself, and since you omitted the config file and
creation commands, I think it's best to go back to basics:
# remove the existing bond and take the member interfaces down
ip link del bond0
ip link set enp30s0 down
ip link set enp7s0 down
# recreate the bond explicitly in 802.3ad (LACP) mode and enslave the members
ip link add bond0 type bond mode 802.3ad
ip link set enp30s0 master bond0
ip link set enp7s0 master bond0
# bring everything back up
ip link set enp30s0 up
ip link set enp7s0 up
ip link set bond0 up


This will interoperate with your VPP configuration in 'lacp' mode. In
particular, a bond device created on Linux without an explicit mode defaults
to round-robin, so it's important to add 'mode 802.3ad' when creating it. I
don't know (and cannot tell from your mail) whether your nmcli invocation
does that or not.
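
If you prefer to keep using nmcli, I believe the mode can be set explicitly
via the bond.options property when creating the connection. This is untested
on my side, and the connection names below are just examples:

nmcli connection add type bond con-name bond0 ifname bond0 \
    bond.options "mode=802.3ad,miimon=100"
nmcli connection add type ethernet con-name bond0-enp7s0 ifname enp7s0 master bond0
nmcli connection add type ethernet con-name bond0-enp30s0 ifname enp30s0 master bond0
nmcli connection up bond0
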
Incidentally, you can inspect the state of the bond driver in Linux via
/proc/net/bonding; for example, here is a round-robin one:
$ sudo ip link del bond0

$ sudo ip link add bond0 type bond

$ sudo cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v5.10.0-15-amd64

Bonding Mode: load balancing (round-robin)
MII Status: down
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0


Compare that to this LACP-signaled one:

$ sudo ip link del bond0
$ sudo ip link add bond0 type bond mode 802.3ad

$ cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v5.10.0-15-amd64

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: down
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0

802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 0
System MAC address: 00:00:00:00:00:00
bond bond0 has no active aggregator


Maybe that helps you. It's unlikely that the LACP implementation in VPP
itself is broken!
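
For reference, the VPP side of this would look roughly as below (via vppctl;
the interface names are placeholders for your actual DPDK devices). Once the
Linux end really speaks 802.3ad, 'show lacp' should list partner information:

create bond mode lacp load-balance l34
bond add BondEthernet0 GigabitEthernet0/0/0
bond add BondEthernet0 GigabitEthernet0/0/1
set interface state GigabitEthernet0/0/0 up
set interface state GigabitEthernet0/0/1 up
set interface state BondEthernet0 up
show lacp
show bond details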

groet,
Pim

On Tue, Jun 14, 2022 at 12:06 PM Chinmaya Aggarwal <chinmaya.agar...@hsc.com>
wrote:

> Hi,
>
> The setup we are trying has VM1 (running VPP) and VM2 directly connected
> to each other via two physical interfaces of the same NIC (which will be
> bonded together). On VM2, we have created a bond interface for the two
> physical interfaces using nmcli.
>
> enp7s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel
> master bond0 state UP group default qlen 1000
>     link/ether 52:54:00:e7:4b:71 brd ff:ff:ff:ff:ff:ff permaddr
> 52:54:00:16:f6:0c
>
> enp30s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel
> master bond0 state UP group default qlen 1000
>     link/ether 52:54:00:e7:4b:71 brd ff:ff:ff:ff:ff:ff
>
> bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue
> state UP group default qlen 1000
>     link/ether 52:54:00:e7:4b:71 brd ff:ff:ff:ff:ff:ff
>     inet 44.44.44.102/24 brd 44.44.44.255 scope global noprefixroute bond0
>        valid_lft forever preferred_lft forever
>     inet6 2001:44:44:44::102/64 scope global
>        valid_lft forever preferred_lft forever
>     inet6 fe80::5054:ff:fee7:4b71/64 scope link
>        valid_lft forever preferred_lft forever
>
> And the bond interface configuration on VM1 (running VPP) is the same as
> mentioned. lacp mode does not seem to be working, while xor mode does.
>
> Also, when we stopped VPP and configured the bond interface in Linux, we
> could see the bonding work, and end-to-end traffic was working fine.
>
> Any suggestions how we can resolve this?
>
> Thanks and regards,
> Chinmaya Agarwal.
>
> 
>
>

-- 
Pim van Pelt <p...@ipng.nl>
PBVP1-RIPE - http://www.ipng.nl/