Hi Steven,
Thanks a lot for your help. It works!
Best wishes,
Aleksander
On Fri, Aug 17, 2018 at 07:08 PM, steven luong wrote:
Aleksander,
I found the CLI bug. You can easily work around it: set the physical interface
state up first in your CLI sequence and it will work.
create bond mode lacp load-balance l23
bond add BondEthernet0 GigabitEtherneta/0/0
bond add BondEthernet0 GigabitEtherneta/0/1
set interface
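For reference: the last command above is cut off in the archive, so the lines
below are only a sketch of what the complete sequence with the workaround would
look like, assuming the same interface names and with the physical ports
brought up first as Steven describes:
set interface state GigabitEtherneta/0/0 up
set interface state GigabitEtherneta/0/1 up
create bond mode lacp load-balance l23
bond add BondEthernet0 GigabitEtherneta/0/0
bond add BondEthernet0 GigabitEtherneta/0/1
set interface state BondEthernet0 up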
Hi Steven,
GDB shows that the vlib_process_get_events function always returns ~0, except
once at start, and lacp_schedule_periodic_timer never runs after that.
It looks the same on both sides.
I have added some debug info. Please look at the log:
### VPP1:9.58 ###
Aleksander,
This problem should be easy to figure out if you can gdb the code. When the
very first slave interface is added to the bonding group via the command “bond
add BondEthernet0 GigabitEtherneta/0/0/1”,
- The PTX machine schedules the interface with the periodic timer via
lacp_schedule_periodic_timer
This configuration is not supported in VPP.
Steven
From: on behalf of Aleksander Djuric
Date: Wednesday, August 15, 2018 at 12:33 AM
To: "vpp-dev@lists.fd.io"
Subject: Re: [vpp-dev] LACP link bonding issue
In addition, I have tried to configure LACP in the dpdk section of vpp
startup.conf, and I've got the same output:
startup.conf:
unix {
  nodaemon
  log /var/log/vpp/vpp.log
  full-coredump
  cli-listen /run/vpp/cli.sock
  gid vpp
}
api-trace {
  on
}
api-segment {
  gid vpp
}
socksvr {
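Note: the paste is cut off here, before any dpdk section. For illustration
only, a dpdk-section bonding attempt of the kind described above (which Steven
says is not supported in VPP) would look roughly like the following; the vdev
name, PCI addresses, and parameters are assumptions, not the actual config:
dpdk {
  # assumed slave NICs, corresponding to GigabitEtherneta/0/0 and a/0/1
  dev 0000:0a:00.0
  dev 0000:0a:00.1
  # DPDK bonding vdev: mode=4 is 802.3ad (LACP), xmit_policy=l23
  vdev eth_bond0,mode=4,slave=0000:0a:00.0,slave=0000:0a:00.1,xmit_policy=l23
}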
Hi Steven,
Thanks much for the answer. Yes, these 2 boxes’ interfaces are connected back
to back.
Both sides show the same diagnostic results; here is the output:
vpp# sh int
              Name               Idx   State  MTU (L3/IP4/IP6/MPLS)  Counter  Count
BondEthernet0
sts.fd.io"
Cc: "vpp-dev@lists.fd.io"
Subject: Re: [vpp-dev] LACP link bonding issue
Aleksander
It looks like the LACP packets are not going out to the interfaces as expected
or being dropped. Additional output and trace are needed to determine why.
Please collect the following from
Aleksander,
It looks like the LACP packets are not going out to the interfaces as expected,
or they are being dropped. Additional output and trace are needed to determine
why. Please collect the following from both sides.
clear hardware
clear error
wait a few seconds
show hardware
show error
show lacp d