This configuration is not supported in VPP.
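
The eth_bond vdev creates the bond inside DPDK itself, and VPP does not support that. VPP has its own native bonding (the bonding driver plus the lacp plugin, available since 18.04). A minimal sketch of the native CLI, using the interface names from the output below:

vpp# create bond mode lacp load-balance l23
vpp# bond add BondEthernet0 GigabitEtherneta/0/0
vpp# bond add BondEthernet0 GigabitEtherneta/0/1
vpp# set interface state GigabitEtherneta/0/0 up
vpp# set interface state GigabitEtherneta/0/1 up
vpp# set interface state BondEthernet0 up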

Steven

From: <vpp-dev@lists.fd.io> on behalf of Aleksander Djuric <aleksander.dju...@gmail.com>
Date: Wednesday, August 15, 2018 at 12:33 AM
To: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] LACP link bonding issue

In addition, I have tried configuring LACP in the dpdk section of the VPP startup.conf, and I get the same output:

startup.conf:
unix {
   nodaemon
   log /var/log/vpp/vpp.log
   full-coredump
   cli-listen /run/vpp/cli.sock
   gid vpp
}

api-trace {
   on
}

api-segment {
   gid vpp
}

socksvr {
   default
}

dpdk {
   socket-mem 2048
   num-mbufs 131072

   # NICs handed to DPDK
   dev 0000:0a:00.0
   dev 0000:0a:00.1
   dev 0000:0a:00.2
   dev 0000:0a:00.3

   # DPDK-level bond: mode=4 is 802.3ad (LACP), xmit_policy=l23 hashes on L2+L3 headers
   vdev eth_bond0,mode=4,slave=0000:0a:00.0,slave=0000:0a:00.1,xmit_policy=l23
}

plugins {
   path /usr/lib/vpp_plugins
}
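
(Note: with VPP-native bonding, nothing bond-related would go in startup.conf; the dpdk section would carry only the device whitelist. A sketch, assuming the same PCI addresses:)

dpdk {
   socket-mem 2048
   num-mbufs 131072

   dev 0000:0a:00.0
   dev 0000:0a:00.1
   dev 0000:0a:00.2
   dev 0000:0a:00.3
}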

vpp# sh int
             Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
BondEthernet0                     5     down         9000/0/0/0
GigabitEtherneta/0/0              1  bond-slave      9000/0/0/0
GigabitEtherneta/0/1              2  bond-slave      9000/0/0/0
GigabitEtherneta/0/2              3     down         9000/0/0/0
GigabitEtherneta/0/3              4     down         9000/0/0/0
local0                            0     down          0/0/0/0
vpp# set interface ip address BondEthernet0 10.0.0.2/24
vpp# set interface state BondEthernet0 up
vpp# clear hardware
vpp# clear error
vpp# show hardware
             Name                Idx   Link  Hardware
BondEthernet0                      5     up   Slave-Idx: 1 2
 Ethernet address 00:0b:ab:f4:bd:84
 Ethernet Bonding
   carrier up full duplex speed 2000 auto mtu 9202
   flags: admin-up pmd maybe-multiseg
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

GigabitEtherneta/0/0               1    slave GigabitEtherneta/0/0
 Ethernet address 00:0b:ab:f4:bd:84
 Intel e1000
   carrier up full duplex speed 1000 auto mtu 9202  promisc
   flags: pmd maybe-multiseg bond-slave bond-slave-up tx-offload intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

GigabitEtherneta/0/1               2    slave GigabitEtherneta/0/1
 Ethernet address 00:0b:ab:f4:bd:84
 Intel e1000
   carrier up full duplex speed 1000 auto mtu 9202  promisc
   flags: pmd maybe-multiseg bond-slave bond-slave-up tx-offload intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

GigabitEtherneta/0/2               3    down  GigabitEtherneta/0/2
 Ethernet address 00:0b:ab:f4:bd:86
 Intel e1000
   carrier down
   flags: pmd maybe-multiseg tx-offload intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

GigabitEtherneta/0/3               4    down  GigabitEtherneta/0/3
 Ethernet address 00:0b:ab:f4:bd:87
 Intel e1000
   carrier down
   flags: pmd maybe-multiseg tx-offload intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

local0                             0    down  local0
 local
vpp# show error
  Count                    Node                  Reason
vpp# trace add dpdk-input 50
vpp# show trace
------------------- Start of thread 0 vpp_main -------------------
No packets in trace buffer
vpp# ping 10.0.0.1

Statistics: 5 sent, 0 received, 100% packet loss
vpp# show trace
------------------- Start of thread 0 vpp_main -------------------
No packets in trace buffer
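
(For reference, once the bond is created through VPP's native CLI, the LACP negotiation state can be inspected directly, assuming the lacp plugin is loaded:)

vpp# show lacp details
vpp# show bond details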

Thanks in advance for any help.
