Hi,
I am observing a crash while trying to pop the NSH header with encap-none.

Client-1 <--- VPP_Switch <---(via VxLAN_gpe_tunnel)<--- (VPP as NSH Proxy) <--- Client-2

The NSH proxy sends packets with an NSH header (SPI 185, SI 254) which VPP_Switch needs
to decapsulate and forward to Client-1. During this flow, VPP_Switch crashes while
trying to access the dmac info, with the Rx interface being the VxLAN-GPE tunnel interface.
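
For reference, below is a tiny standalone sketch of the NSH service path header fields involved, written from RFC 8300 rather than the VPP struct definitions, just to show what SPI 185 / SI 254 look like on the wire before the pop:

/* Standalone sketch of the NSH service path header per RFC 8300 (not VPP code).
 * The 32-bit service path header carries the SPI in the upper 24 bits and the
 * SI in the lower 8 bits; this is what the nsh map (nsp 185, nsi 254) matches. */
#include <stdio.h>
#include <stdint.h>

int
main (void)
{
  uint32_t spi = 185;           /* service path identifier (24 bits) */
  uint8_t si = 254;             /* service index (8 bits) */

  uint32_t service_path_header = (spi << 8) | si;

  printf ("service path header = 0x%08x (SPI %u, SI %u)\n",
          service_path_header, service_path_header >> 8,
          service_path_header & 0xff);
  return 0;
}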

My question is: how can this NSH header be popped without any further
encapsulation? Am I missing some configuration, given that 'set interface mac address'
is not allowed on a VxLAN-GPE interface?

Config on VPP_Switch:

create interface memif id 1 master
set interface state memif0/1 up
set interface ip address memif0/1 1.1.2.2/24

create vxlan-gpe tunnel local 1.1.2.2 remote 1.1.2.1 vni 1000 next-nsh

create nsh map nsp 185 nsi 254 mapped-nsp 185 mapped-nsi 254 nsh_action pop encap-none 1(tx_intf) 3(Rx_intf)

vpp# show interface
              Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
local0                            0     down          0/0/0/0
memif0/0                          1      up          9000/0/0/0
memif0/1                          2      up          9000/0/0/0
nsh_tunnel0                       4      up           0/0/0/0
vxlan_gpe_tunnel0                 3      up           0/0/0/0

Crash decode:
DBGvpp#
Program received signal SIGSEGV, Segmentation fault.
0x00007ffff6f034d6 in ethernet_input_inline_dmac_check (hi=0x7fffbab97880, dmacs=0x7fffb3523eb0,
    dmacs_bad=0x7fffb3523eae "", n_packets=2, ei=0x0, have_sec_dmac=0 '\000')
    at /home/satyendra/vpp_src/vpp/src/vnet/ethernet/node.c:697
697       u64 hwaddr = ei->address.as_u64;
(gdb) bt
#0  0x00007ffff6f034d6 in ethernet_input_inline_dmac_check (hi=0x7fffbab97880, dmacs=0x7fffb3523eb0,
    dmacs_bad=0x7fffb3523eae "", n_packets=2, ei=0x0, have_sec_dmac=0 '\000')
    at /home/satyendra/vpp_src/vpp/src/vnet/ethernet/node.c:697
#1  0x00007ffff6ef34d4 in ethernet_input_inline (vm=0x7fffb65ed680, node=0x7fffb831e540, from=0x7fffbab96c98,
    n_packets=10, variant=ETHERNET_INPUT_VARIANT_ETHERNET)
    at /home/satyendra/vpp_src/vpp/src/vnet/ethernet/node.c:1331
#2  0x00007ffff6ef26be in ethernet_input_node_fn (vm=0x7fffb65ed680, node=0x7fffb831e540, frame=0x7fffbab96c80)
    at /home/satyendra/vpp_src/vpp/src/vnet/ethernet/node.c:1712
#3  0x00007ffff6c64368 in dispatch_node (vm=0x7fffb65ed680, node=0x7fffb831e540, type=VLIB_NODE_TYPE_INTERNAL,
    dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x7fffbab96c80, last_time_stamp=17497321838074365)
    at /home/satyendra/vpp_src/vpp/src/vlib/main.c:1024
#4  0x00007ffff6c64d96 in dispatch_pending_node (vm=0x7fffb65ed680, pending_frame_index=7,
    last_time_stamp=17497321838074365) at /home/satyendra/vpp_src/vpp/src/vlib/main.c:1183
#5  0x00007ffff6c5fa73 in vlib_main_or_worker_loop (vm=0x7fffb65ed680, is_main=1)
    at /home/satyendra/vpp_src/vpp/src/vlib/main.c:1649
#6  0x00007ffff6c61aba in vlib_main_loop (vm=0x7fffb65ed680) at /home/satyendra/vpp_src/vpp/src/vlib/main.c:1777
#7  0x00007ffff6c618a2 in vlib_main (vm=0x7fffb65ed680, input=0x7fffb3524fa8)
    at /home/satyendra/vpp_src/vpp/src/vlib/main.c:2066
#8  0x00007ffff6ce97de in thread0 (arg=140736253056640) at /home/satyendra/vpp_src/vpp/src/vlib/unix/main.c:671
#9  0x00007ffff6a93b98 in clib_calljmp () at /home/satyendra/vpp_src/vpp/src/vppinfra/longjmp.S:123
#10 0x00007fffffffc960 in ?? ()
#11 0x00007ffff6ce930e in vlib_unix_main (argc=27, argv=0x7fffffffde78)
    at /home/satyendra/vpp_src/vpp/src/vlib/unix/main.c:751
#12 0x0000000000407957 in main (argc=27, argv=0x7fffffffde78) at /home/satyendra/vpp_src/vpp/src/vpp/vnet/main.c:336
(gdb)
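
From the backtrace, ei is NULL (ei=0x0) at the faulting line, so the dereference of ei->address.as_u64 faults. Below is a small self-contained C model of that pattern (simplified stand-in types, not the actual VPP source); my guess is that the vxlan_gpe tunnel, being a non-ethernet hardware interface, has no ethernet interface record, which would leave ei NULL when ethernet-input runs the dmac check on packets received from it:

/* Self-contained model of the failing pattern at vnet/ethernet/node.c:697.
 * This is NOT the VPP source; the types and the lookup below are simplified
 * stand-ins to illustrate why ei can be NULL when the RX interface is a
 * vxlan-gpe tunnel rather than a real ethernet device. */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

typedef union
{
  uint8_t bytes[8];
  uint64_t as_u64;
} mac_address_t;

typedef struct
{
  mac_address_t address;        /* interface MAC, only meaningful for ethernet devices */
} ethernet_interface_t;

/* Stand-in for the ethernet interface lookup: returns NULL for interfaces
 * that have no ethernet interface record (e.g. tunnels). */
static ethernet_interface_t *
get_ethernet_interface (int is_ethernet_device, ethernet_interface_t *ei_storage)
{
  return is_ethernet_device ? ei_storage : NULL;
}

static void
dmac_check (ethernet_interface_t *ei)
{
  if (ei == NULL)
    {
      /* Guarded path: skip the MAC comparison when there is no ethernet
       * interface record for this RX interface. */
      printf ("ei is NULL (tunnel interface) - skipping dmac check\n");
      return;
    }
  /* Unguarded, this is the equivalent of node.c:697 and would SIGSEGV
   * whenever ei is NULL. */
  uint64_t hwaddr = ei->address.as_u64;
  printf ("dmac check against hwaddr 0x%llx\n", (unsigned long long) hwaddr);
}

int
main (void)
{
  ethernet_interface_t memif_ei = { .address.as_u64 = 0x02fe001122334455ULL };

  dmac_check (get_ethernet_interface (1, &memif_ei));   /* e.g. memif0/1 */
  dmac_check (get_ethernet_interface (0, NULL));        /* e.g. vxlan_gpe_tunnel0 */
  return 0;
}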

Any help will be appreciated.

Regards,
Satyendra