From: Vijay Kumar <vjkumar2...@gmail.com>
Date: Monday, 15 March 2021 at 17:12
To: Neale Ranns <ne...@graphiant.com>
Cc: vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] Regarding crash in ARP resolution when mGRE is configured
Hi Neale,

Thank you for the response. I will try to apply the patch you shared and let you 
know the results. If the problem persists, I will share the reproduction steps.

In the fd.io mailing lists, I found the piece of mGRE config below. Could you 
tell me what the two addresses that follow the keywords "peer" and "nh" are? 
Does the peer address 2.1.1.3 point to the GRE interface address of the peer 
that connects to VPP, and does the nh address 1.1.1.1 point to the GRE tunnel 
destination address?

Yes. Peer is overlay, nh is underlay.
In that context the tunnel shouldn’t have a /32 prefix assigned, since there 
are other peers on the link.

/neale

create gre tunnel src 1.1.1.2 instance 1 multipoint
set interface state gre1 up
set interface ip addr gre1 2.1.1.2/32
create teib  gre1 peer 2.1.1.3 nh 1.1.1.1
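
To make the overlay/underlay split concrete, here is a purely illustrative 
sketch based on the config above (all addresses are made up, not taken from 
either setup): the overlay prefix on the multipoint tunnel is widened from /32 
to /24 so that additional peers fit on the same link, and each TEIB entry maps 
a peer's overlay address to its underlay next-hop.

create gre tunnel src 1.1.1.2 instance 1 multipoint
set interface state gre1 up
set interface ip addr gre1 2.1.1.2/24
create teib gre1 peer 2.1.1.3 nh 1.1.1.1
create teib gre1 peer 2.1.1.4 nh 1.1.1.5

Here the 2.1.1.x addresses are overlay (tunnel-side) and the 1.1.1.x addresses 
are underlay (transport-side); the second TEIB entry is only there to show how 
further peers would be added.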


Regards,
Vijay N


On Mon, Mar 15, 2021 at 8:22 PM Neale Ranns <ne...@graphiant.com> wrote:
Hi Vijay,

I don’t know why there is an ‘arp-ipv4’ adjacency on a tunnel interface; that 
should never happen. I tried to re-create your issue but failed, though I did 
find some other problems along the way. They are addressed here:
  https://gerrit.fd.io/r/c/vpp/+/31643

Perhaps you could try this out and see if your issue persists. If so, please 
give me the exact steps to reproduce.

Thanks,
neale

From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> on behalf of Vijay Kumar via 
lists.fd.io <vjkumar2003=gmail....@lists.fd.io>
Date: Monday, 15 March 2021 at 09:39
To: vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] Regarding crash in ARP resolution when mGRE is configured
Adding the VPP mGRE config FYI
===========
create gre tunnel src 20.20.99.99 outer-table-id 1 multipoint
set interface state gre0 up
set interface ip addr gre0 2.2.2.2/32
create teib gre0 peer 2.2.2.1 nh 7.7.7.7 nh-table-id 1

On Mon, Mar 15, 2021 at 2:06 PM Vijay Kumar <vjkumar2...@gmail.com> wrote:
Hi,

I have configured an mGRE tunnel on VPP with one peer. In the GRE multipoint 
case, I noticed that "show adjacency" displays an entry of type "arp-ipv4", 
which was NOT there when I configured only GRE P2P. In this situation there is 
a crash in the main thread, pointing to the ARP resolution call stack shown 
below.

Has someone tested mGRE and faced a crash like this?
Please let me know if something is missing from my side.

LOGS (GRE multipoint case)
=========================
vpp# show gre tunnel
[0] instance 0 src 20.20.99.99 dst 0.0.0.0 fib-idx 1 sw-if-idx 18 payload L3 
multi-point
vpp#
vpp# show teib
[0] gre0: 2.2.2.1 via [1]:20.20.99.99/32
vpp#
vpp#
vpp# show adj
[@0] ipv4-glean: loop0: mtu:9000 next:1 ffffffffffffdead000000000806
[@1] ipv4-glean: loop1: mtu:9000 next:2 ffffffffffffdead000000010806
[@2] ipv4 via 0.0.0.0 memif0/0: mtu:65535 next:3
[@3] ipv4 via 0.0.0.0 memif0/1: mtu:65535 next:4
[@4] ipv4 via 0.0.0.0 memif0/2: mtu:65535 next:5
[@5] ipv4 via 0.0.0.0 memif128/0: mtu:65535 next:6
[@6] ipv4 via 0.0.0.0 memif128/1: mtu:65535 next:7
[@7] ipv4 via 0.0.0.0 memif128/2: mtu:65535 next:8
[@8] ipv4-glean: VirtualFuncEthernet0/6/0.1556: mtu:9000 next:3 
fffffffffffffa163e9a2c38810006140806
[@9] ipv4 via 0.0.0.0 memif192/0: mtu:65535 next:9
[@10] ipv4 via 0.0.0.0 memif192/1: mtu:65535 next:10
[@11] ipv4 via 0.0.0.0 memif192/2: mtu:65535 next:11
[@12] ipv4 via 0.0.0.0 memif210/0: mtu:65535 next:12
[@13] ipv4 via 0.0.0.0 memif210/1: mtu:65535 next:13
[@14] ipv4 via 0.0.0.0 memif210/2: mtu:65535 next:14
[@15] arp-ipv4: via 7.7.7.7 gre0
vpp#
vpp#

vpp# show ip fib
7.7.7.7/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:79 buckets:1 uRPF:97 to:[0:0]]
    [0] [@3]: arp-ipv4: via 7.7.7.7 gre0


LOGS (GRE P2P case)
====================
vpp#create gre tunnel src 20.20.99.99 dst 20.20.99.215 outer-table-id 1
vpp#set interface ip address gre0 2.2.2.2/32
vpp#set interface state gre0 up
vpp#ip route add 7.7.7.7/32 via gre0

vpp# show adj
[@0] ipv4-glean: loop0: mtu:9000 next:1 ffffffffffffdead000000000806
[@1] ipv4-glean: loop1: mtu:9000 next:2 ffffffffffffdead000000010806
[@2] ipv4 via 0.0.0.0 memif0/0: mtu:65535 next:3
[@3] ipv4 via 0.0.0.0 memif0/1: mtu:65535 next:4
[@4] ipv4 via 0.0.0.0 memif0/2: mtu:65535 next:5
[@5] ipv4 via 0.0.0.0 memif128/0: mtu:65535 next:6
[@6] ipv4 via 0.0.0.0 memif128/1: mtu:65535 next:7
[@7] ipv4 via 0.0.0.0 memif128/2: mtu:65535 next:8
[@8] ipv4 via 0.0.0.0 memif192/0: mtu:65535 next:9
[@9] ipv4 via 0.0.0.0 memif192/1: mtu:65535 next:10
[@10] ipv4 via 0.0.0.0 memif192/2: mtu:65535 next:11
[@11] ipv4-glean: VirtualFuncEthernet0/7/0.1556: mtu:9000 next:3 
fffffffffffffa163ec2b4f4810006140806
[@12] ipv4 via 0.0.0.0 memif210/0: mtu:65535 next:12
[@13] ipv4 via 0.0.0.0 memif210/1: mtu:65535 next:13
[@14] ipv4 via 0.0.0.0 memif210/2: mtu:65535 next:14
[@15] ipv4 via 0.0.0.0 gre0: mtu:9000 next:15 
4500000000000000fe2fcd6c14146363141463d700000800
  stacked-on entry:77:
    [@3]: ipv4 via 20.20.99.215 VirtualFuncEthernet0/7/0.1556: mtu:1500 next:16 
fa163e4b6b42fa163ec2b4f4810006140800
[@16] ipv4 via 20.20.99.215 VirtualFuncEthernet0/7/0.1556: mtu:1500 next:16 
fa163e4b6b42fa163ec2b4f4810006140800
vpp#
vpp#


Crash call stack during mGRE testing
===========================================
#0  0x00007fc5d0ca3ac1 in pthread_kill () from /lib64/libpthread.so.0
#1  0x00007fc5d12aec98 in outputBacktraceThreads () at 
/usr/src/debug/vpp-20.05.1-2~g190cc47ed_dirty.x86_64/src/vppinfra/vcrash.c:538
#2  vCrash_handler (signo=signo@entry=11) at 
/usr/src/debug/vpp-20.05.1-2~g190cc47ed_dirty.x86_64/src/vppinfra/vcrash.c:718
#3  0x00007fc5d1bc681f in unix_signal_handler (signum=11, si=<optimized out>, 
uc=<optimized out>) at 
/usr/src/debug/vpp-20.05.1-2~g190cc47ed_dirty.x86_64/src/vlib/unix/main.c:193
#4  <signal handler called>
#5  0x00007fc5d2b68a37 in clib_memcpy_fast (n=6, src=0x0, dst=0x1012825e48) at 
/usr/src/debug/vpp-20.05.1-2~g190cc47ed_dirty.x86_64/src/vppinfra/memcpy_sse3.h:213
#6  mac_address_from_bytes (bytes=0x0, mac=0x1012825e48) at 
/usr/src/debug/vpp-20.05.1-2~g190cc47ed_dirty.x86_64/src/vnet/ethernet/mac_address.h:95
#7  ip4_neighbor_probe (dst=<synthetic pointer>, src=<synthetic pointer>, 
adj0=0x7fc59ab12900, vnm=0x7fc5d3215180 <vnet_main>, vm=<optimized out>)
    at 
/usr/src/debug/vpp-20.05.1-2~g190cc47ed_dirty.x86_64/src/vnet/ip-neighbor/ip4_neighbor.h:56
#8  ip4_arp_inline (is_glean=0, frame=<optimized out>, node=0x7fc59ad27640, 
vm=0x7fc5d1de1300 <vlib_global_main>)
    at 
/usr/src/debug/vpp-20.05.1-2~g190cc47ed_dirty.x86_64/src/vnet/ip-neighbor/ip4_neighbor.c:217
#9  ip4_arp_node_fn (vm=0x7fc5d1de1300 <vlib_global_main>, node=0x7fc59ad27640, 
frame=0x7fc59af26cc0)
    at 
/usr/src/debug/vpp-20.05.1-2~g190cc47ed_dirty.x86_64/src/vnet/ip-neighbor/ip4_neighbor.c:242
#10 0x00007fc5d1b81c93 in dispatch_node (last_time_stamp=<optimized out>, 
frame=0x7fc59af26cc0, dispatch_state=VLIB_NODE_STATE_POLLING, 
type=VLIB_NODE_TYPE_INTERNAL, node=0x7fc59ad27640,
    vm=0x7fc5d1de1300 <vlib_global_main>) at 
/usr/src/debug/vpp-20.05.1-2~g190cc47ed_dirty.x86_64/src/vlib/main.c:1271
#11 dispatch_pending_node (vm=vm@entry=0x7fc5d1de1300 <vlib_global_main>, 
pending_frame_index=pending_frame_index@entry=1, last_time_stamp=<optimized 
out>)
    at /usr/src/debug/vpp-20.05.1-2~g190cc47ed_dirty.x86_64/src/vlib/main.c:1460

Regards,
Vijay Kumar N
