Hi Sontu,

You seem to be using a private fork of VPP 18.10; please check whether you can
reproduce the crash with VPP 20.05 or master.
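
If you don't have a newer tree at hand, a vanilla debug build is usually enough for
such a test. A minimal sketch, assuming the standard fd.io build workflow (adjust the
tag as needed):

  $ git clone https://gerrit.fd.io/r/vpp
  $ cd vpp
  $ git checkout v20.05        # or stay on master
  $ make install-dep           # pull build dependencies
  $ make build                 # debug build
  $ make run                   # start the debug image from the build tree

Then replay the same address add/change/delete sequence and check whether the crash
is still reproducible.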

Best
ben

> -----Original Message-----
> From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of sontu
> mazumdar
> Sent: Friday, 19 June 2020 11:43
> To: vpp-dev@lists.fd.io
> Subject: [vpp-dev] VPP crash @fib_entry_delegate_get during ipv6 address
> delete #vpp
> 
> Hi,
> 
> I am seeing a VPP crash during IPv6 address delete; the backtrace is below:
> 
> 
> 
> Thread 1 "vpp_main" received signal SIGSEGV, Segmentation fault.
> 
> fib_entry_delegate_get (fib_entry=fib_entry@entry=0x80214a9e9af4,
> type=type@entry=FIB_ENTRY_DELEGATE_COVERED)
> 
>     at /usr/src/debug/vpp-18.10-
> 35~g7002cae21_dirty.x86_64/src/vnet/fib/fib_entry_delegate.c:51
> 
> 51          return (fib_entry_delegate_find_i(fib_entry, type, NULL));
> 
> (gdb) bt
> 
> #0  fib_entry_delegate_get (fib_entry=fib_entry@entry=0x80214a9e9af4,
> type=type@entry=FIB_ENTRY_DELEGATE_COVERED)
> 
>     at /usr/src/debug/vpp-18.10-
> 35~g7002cae21_dirty.x86_64/src/vnet/fib/fib_entry_delegate.c:51
> 
> #1  0x00007fd983d57634 in fib_entry_cover_untrack
> (cover=cover@entry=0x80214a9e9af4, tracked_index=84)
> 
>     at /usr/src/debug/vpp-18.10-
> 35~g7002cae21_dirty.x86_64/src/vnet/fib/fib_entry_cover.c:51
> 
> #2  0x00007fd983d57060 in fib_entry_src_adj_deactivate
> (src=src@entry=0x7fd94a5cbe34, fib_entry=fib_entry@entry=0x7fd94a9ea94c)
> 
>     at /usr/src/debug/vpp-18.10-
> 35~g7002cae21_dirty.x86_64/src/vnet/fib/fib_entry_src_adj.c:298
> 
> #3  0x00007fd983d57131 in fib_entry_src_adj_cover_change
> (src=0x7fd94a5cbe34, fib_entry=0x7fd94a9ea94c)
> 
>     at /usr/src/debug/vpp-18.10-
> 35~g7002cae21_dirty.x86_64/src/vnet/fib/fib_entry_src_adj.c:340
> 
> #4  0x00007fd983d5365d in fib_entry_cover_changed
> (fib_entry_index=fib_entry_index@entry=50)
> 
>     at /usr/src/debug/vpp-18.10-
> 35~g7002cae21_dirty.x86_64/src/vnet/fib/fib_entry.c:1241
> 
> #5  0x00007fd983d57587 in fib_entry_cover_change_one (cover=<optimized
> out>, covered=50, args=<optimized out>)
> 
>     at /usr/src/debug/vpp-18.10-
> 35~g7002cae21_dirty.x86_64/src/vnet/fib/fib_entry_cover.c:132
> 
> #6  0x00007fd983d57534 in fib_entry_cover_walk_node_ptr (depend=<optimized
> out>, args=<optimized out>)
> 
>     at /usr/src/debug/vpp-18.10-
> 35~g7002cae21_dirty.x86_64/src/vnet/fib/fib_entry_cover.c:80
> 
> #7  0x00007fd983d51deb in fib_node_list_walk (list=<optimized out>,
> fn=fn@entry=0x7fd983d57520 <fib_entry_cover_walk_node_ptr>,
> args=args@entry=0x7fd94585bc80)
> 
>     at /usr/src/debug/vpp-18.10-
> 35~g7002cae21_dirty.x86_64/src/vnet/fib/fib_node_list.c:375
> 
> #8  0x00007fd983d576c0 in fib_entry_cover_walk (cover=0x7fd94a9ea82c,
> walk=walk@entry=0x7fd983d57540 <fib_entry_cover_change_one>,
> args=args@entry=0xffffffff)
> 
>     at /usr/src/debug/vpp-18.10-
> 35~g7002cae21_dirty.x86_64/src/vnet/fib/fib_entry_cover.c:104
> 
> #9  0x00007fd983d576ea in fib_entry_cover_change_notify
> (cover_index=cover_index@entry=46, covered=covered@entry=4294967295)
> 
>     at /usr/src/debug/vpp-18.10-
> 35~g7002cae21_dirty.x86_64/src/vnet/fib/fib_entry_cover.c:158
> 
> #10 0x00007fd983d49ee9 in fib_table_entry_delete_i (fib_index=<optimized
> out>, fib_entry_index=46, prefix=0x7fd94585bd00,
> source=FIB_SOURCE_INTERFACE)
> 
>     at /usr/src/debug/vpp-18.10-
> 35~g7002cae21_dirty.x86_64/src/vnet/fib/fib_table.c:837
> 
> #11 0x00007fd983d4afe4 in fib_table_entry_delete (fib_index=<optimized
> out>, prefix=<optimized out>, source=<optimized out>)
> 
>     at /usr/src/debug/vpp-18.10-
> 35~g7002cae21_dirty.x86_64/src/vnet/fib/fib_table.c:872
> 
> #12 0x00007fd983ac152f in ip6_del_interface_routes
> (fib_index=fib_index@entry=1, address_length=address_length@entry=112,
> address=<optimized out>,
> 
>     im=0x7fd9840d7a60 <ip6_main>) at /usr/src/debug/vpp-18.10-
> 35~g7002cae21_dirty.x86_64/src/vnet/ip/ip6_forward.c:133
> 
> #13 0x00007fd983ac3977 in ip6_add_del_interface_address
> (vm=vm@entry=0x7fd983656380 <vlib_global_main>, sw_if_index=9,
> address=address@entry=0x7fd94a35764e,
> 
>     address_length=112, is_del=1) at /usr/src/debug/vpp-18.10-
> 35~g7002cae21_dirty.x86_64/src/vnet/ip/ip6_forward.c:279
> 
> #14 0x00007fd9839d87b2 in vl_api_sw_interface_add_del_address_t_handler
> (mp=0x7fd94a35763c) at /usr/include/bits/byteswap.h:47
> 
> 
> Steps to reproduce the crash:
> =============================
> 
> The following IPv6 addresses are configured:
> 
> 
> 
> VPP: 2001:db8:0:1:10:164:4:34/112
> 
> Peer Router: 2001:db8:0:1:10:164:4:33/112
> 
> 
> After configuring the addresses, I pinged the peer, which created an ip6 neighbor
> and its fib_entry in VPP.
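> 
> In CLI terms the setup is roughly equivalent to the following (in our case the
> address operations actually go through the binary API, as frame #14 of the
> backtrace shows, but the CLI form should produce the same FIB state):
> 
> vpp# set interface ip address VirtualFuncEthernet0/7/0.1504 2001:db8:0:1:10:164:4:34/112
> vpp# ping 2001:db8:0:1:10:164:4:33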
> 
> 
> 
> vpp# show ip6 neighbor
> 
>     Time                       Address                    Flags     Link
> layer                     Interface
> 
>     404.2853           2001:db8:0:1:10:164:4:33             D
> f8:c0:01:18:9a:c0       VirtualFuncEthernet0/7/0.1504
> 
> vpp# show ip6 fib table 1
> 
> nc1, fib_index:1, flow hash:[src dst sport dport proto ] locks:[src:API:2,
> ]
> 
> 2001:db8:0:1:10:164:4:33/128   <<< ipv6 neighbor fib entry
> 
>   unicast-ip6-chain
> 
>   [@0]: dpo-load-balance: [proto:ip6 index:50 buckets:1 uRPF:62
> to:[2:208]]
> 
>     [0] [@5]: ipv6 via 2001:db8:0:1:10:164:4:33
> VirtualFuncEthernet0/7/0.1504: mtu:1500
> f8c001189ac0fa163ec07038810005e086dd
> 
> 2001:db8:0:1:10:164:4:0/112
> 
>   unicast-ip6-chain
> 
>   [@0]: dpo-load-balance: [proto:ip6 index:44 buckets:1 uRPF:57
> to:[1:104]]
> 
>     [0] [@4]: ipv6-glean: VirtualFuncEthernet0/7/0.1504: mtu:9000
> fffffffffffffa163ec07038810005e086dd
> 
> 2001:db8:0:1:10:164:4:34/128
> 
>   unicast-ip6-chain
> 
>   [@0]: dpo-load-balance: [proto:ip6 index:45 buckets:1 uRPF:58
> to:[7:592]]
> 
>     [0] [@2]: dpo-receive: 2001:db8:0:1:10:164:4:34 on
> VirtualFuncEthernet0/7/0.1504
> 
> vpp#
> 
> Now, on our side, we changed the IPv6 address to be the same as the peer router's,
> i.e. 2001:db8:0:1:10:164:4:33.
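> 
> In CLI terms this re-addressing step is roughly (again, the real trigger in our
> setup is the binary API):
> 
> vpp# set interface ip address del VirtualFuncEthernet0/7/0.1504 2001:db8:0:1:10:164:4:34/112
> vpp# set interface ip address VirtualFuncEthernet0/7/0.1504 2001:db8:0:1:10:164:4:33/112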
> 
> 
> 
> vpp# show ip6 fib table 1
> 
> nc1, fib_index:1, flow hash:[src dst sport dport proto ] locks:[src:API:2,
> ]
> 
> 2001:db8:0:1:10:164:4:0/112
> 
>   unicast-ip6-chain
> 
>   [@0]: dpo-load-balance: [proto:ip6 index:46 buckets:1 uRPF:60
> to:[2:144]]
> 
>     [0] [@4]: ipv6-glean: VirtualFuncEthernet0/7/0.1504: mtu:1500
> fffffffffffffa163ec07038810005e086dd
> 
> 2001:db8:0:1:10:164:4:33/128   <<< same fib entry for both ipv6 neighbor
> and ipv6 address prefix
> 
>   unicast-ip6-chain
> 
>   [@0]: dpo-load-balance: [proto:ip6 index:47 buckets:1 uRPF:61 to:[0:0]]
> 
>     [0] [@2]: dpo-receive: 2001:db8:0:1:10:164:4:33 on
> VirtualFuncEthernet0/7/0.1504
> 
> vpp#
> 
> 
> 
> 
> 
> Here the fib_entry created from the IPv6 neighbor in VPP and the one created from
> the IPv6 address prefix are the same.
> Now, if we delete the IPv6 address, the reported crash is seen in VPP.
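> 
> The delete that triggers the crash is the equivalent of the following (issued in
> our case as a sw_interface_add_del_address request over the binary API):
> 
> vpp# set interface ip address del VirtualFuncEthernet0/7/0.1504 2001:db8:0:1:10:164:4:33/112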
> 
> Even though this is a negative case of configuring the same address as the remote
> peer, the crash should never happen.
> Can someone please help here to fix the crash?
> 
> Regards,
> Sontu
