Hi All,
I am resending this message as I am not sure the earlier one reached the
forum.

Regards,
Sudhir.

On Thu, Jun 17, 2021 at 1:15 PM Sudhir CR via lists.fd.io <sudhir=
rtbrick....@lists.fd.io> wrote:

> Hi All,
> We have been using VPP with our stack for a 6PE solution for some time.
> When we recently enabled BFD in VPP, we started observing an infinite loop
> with the call stack below.
>
> Any help in resolving this issue would be appreciated.
>
> (gdb) thread apply all bt
>
> Thread 3 (Thread 0x7f6d27bfe700 (LWP 449)):
> #0  0x00007f6dc79d4007 in vlib_worker_thread_barrier_check () at
> /home/supervisor/libvpp/src/vlib/threads.h:438
> #1  0x00007f6dc79ce52e in vlib_main_or_worker_loop (vm=0x7f6da5f9b6c0,
> is_main=0) at /home/supervisor/libvpp/src/vlib/main.c:1788
> #2  0x00007f6dc79cdd47 in vlib_worker_loop (vm=0x7f6da5f9b6c0) at
> /home/supervisor/libvpp/src/vlib/main.c:2008
> #3  0x00007f6dc7a2592a in vlib_worker_thread_fn (arg=0x7f6da3593180) at
> /home/supervisor/libvpp/src/vlib/threads.c:1862
> #4  0x00007f6dc724bc34 in clib_calljmp () at
> /home/supervisor/libvpp/src/vppinfra/longjmp.S:123
> #5  0x00007f6d27bfdec0 in ?? ()
> #6  0x00007f6dc7a1dad3 in vlib_worker_thread_bootstrap_fn
> (arg=0x7f6da3593180) at /home/supervisor/libvpp/src/vlib/threads.c:585
> Backtrace stopped: previous frame inner to this frame (corrupt stack?)
>
> Thread 2 (Thread 0x7f6d283ff700 (LWP 448)):
> #0  0x00007f6dc79d3ffe in vlib_worker_thread_barrier_check () at
> /home/supervisor/libvpp/src/vlib/threads.h:438
> #1  0x00007f6dc79ce52e in vlib_main_or_worker_loop (vm=0x7f6da5f9a200,
> is_main=0) at /home/supervisor/libvpp/src/vlib/main.c:1788
> #2  0x00007f6dc79cdd47 in vlib_worker_loop (vm=0x7f6da5f9a200) at
> /home/supervisor/libvpp/src/vlib/main.c:2008
> #3  0x00007f6dc7a2592a in vlib_worker_thread_fn (arg=0x7f6da3593080) at
> /home/supervisor/libvpp/src/vlib/threads.c:1862
> #4  0x00007f6dc724bc34 in clib_calljmp () at
> /home/supervisor/libvpp/src/vppinfra/longjmp.S:123
> #5  0x00007f6d283feec0 in ?? ()
> #6  0x00007f6dc7a1dad3 in vlib_worker_thread_bootstrap_fn
> (arg=0x7f6da3593080) at /home/supervisor/libvpp/src/vlib/threads.c:585
> Backtrace stopped: previous frame inner to this frame (corrupt stack?)
>
> Thread 1 (Thread 0x7f6dd47c2240 (LWP 226)):
> #0  0x00007f6dc723c2dc in hash_header (v=0x7f6da6870e18) at
> /home/supervisor/libvpp/src/vppinfra/hash.h:113
> #1  0x00007f6dc723d329 in get_pair (v=0x7f6da6870e18, i=55) at
> /home/supervisor/libvpp/src/vppinfra/hash.c:58
> #2  0x00007f6dc723c372 in lookup (v=0x7f6da6870e18, key=140108524924744,
> op=GET, new_value=0x0, old_value=0x0)
>     at /home/supervisor/libvpp/src/vppinfra/hash.c:557
> #3  0x00007f6dc723c261 in _hash_get (v=0x7f6da6870e18,
> key=140108524924744) at /home/supervisor/libvpp/src/vppinfra/hash.c:641
> #4  0x00007f6dc8bbb5f4 in adj_nbr_find (nh_proto=FIB_PROTOCOL_IP4,
> link_type=VNET_LINK_MPLS, nh_addr=0x7f6da6866c30, sw_if_index=8)
>     at /home/supervisor/libvpp/src/vnet/adj/adj_nbr.c:124
> #5  0x00007f6dc8bbb661 in adj_nbr_add_or_lock (nh_proto=FIB_PROTOCOL_IP4,
> link_type=VNET_LINK_MPLS, nh_addr=0x7f6da6866c30, sw_if_index=8)
>     at /home/supervisor/libvpp/src/vnet/adj/adj_nbr.c:243
> #6  0x00007f6dc8b904db in fib_path_attached_next_hop_get_adj
> (path=0x7f6da6866c18, link=VNET_LINK_MPLS, dpo=0x7f6d8edbb168)
>     at /home/supervisor/libvpp/src/vnet/fib/fib_path.c:674
> #7  0x00007f6dc8b8ffb0 in fib_path_contribute_forwarding (path_index=58,
> fct=FIB_FORW_CHAIN_TYPE_MPLS_NON_EOS, dpo=0x7f6d8edbb168)
>     at /home/supervisor/libvpp/src/vnet/fib/fib_path.c:2475
> #8  0x00007f6dc8b98399 in fib_path_ext_stack (path_ext=0x7f6da42ab220,
> child_fct=FIB_FORW_CHAIN_TYPE_MPLS_NON_EOS,
>     imp_null_fct=FIB_FORW_CHAIN_TYPE_MPLS_NON_EOS, nhs=0x7f6da8718a80) at
> /home/supervisor/libvpp/src/vnet/fib/fib_path_ext.c:241
> #9  0x00007f6dc8b6e293 in fib_entry_src_collect_forwarding (pl_index=50,
> path_index=58, arg=0x7f6d8edbb380)
>     at /home/supervisor/libvpp/src/vnet/fib/fib_entry_src.c:476
> #10 0x00007f6dc8b8926d in fib_path_list_walk (path_list_index=50,
> func=0x7f6dc8b6e100 <fib_entry_src_collect_forwarding>, ctx=0x7f6d8edbb380)
>     at /home/supervisor/libvpp/src/vnet/fib/fib_path_list.c:1408
> #11 0x00007f6dc8b6da44 in fib_entry_src_mk_lb (fib_entry=0x7f6da6868730,
> esrc=0x7f6da75b11c0, fct=FIB_FORW_CHAIN_TYPE_MPLS_NON_EOS,
>     dpo_lb=0x7f6da6868758) at
> /home/supervisor/libvpp/src/vnet/fib/fib_entry_src.c:576
> #12 0x00007f6dc8b6e6d3 in fib_entry_src_action_install
> (fib_entry=0x7f6da6868730, source=FIB_SOURCE_CLI)
>     at /home/supervisor/libvpp/src/vnet/fib/fib_entry_src.c:706
> #13 0x00007f6dc8b6f5ff in fib_entry_src_action_reactivate
> (fib_entry=0x7f6da6868730, source=FIB_SOURCE_CLI)
>     at /home/supervisor/libvpp/src/vnet/fib/fib_entry_src.c:1222
> #14 0x00007f6dc8b6c5c2 in fib_entry_back_walk_notify (node=0x7f6da6868730,
> ctx=0x7f6d8edbb668)
>     at /home/supervisor/libvpp/src/vnet/fib/fib_entry.c:316
> #15 0x00007f6dc8b648c2 in fib_node_back_walk_one (ptr=0x7f6d8edbb688,
> ctx=0x7f6d8edbb668)
>     at /home/supervisor/libvpp/src/vnet/fib/fib_node.c:161
> #16 0x00007f6dc8b4f36a in fib_walk_advance (fwi=1) at
> /home/supervisor/libvpp/src/vnet/fib/fib_walk.c:368
> #17 0x00007f6dc8b4ff00 in *fib_walk_sync* (parent_type=FIB_NODE_TYPE_PATH_LIST,
> parent_index=50, ctx=0x7f6d8edbb828)
>     at /home/supervisor/libvpp/src/vnet/fib/fib_walk.c:792
> #18 0x00007f6dc8b7f896 in fib_path_list_back_walk (path_list_index=50,
> ctx=0x7f6d8edbb828)
>     at /home/supervisor/libvpp/src/vnet/fib/fib_path_list.c:500
> #19 0x00007f6dc8b977b5 in fib_path_back_walk_notify (node=0x7f6da6866c18,
> ctx=0x7f6d8edbb828)
>     at /home/supervisor/libvpp/src/vnet/fib/fib_path.c:1226
> #20 0x00007f6dc8b648c2 in fib_node_back_walk_one (ptr=0x7f6d8edbb848,
> ctx=0x7f6d8edbb828)
>     at /home/supervisor/libvpp/src/vnet/fib/fib_node.c:161
> #21 0x00007f6dc8b4f36a in fib_walk_advance (fwi=0) at
> /home/supervisor/libvpp/src/vnet/fib/fib_walk.c:368
> #22 0x00007f6dc8b4ff00 in *fib_walk_sync* (parent_type=FIB_NODE_TYPE_ADJ,
> parent_index=4, ctx=0x7f6d8edbb8e0)
>     at /home/supervisor/libvpp/src/vnet/fib/fib_walk.c:792
> #23 0x00007f6dc8be9420 in adj_bfd_update_walk (ai=4) at
> /home/supervisor/libvpp/src/vnet/adj/adj_bfd.c:105
> #24 0x00007f6dc8be8917 in adj_bfd_notify (event=BFD_LISTEN_EVENT_UPDATE,
> session=0x7f6da6871e38)
>     at /home/supervisor/libvpp/src/vnet/adj/adj_bfd.c:198
> #25 0x00007f6dc85f77aa in bfd_notify_listeners (bm=0x7f6dc91b7830
> <bfd_main>, event=BFD_LISTEN_EVENT_UPDATE, bs=0x7f6da6871e38)
>     at /home/supervisor/libvpp/src/vnet/bfd/bfd_main.c:450
> #26 0x00007f6dc85ff9ca in bfd_rpc_notify_listeners_cb (a=0x13008a121) at
> /home/supervisor/libvpp/src/vnet/bfd/bfd_main.c:617
> #27 0x00007f6dc9210754 in vl_api_rpc_call_t_handler (mp=0x13008a108) at
> /home/supervisor/libvpp/src/vlibmemory/vlib_api.c:531
> #28 0x00007f6dc922706f in vl_msg_api_handler_with_vm_node
> (am=0x7f6dc943ad18 <api_global_main>, vlib_rp=0x130027000,
> the_msg=0x13008a108,
>     vm=0x7f6dc7c8dc40 <vlib_global_main>, node=0x7f6da3544300,
> is_private=0 '\000') at /home/supervisor/libvpp/src/vlibapi/api_shared.c:635
> #29 0x00007f6dc91e2c4b in vl_mem_api_handle_rpc (vm=0x7f6dc7c8dc40
> <vlib_global_main>, node=0x7f6da3544300)
>     at /home/supervisor/libvpp/src/vlibmemory/memory_api.c:746
> #30 0x00007f6dc9204797 in vl_api_clnt_process (vm=0x7f6dc7c8dc40
> <vlib_global_main>, node=0x7f6da3544300, f=0x0)
>     at /home/supervisor/libvpp/src/vlibmemory/vlib_api.c:337
> #31 0x00007f6dc79d38ed in vlib_process_bootstrap (_a=140108875679720) at
> /home/supervisor/libvpp/src/vlib/main.c:1464
> #32 0x00007f6dc724bc34 in clib_calljmp () at
> /home/supervisor/libvpp/src/vppinfra/longjmp.S:123
> #33 0x00007f6da3c3c7e0 in ?? ()
> #34 0x00007f6dc79d332f in vlib_process_startup (vm=0x114a3c3c850,
> p=0x2fd93782948b32, f=0x114)
>     at /home/supervisor/libvpp/src/vlib/main.c:1489
>
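> In case it helps with triage, here is a minimal gdb sketch (plain gdb
> commands only; the function and argument names are taken from frames #17
> and #22 above) to check whether fib_walk_sync keeps being re-entered for
> the same parent while the CLI is stuck:
>
> (gdb) break fib_walk_sync
> (gdb) commands
> silent
> print parent_type
> print parent_index
> continue
> end
> (gdb) continue
>
> If the same parent_type/parent_index pairs (FIB_NODE_TYPE_PATH_LIST/50 and
> FIB_NODE_TYPE_ADJ/4 in the stack above) keep printing, the walk is cycling
> between the adjacency and its path-list.
>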
> *VPP version*: 20.09
>
> *Steps to reproduce*:
> ================
> *Device configuration*
>
> *node 1:*
> set interface ip address loop0 192.0.0.2/24
> set interface ip address loop0 192::2/128
>
> set interface ip address memif32321/32321 23.0.0.2/24
>
> ip route add 192::3/128 table 0 via 23.0.0.3 memif32321/32321
> ip route add 192::3/128 table 2 via 23.0.0.3 memif32321/32321
>
> mpls local-label add 20001 eos via ip6-lookup-in-table 0
> mpls local-label add 2003 non-eos via 23.0.0.3 memif32321/32321
> ip4-lookup-in-table 0 out-labels 3
>
> bfd udp session add interface memif32321/32321 local-addr 23.0.0.2
> peer-addr 23.0.0.3 desired-min-tx 400000 required-min-rx 400000 detect-mult
> 3
>
> *node 2:*
> set interface ip address loop0 192.0.0.3/24
> set interface ip address loop0 192::3/128
>
> set interface ip address memif32321/32321 23.0.0.3/24
>
> ip route add 192::2/128 table 0 via 23.0.0.2 memif32321/32321
> ip route add 192::2/128 table 2 via 23.0.0.2 memif32321/32321
>
>
> mpls local-label add 20001 eos via  ip6-lookup-in-table 0
> mpls local-label add 2002 non-eos via 23.0.0.2 memif32321/32321
> ip4-lookup-in-table 0 out-labels 3
>
>
> *bfd udp session add interface memif32321/32321 local-addr 23.0.0.3
> peer-addr 23.0.0.2 desired-min-tx 400000 required-min-rx 400000 detect-mult
> 3  <-- issue is seen after executing this command*
>
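> To catch the very first back-walk triggered by this command, one option
> (just a sketch; the binary and startup.conf paths below are placeholders,
> and adj_bfd_update_walk is frame #23 in the stack above) is to start the
> debug image under gdb, break on adj_bfd_update_walk, and only then enter
> the bfd command at the DBGvpp# prompt:
>
> $ gdb --args ./vpp -c /path/to/startup.conf
> (gdb) break adj_bfd_update_walk
> (gdb) run
>   ... apply the node 2 configuration above, then enter the
>   "bfd udp session add ..." command; gdb stops when the walk begins ...
> (gdb) bt
> (gdb) next
>
> Stepping from there should show which entry keeps re-resolving via the
> same adjacency.
>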
> *FIB tables output on node 2*
>
> ipv4-VRF:0
>  Router Address: 0.0.0.0, fib_index:0, flow hash:[src dst sport dport
> proto ] epoch:0 flags:none locks:[adjacency:1, default-route:1, nat-hi:2, ]
> 0.0.0.0/0
>   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:1 buckets:1 uRPF:0 to:[0:0]]
>     [0] [@0]: dpo-drop ip4
> 0.0.0.0/32
>   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:2 buckets:1 uRPF:1 to:[0:0]]
>     [0] [@0]: dpo-drop ip4
> 23.0.0.0/32
>   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:45 buckets:1 uRPF:45 to:[0:0]]
>     [0] [@0]: dpo-drop ip4
> 23.0.0.2/32
>   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:52 buckets:1 uRPF:48 to:[0:0]
> via:[2:176]]
>     [0] [@5]: ipv4 via 23.0.0.2 memif32321/32321: mtu:9000 next:3
> 7a24276404047a1ce86404040800
> 23.0.0.0/24
>   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:44 buckets:1 uRPF:44 to:[0:0]]
>     [0] [@4]: ipv4-glean: memif32321/32321: mtu:9000 next:2
> ffffffffffff7a1ce86404040806
> 23.0.0.3/32
>   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:47 buckets:1 uRPF:49 to:[2:176]]
>     [0] [@2]: dpo-receive: 23.0.0.3 on memif32321/32321
> 23.0.0.255/32
>   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:46 buckets:1 uRPF:47 to:[0:0]]
>     [0] [@0]: dpo-drop ip4
> 192.0.0.0/32
>   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:37 buckets:1 uRPF:35 to:[0:0]]
>     [0] [@0]: dpo-drop ip4
> 192.0.0.0/24
>   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:36 buckets:1 uRPF:34 to:[0:0]]
>     [0] [@4]: ipv4-glean: loop0: mtu:9000 next:1
> ffffffffffffdead000000000806
> 192.0.0.3/32
>   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:39 buckets:1 uRPF:39 to:[0:0]]
>     [0] [@2]: dpo-receive: 192.0.0.3 on loop0
> 192.0.0.255/32
>   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:38 buckets:1 uRPF:37 to:[0:0]]
>     [0] [@0]: dpo-drop ip4
> 224.0.0.0/4
>   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:4 buckets:1 uRPF:3 to:[0:0]]
>     [0] [@0]: dpo-drop ip4
> 240.0.0.0/4
>   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:3 buckets:1 uRPF:2 to:[0:0]]
>     [0] [@0]: dpo-drop ip4
> 255.255.255.255/32
>   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:5 buckets:1 uRPF:4 to:[0:0]]
>     [0] [@0]: dpo-drop ip4
> ipv4-VRF:1
>  Router Address: 0.0.0.0, fib_index:2, flow hash:[src dst sport dport
> proto ] epoch:0 flags:none locks:[API:1, ]
>
> 0.0.0.0/0
>   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:23 buckets:1 uRPF:20 to:[0:0]]
>     [0] [@0]: dpo-drop ip4
> 0.0.0.0/32
>   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:24 buckets:1 uRPF:21 to:[0:0]]
>     [0] [@0]: dpo-drop ip4
> 224.0.0.0/4
>   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:26 buckets:1 uRPF:23 to:[0:0]]
>     [0] [@0]: dpo-drop ip4
> 240.0.0.0/4
>   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:25 buckets:1 uRPF:22 to:[0:0]]
>     [0] [@0]: dpo-drop ip4
> 255.255.255.255/32
>   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:27 buckets:1 uRPF:24 to:[0:0]]
>     [0] [@0]: dpo-drop ip4
>
> DBGvpp# show ip6 fib
> ipv6-VRF:0
>  Router Address: 0.0.0.0, fib_index:0, flow hash:[src dst sport dport
> proto ] epoch:0 flags:none locks:[recursive-resolution:1, default-route:1, ]
> ::/0
>   unicast-ip6-chain
>   [@0]: dpo-load-balance: [proto:ip6 index:6 buckets:1 uRPF:5 to:[0:0]]
>     [0] [@0]: dpo-drop ip6
> 192::2/128
>   unicast-ip6-chain
>   [@0]: dpo-load-balance: [proto:ip6 index:48 buckets:1 uRPF:52 to:[0:0]]
>     [0] [@5]: ipv6 via 23.0.0.2 memif32321/32321: mtu:9000 next:4
> 7a24276404047a1ce864040486dd
> 192::3/128
>   unicast-ip6-chain
>   [@0]: dpo-load-balance: [proto:ip6 index:43 buckets:1 uRPF:43 to:[0:0]]
>     [0] [@2]: dpo-receive: 192::3 on loop0
> fe80::/10
>   unicast-ip6-chain
>   [@0]: dpo-load-balance: [proto:ip6 index:7 buckets:1 uRPF:6 to:[0:0]]
>     [0] [@14]: ip6-link-local
> ipv6-VRF:2
>  Router Address: 0.0.0.0, fib_index:2, flow hash:[src dst sport dport
> proto ] epoch:0 flags:none locks:[API:1, ]
> ::/0
>   unicast-ip6-chain
>   [@0]: dpo-load-balance: [proto:ip6 index:28 buckets:1 uRPF:25 to:[0:0]]
>     [0] [@0]: dpo-drop ip6
> 192::2/128
>   unicast-ip6-chain
>   [@0]: dpo-load-balance: [proto:ip6 index:49 buckets:1 uRPF:52 to:[0:0]]
>     [0] [@5]: ipv6 via 23.0.0.2 memif32321/32321: mtu:9000 next:4
> 7a24276404047a1ce864040486dd
> fe80::/10
>   unicast-ip6-chain
>   [@0]: dpo-load-balance: [proto:ip6 index:29 buckets:1 uRPF:26 to:[0:0]]
>     [0] [@14]: ip6-link-local
>
> DBGvpp#  show mpls fib
> , fib_index:0 locks:[API:1, ]
> ip4-explicit-null:neos/21 fib:0 index:17 locks:2
>   special refs:1 entry-flags:exclusive,
> src-flags:added,contributing,active,
>     path-list:[23] locks:2 flags:exclusive, uPRF-list:17 len:0 itfs:[]
>       path:[23] pl-index:23 mpls weight=1 pref=0 exclusive:
>  oper-flags:resolved, cfg-flags:exclusive,
>         [@0]: dst-address,unicast lookup in interface's mpls table
>
>  forwarding:   mpls-neos-chain
>   [@0]: dpo-load-balance: [proto:mpls index:20 buckets:1 uRPF:17 to:[0:0]]
>     [0] [@4]: dst-address,unicast lookup in interface's mpls table
> ip4-explicit-null:eos/21 fib:0 index:16 locks:2
>   special refs:1 entry-flags:exclusive,
> src-flags:added,contributing,active,
>     path-list:[22] locks:2 flags:exclusive, uPRF-list:16 len:0 itfs:[]
>       path:[22] pl-index:22 mpls weight=1 pref=0 exclusive:
>  oper-flags:resolved, cfg-flags:exclusive,
>         [@0]: dst-address,unicast lookup in interface's ip4 table
>
>  forwarding:   mpls-eos-chain
>   [@0]: dpo-load-balance: [proto:mpls index:19 buckets:1 uRPF:16 to:[0:0]]
>     [0] [@3]: dst-address,unicast lookup in interface's ip4 table
> router-alert:neos/21 fib:0 index:14 locks:2
>   special refs:1 entry-flags:exclusive,
> src-flags:added,contributing,active,
>     path-list:[20] locks:2 flags:exclusive, uPRF-list:14 len:0 itfs:[]
>       path:[20] pl-index:20 mpls weight=1 pref=0 exclusive:
>  oper-flags:resolved, cfg-flags:exclusive,
>         [@0]: dpo-punt
>
>  forwarding:   mpls-neos-chain
>   [@0]: dpo-load-balance: [proto:mpls index:17 buckets:1 uRPF:14 to:[0:0]]
>     [0] [@2]: dpo-punt
> router-alert:eos/21 fib:0 index:15 locks:2
>   special refs:1 entry-flags:exclusive,
> src-flags:added,contributing,active,
>     path-list:[21] locks:2 flags:exclusive, uPRF-list:15 len:0 itfs:[]
>       path:[21] pl-index:21 mpls weight=1 pref=0 exclusive:
>  oper-flags:resolved, cfg-flags:exclusive,
>         [@0]: dpo-punt
>
>  forwarding:   mpls-eos-chain
>   [@0]: dpo-load-balance: [proto:mpls index:18 buckets:1 uRPF:15 to:[0:0]]
>     [0] [@2]: dpo-punt
> ipv6-explicit-null:neos/21 fib:0 index:19 locks:2
>   special refs:1 entry-flags:exclusive,
> src-flags:added,contributing,active,
>     path-list:[25] locks:2 flags:exclusive, uPRF-list:19 len:0 itfs:[]
>       path:[25] pl-index:25 mpls weight=1 pref=0 exclusive:
>  oper-flags:resolved, cfg-flags:exclusive,
>         [@0]: dst-address,unicast lookup in interface's mpls table
>
>  forwarding:   mpls-neos-chain
>   [@0]: dpo-load-balance: [proto:mpls index:22 buckets:1 uRPF:19 to:[0:0]]
>     [0] [@4]: dst-address,unicast lookup in interface's mpls table
> ipv6-explicit-null:eos/21 fib:0 index:18 locks:2
>   special refs:1 entry-flags:exclusive,
> src-flags:added,contributing,active,
>     path-list:[24] locks:2 flags:exclusive, uPRF-list:18 len:0 itfs:[]
>       path:[24] pl-index:24 mpls weight=1 pref=0 exclusive:
>  oper-flags:resolved, cfg-flags:exclusive,
>         [@0]: dst-address,unicast lookup in interface's ip6 table
>
>  forwarding:   mpls-eos-chain
>   [@0]: dpo-load-balance: [proto:mpls index:21 buckets:1 uRPF:18 to:[0:0]]
>     [0] [@5]: dst-address,unicast lookup in interface's ip6 table
> , fib_index:1 locks:[API:1, ]
> ip4-explicit-null:neos/21 fib:1 index:30 locks:2
>   special refs:1 entry-flags:exclusive,
> src-flags:added,contributing,active,
>     path-list:[36] locks:2 flags:exclusive, uPRF-list:30 len:0 itfs:[]
>       path:[36] pl-index:36 mpls weight=1 pref=0 exclusive:
>  oper-flags:resolved, cfg-flags:exclusive,
>         [@0]: dst-address,unicast lookup in interface's mpls table
>
>  forwarding:   mpls-neos-chain
>   [@0]: dpo-load-balance: [proto:mpls index:33 buckets:1 uRPF:30 to:[0:0]]
>     [0] [@4]: dst-address,unicast lookup in interface's mpls table
> ip4-explicit-null:eos/21 fib:1 index:29 locks:2
>   special refs:1 entry-flags:exclusive,
> src-flags:added,contributing,active,
>     path-list:[35] locks:2 flags:exclusive, uPRF-list:29 len:0 itfs:[]
>       path:[35] pl-index:35 mpls weight=1 pref=0 exclusive:
>  oper-flags:resolved, cfg-flags:exclusive,
>         [@0]: dst-address,unicast lookup in interface's ip4 table
>
>  forwarding:   mpls-eos-chain
>   [@0]: dpo-load-balance: [proto:mpls index:32 buckets:1 uRPF:29 to:[0:0]]
>     [0] [@3]: dst-address,unicast lookup in interface's ip4 table
> router-alert:neos/21 fib:1 index:27 locks:2
>   special refs:1 entry-flags:exclusive,
> src-flags:added,contributing,active,
>     path-list:[33] locks:2 flags:exclusive, uPRF-list:27 len:0 itfs:[]
>       path:[33] pl-index:33 mpls weight=1 pref=0 exclusive:
>  oper-flags:resolved, cfg-flags:exclusive,
>         [@0]: dpo-punt
>
>  forwarding:   mpls-neos-chain
>   [@0]: dpo-load-balance: [proto:mpls index:30 buckets:1 uRPF:27 to:[0:0]]
>     [0] [@2]: dpo-punt
> router-alert:eos/21 fib:1 index:28 locks:2
>   special refs:1 entry-flags:exclusive,
> src-flags:added,contributing,active,
>     path-list:[34] locks:2 flags:exclusive, uPRF-list:28 len:0 itfs:[]
>       path:[34] pl-index:34 mpls weight=1 pref=0 exclusive:
>  oper-flags:resolved, cfg-flags:exclusive,
>         [@0]: dpo-punt
>
>  forwarding:   mpls-eos-chain
>   [@0]: dpo-load-balance: [proto:mpls index:31 buckets:1 uRPF:28 to:[0:0]]
>     [0] [@2]: dpo-punt
> ipv6-explicit-null:neos/21 fib:1 index:32 locks:2
>   special refs:1 entry-flags:exclusive,
> src-flags:added,contributing,active,
>     path-list:[38] locks:2 flags:exclusive, uPRF-list:32 len:0 itfs:[]
>       path:[38] pl-index:38 mpls weight=1 pref=0 exclusive:
>  oper-flags:resolved, cfg-flags:exclusive,
>         [@0]: dst-address,unicast lookup in interface's mpls table
>
>  forwarding:   mpls-neos-chain
>   [@0]: dpo-load-balance: [proto:mpls index:35 buckets:1 uRPF:32 to:[0:0]]
>     [0] [@4]: dst-address,unicast lookup in interface's mpls table
> ipv6-explicit-null:eos/21 fib:1 index:31 locks:2
>   special refs:1 entry-flags:exclusive,
> src-flags:added,contributing,active,
>     path-list:[37] locks:2 flags:exclusive, uPRF-list:31 len:0 itfs:[]
>       path:[37] pl-index:37 mpls weight=1 pref=0 exclusive:
>  oper-flags:resolved, cfg-flags:exclusive,
>         [@0]: dst-address,unicast lookup in interface's ip6 table
>
>  forwarding:   mpls-eos-chain
>   [@0]: dpo-load-balance: [proto:mpls index:34 buckets:1 uRPF:31 to:[0:0]]
>     [0] [@5]: dst-address,unicast lookup in interface's ip6 table
> 2002:neos/21 fib:1 index:48 locks:2
>   CLI refs:1 src-flags:added,contributing,active,
>     path-list:[50] locks:6 flags:shared, uPRF-list:52 len:1 itfs:[8, ]
>       path:[58] pl-index:50 ip4 weight=1 pref=0 attached-nexthop:
>  oper-flags:resolved,
>         23.0.0.2 memif32321/32321
>       [@0]: ipv4 via 23.0.0.2 memif32321/32321: mtu:9000 next:3
> 7a24276404047a1ce86404040800
>     Extensions:
>      path:58 mpls-flags:[no-ip-tll-decr] labels:[[implicit-null pipe ttl:0
> exp:0]]
>  forwarding:   mpls-neos-chain
>   [@0]: dpo-load-balance: [proto:mpls index:51 buckets:1 uRPF:52 to:[0:0]]
>     [0] [@8]: mpls via 23.0.0.2 memif32321/32321: mtu:9000 next:2
> 7a24276404047a1ce86404048847
> 20001:eos/21 fib:1 index:47 locks:2
>   CLI refs:1 src-flags:added,contributing,active,
>     path-list:[52] locks:2 flags:shared, uPRF-list:50 len:0 itfs:[]
>       path:[60] pl-index:52 ip6 weight=1 pref=0 deag:  oper-flags:resolved,
>          fib-index:0
>
>  forwarding:   mpls-eos-chain
>   [@0]: dpo-load-balance: [proto:mpls index:50 buckets:1 uRPF:50 to:[0:0]]
>     [0] [@6]: mpls-disposition:[0]:[rpf-id:-1 ip6, pipe]
>         [@5]: dst-address,unicast lookup in ipv6-VRF:0
>  Router Address: 0.0.0.0
>
> Thanks and Regards,
> Sudhir