Thanks Benoit.
I found the root cause of this in my plugin: I was using the handoff
functions incorrectly.

There is no problem in VPP. After fixing my plugin, the usecase runs as
solidly in 22.06 as it did in 21.06.
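
In case it helps anyone else migrating, below is a rough sketch of the
handoff pattern I ended up with. It is illustrative only:
handoff_node_index and worker_index_for_buffer() are placeholders for
plugin-specific pieces, and the exact header locations and signatures
(e.g. the node argument to vlib_buffer_enqueue_to_thread) should be
checked against the 22.06 tree.

#include <vlib/vlib.h>

/* Plugin-specific placeholders (illustrative only). */
extern u32 handoff_node_index;  /* node that should run on the target worker */
extern u16 worker_index_for_buffer (vlib_main_t *vm, u32 bi);

static u32 my_fq_index;

/* Init: create one frame queue for the handoff target node. */
static clib_error_t *
my_handoff_init (vlib_main_t *vm)
{
  my_fq_index = vlib_frame_queue_main_init (handoff_node_index, 0);
  return 0;
}

VLIB_INIT_FUNCTION (my_handoff_init);

/* Node function: choose a target worker per buffer and enqueue the
 * whole vector; thread_indices[i] must correspond to from[i]. */
static uword
my_handoff_node_fn (vlib_main_t *vm, vlib_node_runtime_t *node,
                    vlib_frame_t *frame)
{
  u32 *from = vlib_frame_vector_args (frame);
  u32 n_packets = frame->n_vectors;
  u16 thread_indices[VLIB_FRAME_SIZE];
  u32 i;

  for (i = 0; i < n_packets; i++)
    thread_indices[i] = worker_index_for_buffer (vm, from[i]);

  return vlib_buffer_enqueue_to_thread (vm, node, my_fq_index, from,
                                        thread_indices, n_packets,
                                        1 /* drop on congestion */);
}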

Regards
-Prashant


On Wed, 12 Oct 2022, 18:26 Benoit Ganne (bganne) via lists.fd.io, <bganne=
cisco....@lists.fd.io> wrote:

> I have not heard of anything like this.
> Can you try to reproduce with the latest master? Do you also have some
> proprietary plugins loaded?
>
> Best
> ben
>
> > -----Original Message-----
> > From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Prashant
> > Upadhyaya
> > Sent: Wednesday, October 12, 2022 11:32
> > To: vpp-dev <vpp-dev@lists.fd.io>
> > Subject: [vpp-dev] Crash in VPP22.06 in ip4_mtrie_16_lookup_step
> >
> > Hi,
> >
> > I am migrating from VPP21.06, where my usecase runs overnight without
> > issues, to VPP22.06, where it hits the following crash within 7 to 8
> > minutes of running.
> > Just wondering if this is a known issue or if anybody else has seen this.
> > In VPP22.06 the crash is not seen with a single worker thread, but it
> > does appear (as shown below) when I run with 2 worker threads.
> > With VPP21.06 the crash is not seen regardless of the number of worker
> > threads.
> >
> > Thread 5 "vpp_wk_1" received signal SIGSEGV, Segmentation fault.
> > [Switching to Thread 0x7fff69156700 (LWP 5408)]
> > 0x00007ffff78327f8 in ip4_mtrie_16_lookup_step
> > (dst_address_byte_index=2, dst_address=0x1025d44ea4,
> > current_leaf=1915695212)
> >     at /home/centos/vpp/src/vnet/ip/ip4_mtrie.h:215
> > 215           return (ply->leaves[dst_address->as_u8[dst_address_byte_index]]);
> > (gdb) bt
> > #0  0x00007ffff78327f8 in ip4_mtrie_16_lookup_step
> > (dst_address_byte_index=2, dst_address=0x1025d44ea4,
> > current_leaf=1915695212)
> >     at /home/centos/vpp/src/vnet/ip/ip4_mtrie.h:215
> > #1  ip4_fib_forwarding_lookup (addr=0x1025d44ea4, fib_index=1) at
> > /home/centos/vpp/src/vnet/fib/ip4_fib.h:146
> > #2  ip4_lookup_inline (frame=0x7fffbc805a80, node=<optimized out>,
> > vm=0x7fffbc72bc40) at /home/centos/vpp/src/vnet/ip/ip4_forward.h:327
> > #3  ip4_lookup_node_fn_skx (vm=0x7fffbc72bc40, node=0x7fffbc7bc400,
> > frame=0x7fffbc805a80) at
> > /home/centos/vpp/src/vnet/ip/ip4_forward.c:101
> > #4  0x00007ffff7ea2a45 in dispatch_node (last_time_stamp=<optimized
> > out>, frame=0x7fffbc805a80, dispatch_state=VLIB_NODE_STATE_POLLING,
> >     type=VLIB_NODE_TYPE_INTERNAL, node=0x7fffbc7bc400,
> > vm=0x7fffbc7bc400) at /home/centos/vpp/src/vlib/main.c:961
> > #5  dispatch_pending_node (vm=vm@entry=0x7fffbc72bc40,
> > pending_frame_index=pending_frame_index@entry=6,
> > last_time_stamp=<optimized out>)
> >     at /home/centos/vpp/src/vlib/main.c:1120
> > #6  0x00007ffff7ea4639 in vlib_main_or_worker_loop (is_main=0,
> > vm=0x7fffbc72bc40, vm@entry=0x7fffb8c96700)
> >     at /home/centos/vpp/src/vlib/main.c:1589
> > #7  vlib_worker_loop (vm=vm@entry=0x7fffbc72bc40) at
> > /home/centos/vpp/src/vlib/main.c:1723
> > #8  0x00007ffff7edea81 in vlib_worker_thread_fn (arg=0x7fffb8cd5640)
> > at /home/centos/vpp/src/vlib/threads.c:1579
> > #9  0x00007ffff7edde33 in vlib_worker_thread_bootstrap_fn
> > (arg=0x7fffb8cd5640) at /home/centos/vpp/src/vlib/threads.c:418
> > #10 0x00007ffff6638ea5 in start_thread (arg=0x7fff69156700) at
> > pthread_create.c:307
> > #11 0x00007ffff5db5b0d in clone () at
> > ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
> >
> > Regards
> > -Prashant
>
> 
>
>