This should not happen. The only reason would be that the SA has a corrupted thread_index... What is the value of vnet_buffer (b[0])->ipsec.thread_index in ipsec_handoff()?
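If you want to confirm that in place, a quick (untested) instrumentation sketch along the lines below could be dropped into the per-buffer loop of ipsec_handoff() in src/vnet/ipsec/ipsec_handoff.c, right where the thread index is read from the buffer metadata. The variable names (b, ti) follow the 20.05 source and may differ slightly in your tree:

    /* Debug sketch, not upstream code: flag an out-of-range handoff target
     * before it is used to index the frame queues. Assumes it sits in the
     * per-buffer loop of ipsec_handoff(), where b walks the buffers and
     * ti the thread_indices array. */
    vlib_thread_main_t *tm = vlib_get_thread_main ();

    ti[0] = vnet_buffer (b[0])->ipsec.thread_index;
    if (PREDICT_FALSE (ti[0] >= tm->n_vlib_mains))
      clib_warning ("corrupted ipsec thread_index %u (n_vlib_mains %u) "
                    "for SA index %u", ti[0], tm->n_vlib_mains,
                    vnet_buffer (b[0])->ipsec.sad_index);

With a single worker, anything other than 0 or 1 there means the SA (or the buffer metadata) has been overwritten, which would match the bogus vlib_worker_index=32592 in your backtrace.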
Best
ben

> -----Original Message-----
> From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Vijay Kumar
> Sent: Thursday, 16 December 2021 15:08
> To: vpp-dev <vpp-dev@lists.fd.io>
> Subject: [vpp-dev] Regarding ipsec_handoff
>
> Hi experts,
>
> We are seeing a crash on our setup; the call stack is below. We have not
> seen this call stack earlier.
>
> The crash is seen when the packet is enqueued to another thread.
>
> Does anyone know why we are hitting the ipsec_handoff() flow even though
> we have only one worker in the cluster?
>
>
> vpp# show threads
> ID     Name        Type      LWP   Sched Policy (Priority)   lcore   Core   Socket   %CPU
> 0      vpp_main              88    other (0)                 1       0      0        0
> 1      vpp_wk_0    workers   95    other (0)                 2       0      0        0
> vpp#
>
>
> Crash backtrace
> ===========
> Thread 3 "vpp_wk_0" received signal SIGSEGV, Segmentation fault.
> [Switching to Thread 0x7f4cce9fe640 (LWP 96)]
> vlib_get_frame_queue_elt (index=32592, frame_queue_index=<optimized out>)
>     at /usr/src/debug/vpp-20.05.1-2~g54d3cdad5_dirty.x86_64/src/vlib/threads.h:560
> 560       new_tail = clib_atomic_add_fetch (&fq->tail, 1);
> (gdb) bt
> #0  vlib_get_frame_queue_elt (index=32592, frame_queue_index=<optimized out>)
>     at /usr/src/debug/vpp-20.05.1-2~g54d3cdad5_dirty.x86_64/src/vlib/threads.h:560
> #1  vlib_get_worker_handoff_queue_elt (handoff_queue_elt_by_worker_index=0x7f50bee27fb0,
>     vlib_worker_index=32592, frame_queue_index=<optimized out>)
>     at /usr/src/debug/vpp-20.05.1-2~g54d3cdad5_dirty.x86_64/src/vlib/threads.h:621
> #2  vlib_buffer_enqueue_to_thread_an (drops=0x0, drop_on_congestion=1, n_packets=1,
>     thread_indices=0x7f50c54c8c40, buffer_indices=0x7f50bdc35390,
>     frame_queue_index=<optimized out>, vm=0x7f50c54ca540)
>     at /usr/src/debug/vpp-20.05.1-2~g54d3cdad5_dirty.x86_64/src/vlib/buffer_node.h:539
> #3  vlib_buffer_enqueue_to_thread (drop_on_congestion=1, n_packets=1,
>     thread_indices=0x7f50c54c8c40, buffer_indices=<optimized out>,
>     frame_queue_index=<optimized out>, vm=0x7f50c54ca540)
>     at /usr/src/debug/vpp-20.05.1-2~g54d3cdad5_dirty.x86_64/src/vlib/buffer_node.h:607
> #4  ipsec_handoff (is_enc=true, fq_index=<optimized out>, frame=0x7f50bdc35380,
>     node=0x7f50c5718680, vm=<optimized out>)
>     at /usr/src/debug/vpp-20.05.1-2~g54d3cdad5_dirty.x86_64/src/vnet/ipsec/ipsec_handoff.c:174
> #5  esp4_encrypt_tun_handoff_fn_skx (vm=<optimized out>, node=0x7f50c5718680,
>     from_frame=0x7f50bdc35380)
>     at /usr/src/debug/vpp-20.05.1-2~g54d3cdad5_dirty.x86_64/src/vnet/ipsec/ipsec_handoff.c:209
> #6  0x00007f50fda3db49 in dispatch_node (last_time_stamp=<optimized out>, frame=0x7f50bdc35380,
>     dispatch_state=VLIB_NODE_STATE_POLLING, type=VLIB_NODE_TYPE_INTERNAL, node=0x7f50c5718680,
>     vm=0x7f50c54ca540) at /usr/src/debug/vpp-20.05.1-2~g54d3cdad5_dirty.x86_64/src/vlib/main.c:1271
> #7  dispatch_pending_node (vm=vm@entry=0x7f50c54ca540,
>     pending_frame_index=pending_frame_index@entry=16, last_time_stamp=<optimized out>)
>     at /usr/src/debug/vpp-20.05.1-2~g54d3cdad5_dirty.x86_64/src/vlib/main.c:1460
> #8  0x00007f50fda3f51f in vlib_main_or_worker_loop (is_main=0, vm=0x7f50c54ca540)
>     at /usr/src/debug/vpp-20.05.1-2~g54d3cdad5_dirty.x86_64/src/vlib/main.c:2016
> #9  vlib_worker_loop (vm=0x7f50c54ca540)
>     at /usr/src/debug/vpp-20.05.1-2~g54d3cdad5_dirty.x86_64/src/vlib/main.c:2150
> #10 0x00007f50fd927470 in clib_calljmp () from /lib64/libvppinfra.so.20.05.1
> #11 0x00007f4cce9fdc80 in ?? ()
> #12 0x00007f50b94deac1 in eal_thread_loop.cold () from /usr/lib/vpp_plugins/dpdk_plugin.so
> #13 0x0000000000000000 in ?? ()
> (gdb)