https://gerrit.fd.io/r/c/vpp/+/27457

It was in that bunch of “catch-up cherry-picks” that I was working on last week.

--a

> On 5 Jun 2020, at 23:36, Dave Barach via lists.fd.io 
> <dbarach=cisco....@lists.fd.io> wrote:
> 
> Dear Chris,
> 
> Whew, that just made my weekend a lot happier. 
> 
> I'll look into why the relevant patch didn't make it back into 19.08 - it 
> will now! - unfortunately "stuff happens..."
> 
> Thanks for confirming... Dave
> 
> -----Original Message-----
> From: Christian Hopps <cho...@chopps.org> 
> Sent: Friday, June 5, 2020 5:09 PM
> To: Dave Barach (dbarach) <dbar...@cisco.com>
> Cc: Christian Hopps <cho...@chopps.org>; vpp-dev <vpp-dev@lists.fd.io>
> Subject: Re: [vpp-dev] Interesting backtrace in 1908
> 
> Bingo.
> 
> In fact, in 19.08 the value is left as 0, which defaults to 15. I took it 
> from 20 down toward 15, starting successfully each time, until I reached 15, 
> which then hit the problem (both with the ARP path and the other one).
> 
> Thanks for the help finding this!
> 
> Chris.
> 
>> On Jun 5, 2020, at 4:52 PM, Dave Barach via lists.fd.io 
>> <dbarach=cisco....@lists.fd.io> wrote:
>> 
>> Hmmm. That begins to smell like an undetected stack overflow. To test that 
>> theory: s/18/20/ below: 
>> 
>> /* *INDENT-OFF* */
>> VLIB_REGISTER_NODE (startup_config_node, static) = {
>>   .function = startup_config_process,
>>   .type = VLIB_NODE_TYPE_PROCESS,
>>   .name = "startup-config-process",
>>   .process_log2_n_stack_bytes = 18,
>> };
>> /* *INDENT-ON* */
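>> 
>> (For scale, if I read the field name right: it's the log2 of the stack 
>> size in bytes, so the 18 above gives a 1 << 18 = 256 KiB process stack, 
>> the suggested s/18/20/ gives 1 MiB, and a value of 0 falls back to the 
>> default of 15, i.e. 32 KiB.)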
>> 
>> It's entirely possible that compiling -O0 blows the stack, especially if you 
>> end up 75 miles deep in fib code.
>> 
>> Dave
>> 
>> -----Original Message-----
>> From: Christian Hopps <cho...@chopps.org>
>> Sent: Friday, June 5, 2020 4:28 PM
>> To: Dave Barach (dbarach) <dbar...@cisco.com>
>> Cc: Christian Hopps <cho...@chopps.org>; vpp-dev <vpp-dev@lists.fd.io>
>> Subject: Re: [vpp-dev] Interesting backtrace in 1908
>> 
>> 
>> 
>>>> On Jun 5, 2020, at 2:10 PM, Dave Barach via lists.fd.io 
>>>> <dbarach=cisco....@lists.fd.io> wrote:
>>> 
>>> Step 1 is to make the silly-looking sibling recursion in 
>>> vlib_node_add_next_with_slot(...) disappear. I’m on it...
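>>> 
>>> A hypothetical sketch of the pattern (names and types invented for 
>>> illustration, not the actual vlib source): the backtrace below shows one 
>>> recursive call per sibling node, i.e. one full stack frame per sibling:
>>> 
>>>   typedef struct node { struct node *sibling; } node_t;
>>> 
>>>   static void
>>>   add_next_with_slot (node_t *n, unsigned next_index, unsigned slot)
>>>   {
>>>     set_next_slot (n, next_index, slot); /* record the next/slot mapping */
>>>     if (n->sibling)                      /* then recurse for the sibling */
>>>       add_next_with_slot (n->sibling, next_index, slot);
>>>   }
>>> 
>>> whereas the flattened form visits the same chain in a single frame:
>>> 
>>>   static void
>>>   add_next_with_slot (node_t *n, unsigned next_index, unsigned slot)
>>>   {
>>>     for (; n; n = n->sibling)
>>>       set_next_slot (n, next_index, slot);
>>>   }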
>>> 
>>> Just to ask, can you repro w/ master/latest?
>> 
>> I will try and do this.
>> 
>> In the meantime I moved the ARP configs to later in my startup config (this 
>> is actually built by a test script) and immediately hit another SIGSEGV in 
>> startup. This one is in infra, reached through my code's initialization, but 
>> also rooted in startup-config processing... It's also in memcpy code, which 
>> is making me suspicious now.
>> 
>> Again, I've changed "-O2" to "-O0" in the cmake vpp.mk package; when I 
>> change it back to -O2 I do not hit either bug. So I'm now wondering if there 
>> is something wrong with doing this: do I need to do something else as 
>> well?
>> 
>> What I'm going for is to not have CLIB_DEBUG defined, but still have a 
>> useful level of debuggability to do RCA on a problem that occurs much later 
>> (millions of packets in).
>> 
>> modified   build-data/platforms/vpp.mk
>> @@ -35,13 +35,21 @@
>>  vpp_debug_TAG_CFLAGS = -O0 -DCLIB_DEBUG $(vpp_common_cflags)
>>  vpp_debug_TAG_CXXFLAGS = -O0 -DCLIB_DEBUG $(vpp_common_cflags)
>>  vpp_debug_TAG_LDFLAGS = -O0 -DCLIB_DEBUG $(vpp_common_cflags)
>> 
>> -vpp_TAG_CFLAGS = -O2 $(vpp_common_cflags)
>> -vpp_TAG_CXXFLAGS = -O2 $(vpp_common_cflags)
>> -vpp_TAG_LDFLAGS = -O2 $(vpp_common_cflags) -pie
>> +vpp_TAG_CFLAGS = -O0 $(vpp_common_cflags)
>> +vpp_TAG_CXXFLAGS = -O0 $(vpp_common_cflags)
>> +vpp_TAG_LDFLAGS = -O0 $(vpp_common_cflags) -pie
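>> 
>> (An untested alternative sketch, not something I've actually run: GCC's 
>> -Og with -g targets exactly this trade-off, keeping most variables 
>> inspectable while producing stack frames much closer to -O2 sizes:
>> 
>> vpp_TAG_CFLAGS = -Og -g -fno-omit-frame-pointer $(vpp_common_cflags)
>> vpp_TAG_CXXFLAGS = -Og -g -fno-omit-frame-pointer $(vpp_common_cflags)
>> vpp_TAG_LDFLAGS = -Og -g $(vpp_common_cflags) -pie
>> 
>> though I haven't verified that VPP builds cleanly with those flags.)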
>> 
>> The new backtrace I'm seeing is:
>> 
>> Thread 1 "vpp_main" received signal SIGSEGV, Segmentation fault.
>> 0x00007ffff5cbe8b1 in clib_mov16 (dst=0x6d986a570000 <error: Cannot 
>> access memory at address 0x6d986a570000>, src=0x2d8f0700006c2d08 
>> <error: Cannot access memory at address 0x2d8f0700006c2d08>) at 
>> /var/build/vpp/src/vppinfra/memcpy_sse3.h:56
>> 
>> (gdb) bt
>> #0  0x00007ffff5cbe8b1 in clib_mov16 (dst=0x6d986a570000 <error: 
>> Cannot access memory at address 0x6d986a570000>, 
>> src=0x2d8f0700006c2d08 <error: Cannot access memory at address 
>> 0x2d8f0700006c2d08>) at /var/build/vpp/src/vppinfra/memcpy_sse3.h:56
>> #1  0x00007ffff5cbe910 in clib_mov32 (dst=0x7fffb90bea90 "", 
>> src=0x7fffafa01fd0 "iptfs-refill-zpool sa_index %d before %d requested 
>> %d head %d tail %d") at /var/build/vpp/src/vppinfra/memcpy_sse3.h:66
>> #2  0x00007ffff5cbe951 in clib_mov64 (dst=0x7fffb90bea90 "", 
>> src=0x7fffafa01fd0 "iptfs-refill-zpool sa_index %d before %d requested 
>> %d head %d tail %d") at /var/build/vpp/src/vppinfra/memcpy_sse3.h:73
>> #3  0x00007ffff5cbed5a in clib_memcpy_fast (dst=0x7fffb90bea90, 
>> src=0x7fffafa01fd0, n=5) at 
>> /var/build/vpp/src/vppinfra/memcpy_sse3.h:273
>> #4  0x00007ffff5cc5e72 in do_percent (_s=0x7fffb8141a58, 
>> fmt=0x7ffff5dc7734 "%s%c", va=0x7fffb8141bc8) at 
>> /var/build/vpp/src/vppinfra/format.c:341
>> #5  0x00007ffff5cc6564 in va_format (s=0x0, fmt=0x7ffff5dc7734 "%s%c", 
>> va=0x7fffb8141bc8) at /var/build/vpp/src/vppinfra/format.c:404
>> #6  0x00007ffff5cc6810 in format (s=0x0, fmt=0x7ffff5dc7734 "%s%c") at 
>> /var/build/vpp/src/vppinfra/format.c:428
>> #7  0x00007ffff5cace7d in elog_event_type_register (em=0x7ffff656c7a8 
>> <vlib_global_main+936>, t=0x7fffb9058300) at 
>> /var/build/vpp/src/vppinfra/elog.c:173
>> #8  0x00007fffaf9b8ba4 in elog_event_data_inline 
>> (cpu_time=3193258542505306, track=0x7fffb9093f98, type=0x7fffafc08880 
>> <e>, em=0x7ffff656c7a8 <vlib_global_main+936>) at 
>> /var/build/vpp/src/vppinfra/elog.h:310
>> #9  elog_data_inline (track=0x7fffb9093f98, type=0x7fffafc08880 <e>, 
>> em=0x7ffff656c7a8 <vlib_global_main+936>) at
>> /var/build/vpp/src/vppinfra/elog.h:435
>> #10 iptfs_refill_zpool (vm=0x7ffff656c400 <vlib_global_main>, 
>> zpool=0x7fffb915a8c0, sa_index=1, payload_size=1470, put=false, 
>> track=0x7fffb9093f98) at 
>> /var/build/vpp/src/plugins/iptfs/iptfs_zpool.c:134
>> #11 0x00007fffaf9b9d3f in iptfs_zpool_alloc (vm=0x7ffff656c400 
>> <vlib_global_main>, queue_size=768, sa_index=1, payload_size=1470, 
>> put=false, track=0x7fffb9093f98) at 
>> /var/build/vpp/src/plugins/iptfs/iptfs_zpool.c:235
>> #12 0x00007fffaf99d73c in iptfs_tfs_data_init (sa_index=1, 
>> conf=0x7fffb91cd7c0) at 
>> /var/build/vpp/src/plugins/iptfs/ipsec_iptfs.c:347
>> #13 0x00007fffaf9a0a09 in iptfs_add_del_sa (sa_index=1, 
>> tfs_config=0x7fffb91cd7c0, is_add=1 '\001') at 
>> /var/build/vpp/src/plugins/iptfs/ipsec_iptfs.c:822
>> #14 0x00007ffff6fff6a8 in ipsec_sa_add_and_lock (id=3221225472, 
>> spi=1112, proto=IPSEC_PROTOCOL_ESP, crypto_alg=IPSEC_CRYPTO_ALG_NONE, 
>> ck=0x7fffb8144cf0, integ_alg=IPSEC_INTEG_ALG_NONE, ik=0x7fffb8144d80, 
>> flags=(IPSEC_SA_FLAG_USE_ESN | IPSEC_SA_FLAG_IS_TUNNEL), _tfs_type=2 
>> '\002', tfs_config=0x7fffb91cd7c0, tx_table_id=0, salt=0, 
>> tun_src=0x7fffb8145485, tun_dst=0x7fffb8145495, 
>> sa_out_index=0x7fffb87dc384) at 
>> /var/build/vpp/src/vnet/ipsec/ipsec_sa.c:217
>> #15 0x00007ffff6feb7ce in ipsec_add_del_tunnel_if_internal 
>> (vnm=0x7ffff7b47e80 <vnet_main>, args=0x7fffb8145480, 
>> sw_if_index_p=0x0) at /var/build/vpp/src/vnet/ipsec/ipsec_if.c:370
>> #16 0x00007ffff6fea044 in ipsec_add_del_tunnel_if_rpc_callback 
>> (a=0x7fffb8145480) at /var/build/vpp/src/vnet/ipsec/ipsec_if.c:210
>> #17 0x00007ffff7ba47c9 in vl_api_rpc_call_main_thread_inline 
>> (force_rpc=0 '\000', data_length=600, data=0x7fffb8145480 "\001", 
>> fp=0x7ffff6fea017 <ipsec_add_del_tunnel_if_rpc_callback>) at 
>> /var/build/vpp/src/vlibmemory/vlib_api.c:571
>> #18 vl_api_rpc_call_main_thread (fp=0x7ffff6fea017 
>> <ipsec_add_del_tunnel_if_rpc_callback>, data=0x7fffb8145480 "\001", 
>> data_length=600) at /var/build/vpp/src/vlibmemory/vlib_api.c:602
>> #19 0x00007ffff6fea06a in ipsec_add_del_tunnel_if 
>> (args=0x7fffb8145480) at /var/build/vpp/src/vnet/ipsec/ipsec_if.c:216
>> #20 0x00007ffff6fde020 in create_ipsec_tunnel_command_fn 
>> (vm=0x7ffff656c400 <vlib_global_main>, input=0x7fffb8146ed0, 
>> cmd=0x7fffb82e29a8) at /var/build/vpp/src/vnet/ipsec/ipsec_cli.c:857
>> #21 0x00007ffff62385f2 in vlib_cli_dispatch_sub_commands 
>> (vm=0x7ffff656c400 <vlib_global_main>, cm=0x7ffff656c630 
>> <vlib_global_main+560>, input=0x7fffb8146ed0, 
>> parent_command_index=772) at /var/build/vpp/src/vlib/cli.c:649
>> #22 0x00007ffff62383a3 in vlib_cli_dispatch_sub_commands 
>> (vm=0x7ffff656c400 <vlib_global_main>, cm=0x7ffff656c630 
>> <vlib_global_main+560>, input=0x7fffb8146ed0, parent_command_index=10) 
>> at /var/build/vpp/src/vlib/cli.c:609
>> #23 0x00007ffff62383a3 in vlib_cli_dispatch_sub_commands 
>> (vm=0x7ffff656c400 <vlib_global_main>, cm=0x7ffff656c630 
>> <vlib_global_main+560>, input=0x7fffb8146ed0, parent_command_index=0) 
>> at /var/build/vpp/src/vlib/cli.c:609
>> #24 0x00007ffff62391c3 in vlib_cli_input (vm=0x7ffff656c400 
>> <vlib_global_main>, input=0x7fffb8146ed0, function=0x0, 
>> function_arg=0) at /var/build/vpp/src/vlib/cli.c:750
>> #25 0x00007ffff6344666 in startup_config_process (vm=0x7ffff656c400 
>> <vlib_global_main>, rt=0x7fffb813e000, f=0x0) at 
>> /var/build/vpp/src/vlib/unix/main.c:367
>> #26 0x00007ffff6297fa0 in vlib_process_bootstrap (_a=140736274672448)
>> at /var/build/vpp/src/vlib/main.c:1472
>> #27 0x00007ffff5ce23d8 in clib_calljmp () from 
>> target:/var/build/vpp/build-root/install-vpp-native/vpp/lib/libvppinfra.so.19.08.2
>> #28 0x00007fffb7a8a7e0 in ?? ()
>> #29 0x00007ffff62981ec in vlib_process_startup (f=0x7fffb7f71c74, 
>> p=0x7fffb813e000, vm=0x7fffb8f498c8) at 
>> /var/build/vpp/src/vlib/main.c:1494
>> #30 dispatch_process (vm=0x7fffb87ef9c0, p=0x30, f=0x30, 
>> last_time_stamp=140736297535488) at 
>> /var/build/vpp/src/vlib/main.c:1539
>> #31 0x00007ffff629fa4b in vlib_main_or_worker_loop (is_main=1, 
>> vm=0x7ffff656c400 <vlib_global_main>) at 
>> /var/build/vpp/src/vlib/main.c:1914
>> #32 vlib_main_loop (vm=0x7ffff656c400 <vlib_global_main>) at 
>> /var/build/vpp/src/vlib/main.c:1931
>> #33 0x00007ffff62a6cef in vlib_main (vm=0x7ffff656c400 
>> <vlib_global_main>, input=0x7fffb7a8bfb0) at 
>> /var/build/vpp/src/vlib/main.c:2148
>> #34 0x00007ffff6345ad6 in thread0 (arg=140737326269440) at 
>> /var/build/vpp/src/vlib/unix/main.c:649
>> #35 0x00007ffff5ce23d8 in clib_calljmp () from 
>> target:/var/build/vpp/build-root/install-vpp-native/vpp/lib/libvppinfra.so.19.08.2
>> #36 0x00007fffffffcf60 in ?? ()
>> #37 0x00007ffff6346b02 in vlib_unix_main (argc=69, 
>> argv=0x7fffffffe4e8) at /var/build/vpp/src/vlib/unix/main.c:719
>> #38 0x000055555555b801 in main (argc=69, argv=0x7fffffffe4e8) at 
>> /var/build/vpp/src/vpp/vnet/main.c:280
>> (gdb) fr 4
>> #4  0x00007ffff5cc5e72 in do_percent (_s=0x7fffb8141a58, fmt=0x7ffff5dc7734 
>> "%s%c", va=0x7fffb8141bc8) at /var/build/vpp/src/vppinfra/format.c:341
>> 341                   vec_add (s, cstring, len);
>> 
>> (gdb) p cstring
>> $13 = 0x7fffafa01fd0 "iptfs-refill-zpool sa_index %d before %d requested %d 
>> head %d tail %d"
>> (gdb) p len
>> $14 = 69
>> (gdb) p len
>> $15 = 69
>> 
>> # I believe the vec_add has just done a _vec_resize b/c of this:
>> (gdb) fr 4
>> #4  0x00007ffff5cc5e72 in do_percent (_s=0x7fffb8141a58, fmt=0x7ffff5dc7734 
>> "%s%c", va=0x7fffb8141bc8) at /var/build/vpp/src/vppinfra/format.c:341
>> 341                   vec_add (s, cstring, len);
>> (gdb) p _s
>> $21 = (u8 **) 0x7fffb8141a58
>> (gdb) p *_s
>> $22 = (u8 *) 0x0
>> (gdb) p s
>> $23 = (u8 *) 0x7fffb90bea90 ""
>> (gdb) p *((vec_header_t *)s - 1)
>> $24 = {len = 69, dlmalloc_header_offset = 0, vector_data = 
>> 0x7fffb90bea90 ""}
>> 
>> So everything should be peachy for clib_memcpy_fast, but something goes 
>> horribly wrong by the time it reaches clib_mov16: the dst/src gdb prints in 
>> frame #0 bear no relation to the perfectly valid pointers seen at frame #1.
>> 
>> #1  0x00007ffff5cbe910 in clib_mov32 (dst=0x7fffb90bea90 "", 
>> src=0x7fffafa01fd0 "iptfs-refill-zpool sa_index %d before %d requested %d 
>> head %d tail %d") at /var/build/vpp/src/vppinfra/memcpy_sse3.h:66
>> 66        clib_mov16 ((u8 *) dst + 0 * 16, (const u8 *) src + 0 * 16);
>> (gdb) p dst
>> $27 = (u8 *) 0x7fffb90bea90 ""
>> (gdb) p src
>> $28 = (const u8 *) 0x7fffafa01fd0 "iptfs-refill-zpool sa_index %d before %d 
>> requested %d head %d tail %d"
>> (gdb) down
>> #0  0x00007ffff5cbe8b1 in clib_mov16 (dst=0x6d986a570000 <error: Cannot 
>> access memory at address 0x6d986a570000>, src=0x2d8f0700006c2d08 <error: 
>> Cannot access memory at address 0x2d8f0700006c2d08>) at 
>> /var/build/vpp/src/vppinfra/memcpy_sse3.h:56
>> 56      {
>> 
>> 
>> Thanks,
>> Chris.
>> 
>>> 
>>> Thanks... Dave
>>> 
>>> From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of 
>>> Christian Hopps
>>> Sent: Friday, June 5, 2020 1:29 PM
>>> To: vpp-dev <vpp-dev@lists.fd.io>
>>> Cc: Christian Hopps <cho...@chopps.org>
>>> Subject: [vpp-dev] Interesting backtrace in 1908
>>> 
>>> I'm wondering if maybe this SIGSEGV/backtrace might be related to the other 
>>> recently reported problem with the FIB and barrier code? The workers are at 
>>> the barrier when the SIGSEGV happens, but maybe they aren't when they need 
>>> to be earlier on?
>>> 
>>> In this case I've compiled w/o CLIB_DEBUG set, but with compiler flags set 
>>> to -O0 instead of -O2 (trying to debug another problem that occurs much 
>>> later).
>>> 
>>> This is being hit (apparently) when my startup config is adding a 
>>> static ARP entry (config included below the backtrace).
>>> 
>>> I've sync'd code to the 2 recent commits past 19.08.02, as well as 
>>> cherry-picking the fix from Dave for the counter-resize issue in the FIB.
>>> 
>>> I can try and put together a more in-depth bug report (or try to RCA it 
>>> myself), but I'm wondering if something might be easily identified from 
>>> this backtrace without doing a bunch more work.
>>> 
>>> Thanks,
>>> Chris.
>>> 
>>> (gdb) info thre
>>> Id   Target Id         Frame
>>> * 1    Thread 83.83 "vpp_main" 0x00007ffff5ccbb11 in clib_memcpy_fast 
>>> (dst=0x1400000000000000, src=0x450000000000e239, n=936751150609465344) at 
>>> /var/build/vpp/src/vppinfra/memcpy_sse3.h:187
>>> 2    Thread 83.86 "eal-intr-thread" 0x00007ffff59a3bb7 in epoll_wait 
>>> (epfd=epfd@entry=15, events=events@entry=0x7fff9e8dbe10, 
>>> maxevents=maxevents@entry=1, timeout=timeout@entry=-1) at 
>>> ../sysdeps/unix/sysv/linux/epoll_wait.c:30
>>> 3    Thread 83.87 "vpp_wk_0" 0x00007ffff6290c90 in 
>>> vlib_worker_thread_barrier_check () at /var/build/vpp/src/vlib/threads.h:430
>>> 4    Thread 83.88 "vpp_wk_1" 0x00007ffff6290c9a in 
>>> vlib_worker_thread_barrier_check () at /var/build/vpp/src/vlib/threads.h:430
>>> 5    Thread 83.89 "vpp_wk_2" 0x00007ffff6290c9f in 
>>> vlib_worker_thread_barrier_check () at /var/build/vpp/src/vlib/threads.h:430
>>> 
>>> (gdb) bt
>>> #0  0x00007ffff5ccbb11 in clib_memcpy_fast (dst=0x1400000000000000, 
>>> src=0x450000000000e239, n=936751150609465344) at
>>> /var/build/vpp/src/vppinfra/memcpy_sse3.h:187
>>> #1  0x00007ffff5cd49a8 in lookup (v=0x7fffb7c793c8, key=615, op=SET, 
>>> new_value=0x7fffb7c04880, old_value=0x0) at
>>> /var/build/vpp/src/vppinfra/hash.c:611
>>> #2  0x00007ffff5cd6217 in _hash_set3 (v=0x7fffb7c793c8, key=615, 
>>> value=0x7fffb7c04880, old_value=0x0) at
>>> /var/build/vpp/src/vppinfra/hash.c:840
>>> #3  0x00007ffff62b0b28 in vlib_node_add_next_with_slot
>>> (vm=0x7ffff656c400 <vlib_global_main>, node_index=522, 
>>> next_node_index=615, slot=3) at /var/build/vpp/src/vlib/node.c:241
>>> #4  0x00007ffff62b102b in vlib_node_add_next_with_slot
>>> (vm=0x7ffff656c400 <vlib_global_main>, node_index=521, 
>>> next_node_index=615, slot=3) at /var/build/vpp/src/vlib/node.c:253
>>> #5  0x00007ffff62b102b in vlib_node_add_next_with_slot
>>> (vm=0x7ffff656c400 <vlib_global_main>, node_index=520, 
>>> next_node_index=615, slot=3) at /var/build/vpp/src/vlib/node.c:253
>>> #6  0x00007ffff62b102b in vlib_node_add_next_with_slot
>>> (vm=0x7ffff656c400 <vlib_global_main>, node_index=519, 
>>> next_node_index=615, slot=3) at /var/build/vpp/src/vlib/node.c:253
>>> #7  0x00007ffff62b102b in vlib_node_add_next_with_slot
>>> (vm=0x7ffff656c400 <vlib_global_main>, node_index=274, 
>>> next_node_index=615, slot=3) at /var/build/vpp/src/vlib/node.c:253
>>> #8  0x00007ffff62b102b in vlib_node_add_next_with_slot
>>> (vm=0x7ffff656c400 <vlib_global_main>, node_index=523, 
>>> next_node_index=615, slot=3) at /var/build/vpp/src/vlib/node.c:253
>>> #9  0x00007ffff76e120a in vlib_node_add_next (next_node=615, 
>>> node=523,
>>> vm=0x7ffff656c400 <vlib_global_main>) at
>>> /var/build/vpp/src/vlib/node_funcs.h:1109
>>> #10 adj_nbr_update_rewrite_internal (adj=0x7fffb821e5c0, 
>>> adj_next_index=IP_LOOKUP_NEXT_REWRITE, this_node=523, next_node=615,
>>> rewrite=0x0) at /var/build/vpp/src/vnet/adj/adj_nbr.c:488
>>> #11 0x00007ffff76e0c34 in adj_nbr_update_rewrite (adj_index=2, 
>>> flags=ADJ_NBR_REWRITE_FLAG_COMPLETE, rewrite=0x7fffb89f4d60 "\002") 
>>> at
>>> /var/build/vpp/src/vnet/adj/adj_nbr.c:314
>>> #12 0x00007ffff6f695b3 in arp_mk_complete (ai=2, e=0x7fffb7f116a8) at
>>> /var/build/vpp/src/vnet/ethernet/arp.c:385
>>> #13 0x00007ffff6f696be in arp_mk_complete_walk (ai=2,
>>> ctx=0x7fffb7f116a8) at /var/build/vpp/src/vnet/ethernet/arp.c:430
>>> #14 0x00007ffff76e15c0 in adj_nbr_walk_nh4 (sw_if_index=1, 
>>> addr=0x7fffb7f116ac, cb=0x7ffff6f69696 <arp_mk_complete_walk>,
>>> ctx=0x7fffb7f116a8) at /var/build/vpp/src/vnet/adj/adj_nbr.c:624
>>> #15 0x00007ffff6f6a436 in arp_update_adjacency (vnm=0x7ffff7b47e80 
>>> <vnet_main>, sw_if_index=1, ai=2) at
>>> /var/build/vpp/src/vnet/ethernet/arp.c:540
>>> #16 0x00007ffff6b30f27 in ethernet_update_adjacency
>>> (vnm=0x7ffff7b47e80 <vnet_main>, sw_if_index=1, ai=2) at
>>> /var/build/vpp/src/vnet/ethernet/interface.c:210
>>> #17 0x00007ffff7706ceb in vnet_update_adjacency_for_sw_interface
>>> (vnm=0x7ffff7b47e80 <vnet_main>, sw_if_index=1, ai=2) at
>>> /var/build/vpp/src/vnet/adj/rewrite.c:187
>>> #18 0x00007ffff76e0b18 in adj_nbr_add_or_lock 
>>> (nh_proto=FIB_PROTOCOL_IP4, link_type=VNET_LINK_IP4, 
>>> nh_addr=0x7fffb821f200, sw_if_index=1) at
>>> /var/build/vpp/src/vnet/adj/adj_nbr.c:252
>>> #19 0x00007ffff76bff09 in fib_path_attached_next_hop_get_adj
>>> (path=0x7fffb821f1e8, link=VNET_LINK_IP4) at
>>> /var/build/vpp/src/vnet/fib/fib_path.c:668
>>> #20 0x00007ffff76bff50 in fib_path_attached_next_hop_set
>>> (path=0x7fffb821f1e8) at /var/build/vpp/src/vnet/fib/fib_path.c:682
>>> #21 0x00007ffff76c44f3 in fib_path_resolve (path_index=18) at
>>> /var/build/vpp/src/vnet/fib/fib_path.c:1916
>>> #22 0x00007ffff76bbaec in fib_path_list_resolve
>>> (path_list=0x7fffb821eb58) at
>>> /var/build/vpp/src/vnet/fib/fib_path_list.c:584
>>> #23 0x00007ffff76bc154 in fib_path_list_create
>>> (flags=FIB_PATH_LIST_FLAG_NONE, rpaths=0x7fffb80677b0) at
>>> /var/build/vpp/src/vnet/fib/fib_path_list.c:751
>>> #24 0x00007ffff76b07f1 in fib_entry_src_adj_path_swap 
>>> (src=0x7fffb7eef7d0, entry=0x7fffb87a7310, 
>>> pl_flags=FIB_PATH_LIST_FLAG_NONE, paths=0x7fffb80677b0) at
>>> /var/build/vpp/src/vnet/fib/fib_entry_src_adj.c:110
>>> #25 0x00007ffff76ad83a in fib_entry_src_action_path_swap 
>>> (fib_entry=0x7fffb87a7310, source=FIB_SOURCE_ADJ, 
>>> flags=FIB_ENTRY_FLAG_ATTACHED, rpaths=0x7fffb80677b0) at
>>> /var/build/vpp/src/vnet/fib/fib_entry_src.c:1660
>>> #26 0x00007ffff76a2b3b in fib_entry_create (fib_index=0, 
>>> prefix=0x7fffb7c06c60, source=FIB_SOURCE_ADJ, 
>>> flags=FIB_ENTRY_FLAG_ATTACHED, paths=0x7fffb80677b0) at
>>> /var/build/vpp/src/vnet/fib/fib_entry.c:747
>>> #27 0x00007ffff768921a in fib_table_entry_path_add2 (fib_index=0, 
>>> prefix=0x7fffb7c06c60, source=FIB_SOURCE_ADJ, 
>>> flags=FIB_ENTRY_FLAG_ATTACHED, rpaths=0x7fffb80677b0) at
>>> /var/build/vpp/src/vnet/fib/fib_table.c:599
>>> #28 0x00007ffff768901a in fib_table_entry_path_add (fib_index=0, 
>>> prefix=0x7fffb7c06c60, source=FIB_SOURCE_ADJ, 
>>> flags=FIB_ENTRY_FLAG_ATTACHED, next_hop_proto=DPO_PROTO_IP4, 
>>> next_hop=0x7fffb7c06c64, next_hop_sw_if_index=1, 
>>> next_hop_fib_index=4294967295, next_hop_weight=1, 
>>> next_hop_labels=0x0,
>>> path_flags=FIB_ROUTE_PATH_FLAG_NONE) at
>>> /var/build/vpp/src/vnet/fib/fib_table.c:559
>>> #29 0x00007ffff6f6a592 in arp_adj_fib_add (e=0x7fffb7f116a8,
>>> fib_index=0) at /var/build/vpp/src/vnet/ethernet/arp.c:639
>>> #30 0x00007ffff6f6bf99 in vnet_arp_set_ip4_over_ethernet_internal
>>> (vnm=0x7ffff7b47e80 <vnet_main>, args=0x7fffb7c08040) at
>>> /var/build/vpp/src/vnet/ethernet/arp.c:766
>>> #31 0x00007ffff6f7b672 in set_ip4_over_ethernet_rpc_callback
>>> (a=0x7fffb7c08040) at /var/build/vpp/src/vnet/ethernet/arp.c:2305
>>> #32 0x00007ffff7ba47c9 in vl_api_rpc_call_main_thread_inline
>>> (force_rpc=0 '\000', data_length=20, data=0x7fffb7c08040 "\001", 
>>> fp=0x7ffff6f7b5be <set_ip4_over_ethernet_rpc_callback>) at
>>> /var/build/vpp/src/vlibmemory/vlib_api.c:571
>>> #33 vl_api_rpc_call_main_thread (fp=0x7ffff6f7b5be 
>>> <set_ip4_over_ethernet_rpc_callback>, data=0x7fffb7c08040 "\001",
>>> data_length=20) at /var/build/vpp/src/vlibmemory/vlib_api.c:602
>>> #34 0x00007ffff6f7bff3 in vnet_arp_set_ip4_over_ethernet
>>> (vnm=0x7ffff7b47e80 <vnet_main>, sw_if_index=1, a=0x7fffb7c086ce,
>>> flags=IP_NEIGHBOR_FLAG_STATIC) at
>>> /var/build/vpp/src/vnet/ethernet/arp.c:2393
>>> #35 0x00007ffff6f7d4e0 in ip_arp_add_del_command_fn 
>>> (vm=0x7ffff656c400 <vlib_global_main>, input=0x7fffb7c09ed0, 
>>> cmd=0x7fffb7db2c98) at
>>> /var/build/vpp/src/vnet/ethernet/arp.c:2604
>>> #36 0x00007ffff62385f2 in vlib_cli_dispatch_sub_commands
>>> (vm=0x7ffff656c400 <vlib_global_main>, cm=0x7ffff656c630 
>>> <vlib_global_main+560>, input=0x7fffb7c09ed0,
>>> parent_command_index=812) at /var/build/vpp/src/vlib/cli.c:649
>>> #37 0x00007ffff62383a3 in vlib_cli_dispatch_sub_commands
>>> (vm=0x7ffff656c400 <vlib_global_main>, cm=0x7ffff656c630 
>>> <vlib_global_main+560>, input=0x7fffb7c09ed0, 
>>> parent_command_index=33) at /var/build/vpp/src/vlib/cli.c:609
>>> #38 0x00007ffff62383a3 in vlib_cli_dispatch_sub_commands
>>> (vm=0x7ffff656c400 <vlib_global_main>, cm=0x7ffff656c630 
>>> <vlib_global_main+560>, input=0x7fffb7c09ed0, parent_command_index=0) 
>>> at /var/build/vpp/src/vlib/cli.c:609
>>> #39 0x00007ffff62391c3 in vlib_cli_input (vm=0x7ffff656c400 
>>> <vlib_global_main>, input=0x7fffb7c09ed0, function=0x0,
>>> function_arg=0) at /var/build/vpp/src/vlib/cli.c:750
>>> #40 0x00007ffff6344666 in startup_config_process (vm=0x7ffff656c400 
>>> <vlib_global_main>, rt=0x7fffb7c01000, f=0x0) at
>>> /var/build/vpp/src/vlib/unix/main.c:367
>>> #41 0x00007ffff6297fa0 in vlib_process_bootstrap (_a=140736270031680) 
>>> at /var/build/vpp/src/vlib/main.c:1472
>>> #42 0x00007ffff5ce23d8 in clib_calljmp () from
>>> target:/var/build/vpp/build-root/install-vpp-native/vpp/lib/libvppinfra.so.19.08.2
>>> #43 0x00007fffb761d7e0 in ?? ()
>>> #44 0x00007ffff62981ec in vlib_process_startup (f=0x7fffb7a7f004, 
>>> p=0x7fffb7c01000, vm=0x7fffb850e978) at
>>> /var/build/vpp/src/vlib/main.c:1494
>>> #45 dispatch_process (vm=0x7fffb8237760, p=0x30, f=0x30,
>>> last_time_stamp=140736282631376) at
>>> /var/build/vpp/src/vlib/main.c:1539
>>> #46 0x00007ffff629fa4b in vlib_main_or_worker_loop (is_main=1,
>>> vm=0x7ffff656c400 <vlib_global_main>) at
>>> /var/build/vpp/src/vlib/main.c:1914
>>> #47 vlib_main_loop (vm=0x7ffff656c400 <vlib_global_main>) at
>>> /var/build/vpp/src/vlib/main.c:1931
>>> #48 0x00007ffff62a6cef in vlib_main (vm=0x7ffff656c400 
>>> <vlib_global_main>, input=0x7fffb761efb0) at
>>> /var/build/vpp/src/vlib/main.c:2148
>>> #49 0x00007ffff6345ad6 in thread0 (arg=140737326269440) at
>>> /var/build/vpp/src/vlib/unix/main.c:649
>>> #50 0x00007ffff5ce23d8 in clib_calljmp () from
>>> target:/var/build/vpp/build-root/install-vpp-native/vpp/lib/libvppinfra.so.19.08.2
>>> #51 0x00007fffffffcf60 in ?? ()
>>> #52 0x00007ffff6346b02 in vlib_unix_main (argc=69,
>>> argv=0x7fffffffe4e8) at /var/build/vpp/src/vlib/unix/main.c:719
>>> #53 0x000055555555b801 in main (argc=69, argv=0x7fffffffe4e8) at
>>> /var/build/vpp/src/vpp/vnet/main.c:280
>>> 
>>> 
>>> Config:
>>> 
>>> set interface rx-placement UnknownEthernet0 worker 0
>>> set int state UnknownEthernet0 up
>>> set int ip address UnknownEthernet0 11.11.11.11/24
>>> 
>>> set interface rx-placement UnknownEthernet1 worker 1
>>> set int state UnknownEthernet1 up
>>> set int ip address UnknownEthernet1 13.13.13.11/24
>>> 
>>> set ip arp UnknownEthernet0 11.11.11.253 02:00:0b:00:00:fd static
>>> set ip arp UnknownEthernet1 13.13.13.12 02:42:0d:0d:0d:0b static
>>> 
>>> ...there's no more arp config after this, so it's not getting past here...
>>> 
>> 
>> 
> 
> 