Hi Gabor,

I will look into it and get back to you. Meanwhile, could you run the same
test with a debug build and post the results? A core dump would also help.
Please also post your startup.conf file.
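
For reference, something along the following lines should produce a debug image
and a usable backtrace from the core dump (the clone path and the core file
location below are only examples; adjust them to your environment):

  # build a debug image from the VPP source tree
  git clone https://gerrit.fd.io/r/vpp && cd vpp
  make install-dep
  make build    # debug build; binaries land under build-root/install-vpp_debug-native/vpp/bin

  # allow core dumps, rerun the test, then open the core in gdb
  ulimit -c unlimited
  gdb build-root/install-vpp_debug-native/vpp/bin/vpp /path/to/core
  (gdb) thread apply all bt

Enabling full-coredump in the unix section of startup.conf can also help; that
file normally lives at /etc/vpp/startup.conf.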

Best regards,
Filip Varga


On Wed, 9 Nov 2022 at 7:50, Gabor LENCSE <len...@hit.bme.hu> wrote:

> Dear VPP Developers,
>
> I am a researcher and I would like to benchmark the performance of the
> stateful NAT64 implementation of FD.io VPP.
>
> Unfortunately, VPP crashed with a segmentation fault.
>
> Some details:
>
> I used two Dell PowerEdge R430 servers as the Tester and the DUT (Device
> Under Test), connected to each other by direct cables between two 10GbE
> interfaces of each server. On the DUT, I used Debian Linux 10.13 with the
> 4.19.0-20-amd64 kernel, and the FD.io VPP version was 22.06. The following
> packages were installed: libvppinfra, vpp, vpp-plugin-core, vpp-plugin-dpdk.
>
> I used the following commands to set up Stateful NAT64:
>
> root@p109:~/DUT-settings# cat set-vpp
> vppctl set interface state TenGigabitEthernet5/0/0 up
> vppctl set interface state TenGigabitEthernet5/0/1 up
> vppctl set interface ip address TenGigabitEthernet5/0/0 2001:2::1/64
> vppctl set interface ip address TenGigabitEthernet5/0/1 198.19.0.1/24
> vppctl ip route add 2001:2::/64 via 2001:2::1 TenGigabitEthernet5/0/0
> vppctl ip route add 198.19.0.0/24 via 198.19.0.1 TenGigabitEthernet5/0/1
> vppctl set ip neighbor static TenGigabitEthernet5/0/0 2001:2::2 a0:36:9f:74:73:64
> vppctl set ip neighbor static TenGigabitEthernet5/0/1 198.19.0.2 a0:36:9f:74:73:66
> vppctl set interface nat64 in TenGigabitEthernet5/0/0
> vppctl set interface nat64 out TenGigabitEthernet5/0/1
> vppctl nat64 add prefix 64:ff9b::/96
> vppctl nat64 add pool address 198.19.0.1
>
> As for VPP, first I used two workers, but then I also tried without
> workers, using only the main core. Unfortunately, VPP crashed in both
> cases, but with somewhat different messages in the syslog. (Previously I
> had tested both setups with IPv6 packet forwarding, and they worked with
> excellent performance.)
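>
> (For reference, the number of workers is selected in startup.conf; a
> two-worker setup corresponds to a cpu stanza roughly like the one below,
> where the core numbers are only an example, and the main-core-only case
> simply omits the corelist-workers line.)
>
> cpu {
>   main-core 1
>   corelist-workers 2-3
> }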
>
> The error messages in the syslog when I used two workers:
>
> Nov  7 16:32:02 p109 vnet[2479]: received signal SIGSEGV, PC 0x7fa86f138168, faulting address 0x4f8
> Nov  7 16:32:02 p109 vnet[2479]: #0  0x00007fa8b2158137 0x7fa8b2158137
> Nov  7 16:32:02 p109 vnet[2479]: #1  0x00007fa8b2086730 0x7fa8b2086730
> Nov  7 16:32:02 p109 vnet[2479]: #2  0x00007fa86f138168 0x7fa86f138168
> Nov  7 16:32:02 p109 vnet[2479]: #3  0x00007fa86f11d228 0x7fa86f11d228
> Nov  7 16:32:02 p109 vnet[2479]: #4  0x00007fa8b20fbe62 0x7fa8b20fbe62
> Nov  7 16:32:02 p109 vnet[2479]: #5  0x00007fa8b20fda4f vlib_worker_loop + 0x5ff
> Nov  7 16:32:02 p109 vnet[2479]: #6  0x00007fa8b2135e79 vlib_worker_thread_fn + 0xa9
> Nov  7 16:32:02 p109 vnet[2479]: #7  0x00007fa8b2135290 vlib_worker_thread_bootstrap_fn + 0x50
> Nov  7 16:32:02 p109 vnet[2479]: #8  0x00007fa8b207bfa3 start_thread + 0xf3
> Nov  7 16:32:02 p109 vnet[2479]: #9  0x00007fa8b1d75eff clone + 0x3f
> Nov  7 16:32:02 p109 systemd[1]: vpp.service: Main process exited, code=killed, status=6/ABRT
>
> The error messages in the syslog when I used only the main core:
>
> Nov  7 16:48:57 p109 vnet[2606]: received signal SIGSEGV, PC 0x7fbe1d24a168, faulting address 0x1a8
> Nov  7 16:48:57 p109 vnet[2606]: #0  0x00007fbe6026a137 0x7fbe6026a137
> Nov  7 16:48:57 p109 vnet[2606]: #1  0x00007fbe60198730 0x7fbe60198730
> Nov  7 16:48:57 p109 vnet[2606]: #2  0x00007fbe1d24a168 0x7fbe1d24a168
> Nov  7 16:48:57 p109 vnet[2606]: #3  0x00007fbe1d22f228 0x7fbe1d22f228
> Nov  7 16:48:57 p109 vnet[2606]: #4  0x00007fbe6020de62 0x7fbe6020de62
> Nov  7 16:48:57 p109 vnet[2606]: #5  0x00007fbe602127d1 vlib_main + 0xd41
> Nov  7 16:48:57 p109 vnet[2606]: #6  0x00007fbe6026906a 0x7fbe6026906a
> Nov  7 16:48:57 p109 vnet[2606]: #7  0x00007fbe60169964 0x7fbe60169964
> Nov  7 16:48:57 p109 systemd[1]: vpp.service: Main process exited, code=killed, status=6/ABRT
>
> Since I had started with a rather high load the first time, I suspected that
> I had exhausted some sort of resource, so I tried a much lower load, but the
> same thing happened even when I sent only a single packet.
>
> I used siitperf as Tester: https://github.com/lencsegabor/siitperf
>
> And I followed this methodology:
> https://datatracker.ietf.org/doc/html/draft-ietf-bmwg-benchmarking-stateful
>
> Previously my tests were successful with the following stateful NAT64
> implementations:
> - Jool
> - tayga+iptables
> - OpenBSD PF
>
> Could you please help me find out why VPP crashes, and how I could make it work?
>
> Thank you very much in advance for your help!
>
> Best regards,
>
> Gábor Lencse