Hi Tuanjie,
VPP 20.01 is no longer supported.
Please upgrade to either stable/2009 (20.09 LTS) or master (21.01).
Thanks,
-daw-
On 12/30/20 3:10 AM, tuanjie,li wrote:
Hello,
We are running VPP 20.01 with vpp-agent on an Azure CentOS 7.5 VM, and we
hit a segmentation fault when using vpp-agent to set the interface MTU.
Here is the backtrace:
Program received signal SIGSEGV, Segmentation fault.
malloc_get_numa_socket () at /mnt/resource/vpp-20.01/build-root/build-vpp_debug-native/external/dpdk-19.08/lib/librte_eal/common/malloc_heap.h:20
20      unsigned socket_id = rte_socket_id();
Missing separate debuginfos, use: debuginfo-install libibverbs-22.4-5.el7.x86_64
(gdb) backtrace
#0  malloc_get_numa_socket () at /mnt/resource/vpp-20.01/build-root/build-vpp_debug-native/external/dpdk-19.08/lib/librte_eal/common/malloc_heap.h:20
#1  0x00007fffacce4a75 in malloc_heap_alloc (type=0x0, size=144, socket_arg=-1, flags=0, align=64, bound=0, contig=false) at /mnt/resource/vpp-20.01/build-root/build-vpp_debug-native/external/dpdk-19.08/lib/librte_eal/common/malloc_heap.c:705
#2  0x00007fffaccd92ca in rte_malloc_socket (type=0x0, size=144, align=64, socket_arg=-1) at /mnt/resource/vpp-20.01/build-root/build-vpp_debug-native/external/dpdk-19.08/lib/librte_eal/common/rte_malloc.c:59
#3  0x00007fffaccd9325 in rte_zmalloc_socket (type=0x0, size=144, align=64, socket=-1) at /mnt/resource/vpp-20.01/build-root/build-vpp_debug-native/external/dpdk-19.08/lib/librte_eal/common/rte_malloc.c:78
#4  0x00007fffaccd9376 in rte_zmalloc (type=0x0, size=144, align=64) at /mnt/resource/vpp-20.01/build-root/build-vpp_debug-native/external/dpdk-19.08/lib/librte_eal/common/rte_malloc.c:98
#5  0x00007fffad44bb5e in fs_rx_queue_setup (dev=0x7fffb18eb940 <rte_eth_devices+16512>, rx_queue_id=0, nb_rx_desc=1024, socket_id=4294967295, rx_conf=0x7fffbc6029f0, mb_pool=0x7fe3000f7ec0) at /mnt/resource/vpp-20.01/build-root/build-vpp_debug-native/external/dpdk-19.08/drivers/net/failsafe/failsafe_ops.c:435
#6  0x00007fffaccfd9ab in rte_eth_rx_queue_setup (port_id=1, rx_queue_id=0, nb_rx_desc=1024, socket_id=4294967295, rx_conf=0x7fffbc602a68, mp=0x7fe3000f7ec0) at /mnt/resource/vpp-20.01/build-root/build-vpp_debug-native/external/dpdk-19.08/lib/librte_ethdev/rte_ethdev.c:1686
#7  0x00007fffb0f76a23 in dpdk_device_setup (xd=0x7fffb43c96c0) at /mnt/resource/vpp-20.01/src/plugins/dpdk/device/common.c:128
#8  0x00007fffb0facda5 in dpdk_flag_change (vnm=0x7ffff7b6a560 <vnet_main>, hi=0x7fffb41fddd8, flags=2) at /mnt/resource/vpp-20.01/src/plugins/dpdk/device/init.c:137
#9  0x00007ffff6cd81a9 in ethernet_set_flags (vnm=0x7ffff7b6a560 <vnet_main>, hw_if_index=1, flags=2) at /mnt/resource/vpp-20.01/src/vnet/ethernet/interface.c:385
#10 0x00007ffff6c70543 in vnet_hw_interface_set_mtu (vnm=0x7ffff7b6a560 <vnet_main>, hw_if_index=1, mtu=1500) at /mnt/resource/vpp-20.01/src/vnet/interface.c:721
#11 0x00007ffff6c7f60e in vl_api_hw_interface_set_mtu_t_handler (mp=0x7fffb4429bc0) at /mnt/resource/vpp-20.01/src/vnet/interface_api.c:150
#12 0x00007ffff7bc8709 in msg_handler_internal (am=0x7ffff7dd9ca0 <api_main>, the_msg=0x7fffb4429bc0, trace_it=1, do_it=1, free_it=0) at /mnt/resource/vpp-20.01/src/vlibapi/api_shared.c:479
#13 0x00007ffff7bc8f7c in vl_msg_api_socket_handler (the_msg=0x7fffb4429bc0) at /mnt/resource/vpp-20.01/src/vlibapi/api_shared.c:732
#14 0x00007ffff7ba7070 in vl_socket_process_api_msg (uf=0x7fffb42317e8, rp=0x7fffb4325b60, input_v=0x7fffb4429bb0 "") at /mnt/resource/vpp-20.01/src/vlibmemory/socket_api.c:201
#15 0x00007ffff7bb3d8a in vl_api_clnt_process (vm=0x7ffff66b4e00 <vlib_global_main>, node=0x7fffbc5fa000, f=0x0) at /mnt/resource/vpp-20.01/src/vlibmemory/vlib_api.c:389
#16 0x00007ffff6413994 in vlib_process_bootstrap (_a=140736348621792) at /mnt/resource/vpp-20.01/src/vlib/main.c:1468
#17 0x00007ffff587c954 in clib_calljmp () from /mnt/resource/vpp-20.01/build-root/install-vpp_debug-native/vpp/lib/libvppinfra.so.20.01
#18 0x00007fffbc110bb0 in ?? ()
#19 0x00007ffff6413a9c in vlib_process_startup (vm=0xffffffffffffffff, p=0xf600000000, f=0x7fffbc5fa000) at /mnt/resource/vpp-20.01/src/vlib/main.c:1490
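A note on reading this trace: the socket_id=4294967295 in frames #5 and #6 is simply -1 (DPDK's SOCKET_ID_ANY sentinel) reinterpreted through an unsigned 32-bit parameter, matching the socket_arg=-1 in frames #1-#3. A minimal illustration of that reinterpretation (not DPDK code, just the arithmetic):

```shell
# socket_id=4294967295 in frames #5/#6 is SOCKET_ID_ANY (-1) viewed
# as an unsigned 32-bit value:
printf 'socket_id = %u\n' $(( -1 & 0xFFFFFFFF ))
```

So the allocation itself is being asked for "any socket"; per frame #0, the crash fires at malloc_heap.h:20 while DPDK is resolving the calling thread's NUMA socket via rte_socket_id().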
My configuration details are below.
NIC information:
[root@vm1 vpp-20.01]# lspci | grep Mellanox
0001:00:02.0 Ethernet controller: Mellanox Technologies
MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]
0002:00:02.0 Ethernet controller: Mellanox Technologies
MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]
[root@vm1 vpp-20.01]# ethtool -i eth3
driver: mlx4_en
version: 4.0-0
firmware-version: 2.43.7028
expansion-rom-version:
bus-info: 0002:00:02.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: yes
[root@vm1 vpp-20.01]#
[root@vm1 vpp-20.01]# ethtool -i eth1
driver: hv_netvsc
version:
firmware-version: N/A
expansion-rom-version:
bus-info:
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
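Since this is an Azure VM, it may also be worth checking whether the kernel exposes any NUMA locality for the VF at all. A small diagnostic (the sysfs path is built from the bus-info shown above; Azure guests often report -1 here, which would line up with the socket_arg=-1 in the backtrace):

```shell
# Ask the kernel which NUMA node the Mellanox VF sits on; -1 means
# no locality is exposed. Falls back to -1 if the attribute is absent.
node=$(cat /sys/bus/pci/devices/0002:00:02.0/numa_node 2>/dev/null || echo -1)
echo "numa_node=${node}"
```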
VPP configuration:
dpdk {
  dev 0002:00:02.0
  vdev net_vdev_netvsc0,iface=eth1
}
vpp# show interface
              Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
FailsafeEthernet1                 1     down      9000/0/0/0
local0                            0     down         0/0/0/0
The segmentation fault is triggered when we use vpp-agent to set the interface IP address and MTU:
docker exec etcd etcdctl put
/vnf-agent/01000035/config/vpp/v2/interfaces/FailsafeEthernet1
'{"name":"FailsafeEthernet1","type":"DPDK","enabled":true,"ip_addresses":["10.0.4.5/24"],"mtu":"1500"}'
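To narrow down whether vpp-agent is a factor, one option is to drive the same change from the VPP CLI directly on the affected VM (standard vppctl commands; the values are copied from the agent config above):

```shell
# Apply the same IP and MTU via vppctl, bypassing vpp-agent entirely.
# If this also segfaults, the problem is in the VPP/DPDK queue
# re-setup path rather than in the agent.
vppctl set interface ip address FailsafeEthernet1 10.0.4.5/24
vppctl set interface mtu 1500 FailsafeEthernet1
```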
I would really appreciate any advice.
Thanks.
Tuanjie Li.
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#18454): https://lists.fd.io/g/vpp-dev/message/18454
-=-=-=-=-=-=-=-=-=-=-=-