https://bugs.dpdk.org/show_bug.cgi?id=301
Bug ID: 301
Summary: DPDK driver is crashing for Mellanox-5 NIC card
Product: DPDK
Version: 19.05
Hardware: x86
OS: Linux
Status: CONFIRMED
Severity: critical
Priority: Normal
Component: ethdev
Assignee: dev@dpdk.org
Reporter: ullas-d.b...@hpe.com
Target Milestone: ---

Created attachment 44
  --> https://bugs.dpdk.org/attachment.cgi?id=44&action=edit
Back trace for core dump file

Hardware Details:
_________________________
Server: ProLiant DL380 Gen10 (Processor: Intel(R) Xeon(R) Gold 6126 CPU @ 2.60GHz)
OS Version: Red Hat Enterprise Linux Server release 7.6 (Maipo)
Kernel version: 3.10.0-957.el7.x86_64

# lspci | grep -i ether
12:00.0 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx]
12:00.1 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx]

# lspci | grep Mellanox | awk '{print $1}' | xargs -i -r mstvpd {}
ID: HPE Eth 10/25Gb 2p 640SFP28 Adptr
PN: 817751-001
EC: G-5903
SN: ACA91800ML
V0: PCIe GEN3 x8 10/25Gb 15W
V2: 5918
V4: B883038282D0
V5: 0G
VA: HP:V2=MFG:V3=FW_VER:V4=MAC:V5=PCAR
VB: HPE ConnectX-4 Lx SFP28

# ethtool -i ens1f1
driver: mlx5_core
version: 4.6-1.0.1
firmware-version: 14.23.8036 (HP_2420110034)
expansion-rom-version:
bus-info: 0000:12:00.1
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: yes

Problem Description:
_________________________
We are running our DPDK application with the latest Mellanox NIC card, which has
10 Gbps of VLAN capacity. The application runs in PMD mode, receiving and
transmitting RTP packets. After some time the application crashes and generates
a core dump file. We have attached a file with the back trace for the core dump.
We also see the same crash with DPDK 18.11. The issue is not seen when running
with an Intel I350 NIC card.

Crash back trace snippet:

Program terminated with signal 11, Segmentation fault.
#0  rte_atomic16_read (v=0x6666666666666678)
    at /home/dpdk-stable-18.08.1/x86_64-native-linuxapp-gcc/include/generic/rte_atomic.h:258
258             return v->cnt;

Missing separate debuginfos, use: debuginfo-install glibc-2.17-260.el7.x86_64
  libgcc-4.8.5-36.el7.x86_64 libibverbs-46mlnx1-1.46101.x86_64
  libmnl-1.0.3-7.el7.x86_64 libnl3-3.2.28-4.el7.x86_64
  libpcap-1.5.3-11.el7.x86_64 libstdc++-4.8.5-36.el7.x86_64
  numactl-libs-2.0.9-7.el7.x86_64

(gdb) bt
#0  rte_atomic16_read (v=0x6666666666666678)
    at /home/dpdk-stable-18.08.1/x86_64-native-linuxapp-gcc/include/generic/rte_atomic.h:258
#1  rte_mbuf_refcnt_read (m=0x6666666666666666)
    at /home/dpdk-stable-18.08.1/x86_64-native-linuxapp-gcc/include/rte_mbuf.h:820
#2  rte_pktmbuf_prefree_seg (m=0x6666666666666666)
    at /home/dpdk-stable-18.08.1/x86_64-native-linuxapp-gcc/include/rte_mbuf.h:1638
#3  mlx5_tx_complete (txq=<optimized out>)
    at /home/dpdk-stable-18.08.1/drivers/net/mlx5/mlx5_rxtx.h:577
#4  mlx5_tx_burst (dpdk_txq=<optimized out>, pkts=0x7ffd96e5cce0, pkts_n=1)
    at /home/dpdk-stable-18.08.1/drivers/net/mlx5/mlx5_rxtx.c:505
#5  0x000000000047d04a in rte_eth_tx_burst (queue_id=0, nb_pkts=<optimized out>,
    tx_pkts=0x7ffd96e5cce0, port_id=<optimized out>)
    at /usr/local/include/dpdk/rte_ethdev.h:4101
#6  RteSendPacket (m=0x7f31ec5eab40, port=<optimized out>)
    at /home/dpdk-stable-18.08.1/examples/ocmp-dpdk/rte-iflib.c:105
#7  PMD () at /home/dpdk-stable-18.08.1/examples/ocmp-dpdk/rte-iflib.c:612
#8  0x00000000004d4e73 in rte_eal_mp_remote_launch (f=f@entry=0x47a3a0 <DISPATCHER>,
    arg=arg@entry=0x0, call_master=call_master@entry=CALL_MASTER)
    at /home/dpdk-stable-18.08.1/lib/librte_eal/common/eal_common_launch.c:62
#9  0x000000000047e3a4 in RteMainLoop ()
    at /home/dpdk-stable-18.08.1/examples/ocmp-dpdk/rte-iflib.c:630
#10 0x000000000046f95e in main (argc=<optimized out>, argv=<optimized out>)
    at /home/dpdk-stable-18.08.1/examples/ocmp-dpdk/dpdk-app.cc:377

Occurrence frequency:
________________
Always

--
You are receiving this mail because:
You are the assignee for the bug.
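
Note on the trace: the mbuf pointer 0x6666666666666666 in frames #1-#2 is a
repeating byte pattern, which suggests mlx5_tx_complete() read an mbuf whose
memory had already been freed and overwritten, not a driver-allocated object.
A common application-side cause is freeing or reusing an mbuf after it has been
handed to rte_eth_tx_burst(), which transfers ownership of successfully queued
packets to the PMD. The sketch below (the helper name send_one is hypothetical;
RteSendPacket in frame #6 is the reporter's own wrapper) shows the expected
ownership handling; this is an illustration under those assumptions, not the
reporter's code:

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Hypothetical helper illustrating tx ownership rules:
     * rte_eth_tx_burst() takes ownership of every packet it queues;
     * the caller must free only packets that were NOT queued and must
     * never touch a queued mbuf again. */
    static void
    send_one(uint16_t port_id, uint16_t queue_id, struct rte_mbuf *m)
    {
        uint16_t sent = rte_eth_tx_burst(port_id, queue_id, &m, 1);

        if (sent == 0) {
            /* Not queued: the caller still owns the mbuf and must
             * free it to avoid a mempool leak. */
            rte_pktmbuf_free(m);
            return;
        }
        /* Queued: the PMD now owns 'm'. Freeing or dereferencing it
         * here would later corrupt the refcount read performed by
         * mlx5_tx_complete(), as seen in the attached back trace. */
    }

Another possible cause worth ruling out, given rte_eal_mp_remote_launch() in
frame #8: if multiple lcores call rte_eth_tx_burst() on the same queue
(queue_id=0 in frame #5) without external synchronization, the tx queue state
can be corrupted, since DPDK tx queues are not thread-safe. The Intel I350's
igb PMD may simply not dereference the stale mbuf in the same way, which would
explain why the crash only reproduces on mlx5.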