Error while compiling DPDK with VPP 23.02

2023-04-26 Thread Chinmaya Agarwal
Hi,

While compiling DPDK for VPP v23.02 on CentOS 8, we saw the compilation error below:

In file included from ../src-dpdk/drivers/common/mlx5/mlx5_common_mr.c:14:
../src-dpdk/drivers/common/mlx5/linux/mlx5_glue.h:15:10: fatal error: infiniband/mlx5dv.h: No such file or directory
 #include <infiniband/mlx5dv.h>
          ^~~~~~~~~~~~~~~~~~~~~
compilation terminated.
ninja: build stopped: subcommand failed.
Could not rebuild .
make[3]: *** [packages/dpdk.mk:218: 
/opt/vpp/build/external/rpm/tmp/.dpdk.install.ok] Error 255
make[3]: Leaving directory '/opt/vpp/build/external'
error: Bad exit status from /var/tmp/rpm-tmp.whpNsT (%install)


RPM build errors:
Bad exit status from /var/tmp/rpm-tmp.whpNsT (%install)
make[2]: *** [Makefile:113: vpp-ext-deps-23.02-8.x86_64.rpm] Error 1
make[2]: Leaving directory '/opt/vpp/build/external'
make[1]: *** [Makefile:125: install-rpm] Error 2
make[1]: Leaving directory '/opt/vpp/build/external'
make: *** [Makefile:627: install-ext-deps] Error 2

The build is not able to find "infiniband/mlx5dv.h".

What could be the reason for this error? Is there a package that might be missing and needs to be installed, or are we missing something else here?
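For what it's worth, on CentOS 8 this header is normally shipped by the rdma-core-devel package (package name taken from the RHEL 8 / CentOS 8 repositories; worth confirming with `dnf provides` on your system). A sketch of the check and fix:

```shell
# Confirm which package ships the missing header (assumption: rdma-core-devel)
dnf provides '*/infiniband/mlx5dv.h'
sudo dnf install -y rdma-core-devel

# Quick probe: can the compiler actually see the header now?
echo '#include <infiniband/mlx5dv.h>' | gcc -fsyntax-only -x c - \
  && echo "mlx5dv.h found"
```

After the header is visible, re-run `make install-ext-deps` so the vpp-ext-deps RPM rebuild picks it up.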

Thanks and Regards,
Chinmaya Agarwal.
DISCLAIMER: This electronic message and all of its contents, contains 
information which is privileged, confidential or otherwise protected from 
disclosure. The information contained in this electronic mail transmission is 
intended for use only by the individual or entity to which it is addressed. If 
you are not the intended recipient or may have received this electronic mail 
transmission in error, please notify the sender immediately and delete / 
destroy all copies of this electronic mail transmission without disclosing, 
copying, distributing, forwarding, printing or retaining any part of it. Hughes 
Systique accepts no responsibility for loss or damage arising from the use of 
the information transmitted by this email including damage from virus.


VPP crash observed in DPDK for MLX5 driver

2023-06-05 Thread Chinmaya Agarwal
Hi,

We are using VPP with DPDK and the MLX5 driver, and we see a random VPP crash when traffic is flowing through the interface, with the dump below.

May 28 21:53:25 j3da1vmstm01 vnet[2280]: received signal SIGSEGV, PC 
0x7f45eab17b5a, faulting address 0x0
May 28 21:53:25 j3da1vmstm01 vnet[2280]: #0  0x7f4671de2f0b 0x7f4671de2f0b
May 28 21:53:25 j3da1vmstm01 vnet[2280]: #1  0x7f467171cc20 0x7f467171cc20
May 28 21:53:25 j3da1vmstm01 vnet[2280]: #2  0x7f45eab17b5a mlx5_rx_burst + 
0xfa
May 28 21:53:25 j3da1vmstm01 vnet[2280]: #3  0x7f45eae3e111 
dpdk_input_node_fn_skx + 0x1d1
May 28 21:53:25 j3da1vmstm01 vnet[2280]: #4  0x7f4671d9367a vlib_main + 
0xb2a
May 28 21:53:25 j3da1vmstm01 vnet[2280]: #5  0x7f4671de1f56 0x7f4671de1f56
May 28 21:53:25 j3da1vmstm01 vnet[2280]: #6  0x7f46712f1358 0x7f46712f1358

Any suggestions as to what could be the reason for this issue?
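In case it helps others reproduce: the anonymous frames (#0, #1, #5, #6) make this trace hard to act on, so symbolizing a core dump is a useful first step. A sketch, assuming VPP was installed from RPMs with debuginfo available and a core file was captured; package names and the core path are illustrative assumptions:

```shell
# Illustrative: install debug symbols, then symbolize the captured core.
# Adjust package names and the core file path for your system.
sudo dnf debuginfo-install -y vpp vpp-plugins
gdb /usr/bin/vpp /var/lib/systemd/coredump/core.vnet.2280 \
    -batch -ex 'bt full' -ex 'info sharedlibrary mlx5'
```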

Thanks and Regards,
Chinmaya Agarwal.


Error seen in DPDK's library function on bringing MLX5 interface UP

2022-09-28 Thread Chinmaya Agarwal
host vnet[179586]: dpdk: Interface 
HundredGigabitEthernetb/0/0 error -2: Unknown error -2
Sep 29 01:49:02 localhost vnet[179586]: interface: sw_set_flags_helper: 
dpdk_interface_admin_up_down: Interface start failed
Sep 29 01:49:02 localhost vnet[179586]: dpdk: mlx5_net: port 0 starting device
Sep 29 01:49:02 localhost vnet[179586]: dpdk: mlx5_net: port 0 Rx queues number 
update: 1 -> 1
Sep 29 01:49:02 localhost vnet[179586]: dpdk: mlx5_net: port 0 Tx queue 0 
allocated and configured 1024 WRs
Sep 29 01:49:02 localhost vnet[179586]: dpdk: mlx5_net: Port 0 txq 0 updated 
with 0x7ef300622368.
Sep 29 01:49:02 localhost vnet[179586]: dpdk: mlx5_net: Port 0 
device_attr.max_qp_wr is 32768.
Sep 29 01:49:02 localhost vnet[179586]: dpdk: mlx5_net: Port 0 
device_attr.max_sge is 30.
Sep 29 01:49:02 localhost vnet[179586]: dpdk: mlx5_common: 
mr_ctrl(0x7ef30061e744): flushed, cur_gen=0
Sep 29 01:49:02 localhost vnet[179586]: dpdk: mlx5_common: Mempool vpp pool 0 
is not registered
Sep 29 01:49:02 localhost vnet[179586]: dpdk: mlx5_net: port 0 Rx queue 0 
freeing 1024 WRs
Sep 29 01:49:02 localhost vnet[179586]: dpdk: mlx5_net: port 0 Rx queue 
allocation failed: No such file or directory
Sep 29 01:49:02 localhost vnet[179586]: dpdk: mlx5_net: port 0 Tx queue 0 
freeing WRs
Sep 29 01:49:02 localhost vnet[179586]: dpdk: mlx5_common: freeing B-tree 
0x7ef30061e7f4 with table 0x7ef30061d280

Thanks and Regards,
Chinmaya Agarwal.


Issue seen in DPDK behavior for virtio driver

2022-07-25 Thread Chinmaya Agarwal
Hi,

We are running VPP v22.02 and DPDK v21.11.0 on a CentOS 8 VM. We are facing an issue where, if we configure an SRv6 policy on VPP with a SID list of 4 SIDs, we don't see packets coming out of the VPP interface; and if we configure 5 SIDs, again no packets come out of the interface, and in addition VPP crashes after some time. We analyzed the crash dump, and the crash appears to happen in the DPDK library, in the virtio driver code path. The interface we are using has the virtio driver associated with it. Below is the backtrace from the core dump:

Thread 1 "vpp_main" received signal SIGSEGV, Segmentation fault.
0x7fe67f1cf256 in virtio_update_packet_stats () from /usr/lib/vpp_plugins//dpdk_plugin.so
(gdb) bt
#0  0x7fe67f1cf256 in virtio_update_packet_stats () from /usr/lib/vpp_plugins//dpdk_plugin.so
#1  0x7fe67f1d6646 in virtio_xmit_pkts () from /usr/lib/vpp_plugins//dpdk_plugin.so
#2  0x7fe67f42d60e in rte_eth_tx_burst (nb_pkts=<optimized out>, tx_pkts=0x7fe6862afc00, queue_id=<optimized out>, port_id=<optimized out>)
    at /opt/vpp/external/x86_64/include/rte_ethdev.h:5680
#3  tx_burst_vector_internal (n_left=1, mb=0x7fe6862afc00, xd=<optimized out>, vm=<optimized out>)
    at /usr/src/debug/vpp-22.02.0-35~ge3c583654.x86_64/src/plugins/dpdk/device/device.c:175
#4  dpdk_device_class_tx_fn_hsw (vm=<optimized out>, node=<optimized out>, f=<optimized out>)
    at /usr/src/debug/vpp-22.02.0-35~ge3c583654.x86_64/src/plugins/dpdk/device/device.c:435
#5  0x7fe6c66bd802 in dispatch_node (last_time_stamp=<optimized out>, frame=<optimized out>,
    dispatch_state=VLIB_NODE_STATE_POLLING, type=VLIB_NODE_TYPE_INTERNAL, node=0x7fe685e93c00, vm=0x7fe68547a680)
    at /usr/src/debug/vpp-22.02.0-35~ge3c583654.x86_64/src/vlib/main.c:975
#6  dispatch_pending_node (vm=vm@entry=0x7fe68547a680, pending_frame_index=pending_frame_index@entry=10,
    last_time_stamp=<optimized out>) at /usr/src/debug/vpp-22.02.0-35~ge3c583654.x86_64/src/vlib/main.c:1134
#7  0x7fe6c66c1ebf in vlib_main_or_worker_loop (is_main=1, vm=<optimized out>)
    at /usr/src/debug/vpp-22.02.0-35~ge3c583654.x86_64/src/vlib/main.c:1600
#8  vlib_main_loop (vm=<optimized out>) at /usr/src/debug/vpp-22.02.0-35~ge3c583654.x86_64/src/vlib/main.c:1728
#9  vlib_main (vm=<optimized out>, vm@entry=0x7fe68547a680, input=input@entry=0x7fe675df5fa0)
    at /usr/src/debug/vpp-22.02.0-35~ge3c583654.x86_64/src/vlib/main.c:2017
#10 0x7fe6c670cc86 in thread0 (arg=140628055271040)
    at /usr/src/debug/vpp-22.02.0-35~ge3c583654.x86_64/src/vlib/unix/main.c:671
#11 0x7fe6c5c29388 in clib_calljmp () at /usr/src/debug/vpp-22.02.0-35~ge3c583654.x86_64/src/vppinfra/longjmp.S:123
#12 0x7ffcdf319c80 in ?? ()
#13 0x7fe6c670e210 in vlib_unix_main (argc=<optimized out>, argv=<optimized out>)
    at /usr/src/debug/vpp-22.02.0-35~ge3c583654.x86_64/src/vlib/unix/main.c:751
#14 0x in ?? ()
#15 0x0001a53c5137 in ?? ()

We repeated the above test with an interface bound to the ixgbe driver; there we do not see this issue, and packets come out of the interface with the correct SID list.
What could be the possible reason for this issue? Can we modify any parameter at the DPDK code level as part of debugging it?
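We can't tell the root cause from the trace alone, but one pattern consistent with "4 SIDs fails, 5 SIDs crashes" is encapsulation growth: per RFC 8754 the SRH adds 8 fixed bytes plus 16 bytes per SID, on top of a 40-byte outer IPv6 header, so each extra SID grows every frame by 16 bytes and can push packets into multi-segment (chained) mbufs, a transmit path the virtio PMD exercises differently than ixgbe. A small sketch of that arithmetic (pure illustration, no DPDK API involved):

```python
SRH_FIXED = 8      # RFC 8754: fixed part of the Segment Routing Header, bytes
SID_LEN = 16       # each SID is a 128-bit IPv6 address
OUTER_IPV6 = 40    # outer IPv6 header added by the SRv6 encapsulation

def srv6_overhead(num_sids: int) -> int:
    """Bytes added to the inner packet by IPv6 + SRH encapsulation."""
    return OUTER_IPV6 + SRH_FIXED + SID_LEN * num_sids

for sids in (4, 5):
    print(f"{sids} SIDs -> {srv6_overhead(sids)} bytes of encap overhead")
# 4 SIDs -> 112 bytes, 5 SIDs -> 128 bytes
```

If frames near MTU size are involved, comparing `show buffers` / mbuf segment counts between the 4-SID and 5-SID cases might confirm or rule this out.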

Also, we tried subscribing to the dpdk-dev mailing list (using the same email address), but our subscription request is still pending. This issue is a blocker for us, which is why we are sending it by mail.

Thanks and Regards,
Chinmaya Agarwal.


Re: Issue seen in DPDK behavior for virtio driver

2022-07-28 Thread Chinmaya Agarwal
Hi,

Can we have some pointers on how to debug this issue?
Also, is there a way to increase logging for the DPDK plugin?
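On the logging question, one approach is to raise DPDK's own log verbosity from VPP's startup.conf. This is a sketch only; whether the dpdk { } stanza accepts log-level is an assumption to verify against the dpdk plugin documentation for your VPP release:

```
dpdk {
  ## assumption: supported by the dpdk plugin in this VPP release
  log-level debug
}
```

When reproducing outside VPP (e.g. with testpmd), DPDK's dynamic log types let you narrow the same thing to the driver in question via the EAL option --log-level=pmd.net.virtio.*:debug.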

Thanks and Regards,
Chinmaya Agarwal.

From: Chinmaya Agarwal
Sent: Monday, July 25, 2022 1:55 PM
To: dev@dpdk.org 
Subject: Issue seen in DPDK behavior for virtio driver
