Hi,

> -----Original Message-----
> From: Slava Ovsiienko <viachesl...@nvidia.com>
> Sent: Friday, November 10, 2023 11:50 AM
> To: dev@dpdk.org
> Cc: Raslan Darawsheh <rasl...@nvidia.com>; Matan Azrad <ma...@nvidia.com>;
> Suanming Mou <suanmi...@nvidia.com>; sta...@dpdk.org
> Subject: [PATCH 1/1] net/mlx5: fix inline data length for multisegment packets
>
> If the packet data length exceeds the configured limit for the packet to be
> inlined in the queue descriptor, the driver checks whether the hardware
> requires minimal data inline, or whether the VLAN insertion offload is
> requested but not supported in hardware (meaning the VLAN insertion has to
> be done in software together with data inline). The driver then scans the
> mbuf chain to find the minimal number of segments that provides the data
> needed for the minimal inline.
>
> The inline data length calculation for the first segment did not account
> for the VLAN header being inserted, which could lead to a segmentation
> fault during the mbuf chain scan, for example for this packet:
>
> packet:
>   mbuf0 pkt_len = 288, data_len = 156
>   mbuf1 pkt_len = 132, data_len = 132
>
> txq->inlen_send = 290
>
> The driver tried to reach the inlen_send inline data length without the
> missing VLAN header length added and ran off the end of the mbuf chain
> (there was simply not enough data in the packet to satisfy the criteria).
>
> Fixes: 18a1c20044c0 ("net/mlx5: implement Tx burst template")
> Fixes: ec837ad0fc7c ("net/mlx5: fix multi-segment inline for the first segments")
> Cc: sta...@dpdk.org
>
> Signed-off-by: Viacheslav Ovsiienko <viachesl...@nvidia.com>
> Acked-by: Suanming Mou <suanmi...@nvidia.com>
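For readers following along, below is a minimal, self-contained C sketch of the
failure mode described in the commit message. It is not the mlx5 driver code;
the struct and function names (seg, segs_needed, vlan_insert) are hypothetical
and simplified. The point it illustrates is that when VLAN insertion is done in
software, the 4-byte tag counts toward the inlined data but does not come from
the mbuf chain, so it must be credited when scanning for the inline target;
without that credit the scan asks the chain for more bytes than it holds.

/*
 * Simplified illustration of the segment scan described above.
 * NOT the mlx5 driver code; names and layout are hypothetical.
 */
#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

#define VLAN_HLEN 4 /* bytes added by software VLAN insertion */

struct seg {            /* stand-in for a struct rte_mbuf segment */
	uint16_t data_len;
	struct seg *next;
};

/*
 * Count how many chain segments are needed to supply 'inline_target'
 * bytes of inline data. With software VLAN insertion, the 4-byte tag
 * is inlined but does not come from the chain, so it is credited up
 * front. Returns -1 if the chain runs out -- the condition that
 * previously led to scanning past the end of the chain.
 */
static int
segs_needed(const struct seg *s, unsigned int inline_target, int vlan_insert)
{
	unsigned int have = vlan_insert ? VLAN_HLEN : 0;
	int nseg = 0;

	while (have < inline_target) {
		if (s == NULL)
			return -1; /* ran past the end of the chain */
		have += s->data_len;
		s = s->next;
		nseg++;
	}
	return nseg;
}

int
main(void)
{
	/* The example from the commit message: 156 + 132 = 288 data bytes. */
	struct seg m1 = { .data_len = 132, .next = NULL };
	struct seg m0 = { .data_len = 156, .next = &m1 };
	unsigned int inlen_send = 290;

	/* Without crediting the VLAN header: 288 < 290, chain runs out. */
	printf("no VLAN credit: %d\n", segs_needed(&m0, inlen_send, 0));
	/* With the 4-byte credit: 4 + 156 + 132 = 292 >= 290, 2 segments. */
	printf("VLAN credited:  %d\n", segs_needed(&m0, inlen_send, 1));
	return 0;
}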
Patch applied to next-net-mlx,

Kindest regards,
Raslan Darawsheh