> On Mar 10, 2019, at 12:14 AM, Shahaf Shuler <shah...@mellanox.com> wrote:
> 
> Inlining a packet into a WQE that crosses the WQ wraparound, i.e. a WQE
> that starts at the end of the ring and ends at the beginning, is not
> supported and is blocked by the data path logic.
> 
> However, in case of TSO, an extra inline header is required before
> inlining. This inline header is not taken into account when checking
> whether there is enough room left for the required inline size.
> In some corner cases, where
> (ring_tailroom - inline header) < inline size < ring_tailroom,
> this can lead to the WQE being written outside of the ring buffer.
> 
> Fix this by always assuming the worst case, i.e. that inlining the
> packet will require the inline header.
> 
> Fixes: 3f13f8c23a7c ("net/mlx5: support hardware TSO")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Shahaf Shuler <shah...@mellanox.com>
> ---

Acked-by: Yongseok Koh <ys...@mellanox.com>
 
Thanks

> drivers/net/mlx5/mlx5_rxtx.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
> index baa4079c14..38ce0e29a2 100644
> --- a/drivers/net/mlx5/mlx5_rxtx.c
> +++ b/drivers/net/mlx5/mlx5_rxtx.c
> @@ -693,7 +693,8 @@ mlx5_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
>                                                  RTE_CACHE_LINE_SIZE);
>                       copy_b = (addr_end > addr) ?
>                                RTE_MIN((addr_end - addr), length) : 0;
> -                     if (copy_b && ((end - (uintptr_t)raw) > copy_b)) {
> +                     if (copy_b && ((end - (uintptr_t)raw) >
> +                                    (copy_b + sizeof(inl)))) {
>                               /*
>                                * One Dseg remains in the current WQE.  To
>                                * keep the computation positive, it is
> -- 
> 2.12.0
> 
