Hi, Jiawei

Thank you for the clarification. I missed the point that we have updated the elts
array with newly allocated mbufs and are not able to retry the packet building anymore.
Very good catch, thank you! Could you please add this extra explanation
to the commit message and send the v2?

With best regards, 
Slava

> -----Original Message-----
> From: Jiawei Zhu <17826875...@163.com>
> Sent: Friday, February 26, 2021 18:11
> To: Slava Ovsiienko <viachesl...@nvidia.com>; dev@dpdk.org
> Cc: zhujiawe...@huawei.com; Matan Azrad <ma...@nvidia.com>; Shahaf
> Shuler <shah...@nvidia.com>; sta...@dpdk.org
> Subject: Re: [PATCH] net/mlx5: fix wrong segmented packet in Rx
> 
> Hi, Slava
> 
> Thanks for reviewing my patch; my description of the issue may not have been
> clear. Here is a possible error scenario:
> - assume segs_n is 4 and we are receiving a 4-segment multi-segment packet.
> - mbuf allocation fails while receiving the 3rd segment, so we free the mbufs
> of the packet chain built so far, i.e. the 1st and 2nd segments.
> - in this stride of the Rx queue, the 1st and 2nd slots have been refilled
> with new mbufs whose data is random, while the 3rd and 4th slots still hold
> the old data. So if the next rx_burst() starts on this stride again, it will
> receive a wrong multi-segment packet.
> 
> - Therefore we should discard this packet and skip this stride: after exiting
> the loop, we should align the consumer index to the next stride.
> 
> What do you think?
> 
> With best regards
> Jiawei
> 
> On 2021/2/24 9:20 PM, Slava Ovsiienko wrote:
> > Hi, Jiawei
> >
> > Thank you for the patch, but it seems I need some clarifications.
> > As far as I understand the issue:
> >
> > - we are in the midst of receiving the multi-segment packet
> > - we have some mbufs allocated and packet chain is partially built
> > - we fail on allocation replenishing mbuf for the segment
> > - we free all the mbuf of the built chain
> > - exit from the rx_burst loop
> > - rq_ci is expected to be kept pointing to the beginning of the current
> >    stride - it is supposed on next rx_burst() invocation we'll continue
> >    Rx queue handling from the stride where we failed
> > - on loop exit we see the code:
> >     if (unlikely((i == 0) && ((rq_ci >> sges_n) == rxq->rq_ci)))
> >            return 0;
> >     /* Update the consumer index. */
> >     rxq->rq_ci = rq_ci >> sges_n;
> > hence, rq_ci is always shifted by sges_n, all increments that happened
> > during the failed packet processing are just discarded, and it seems no
> > fix is needed.
> >
> > Did I miss something?
> >
> > With best regards,
> > Slava
> >
> >> -----Original Message-----
> >> From: Jiawei Zhu <17826875...@163.com>
> >> Sent: Monday, February 15, 2021 12:15
> >> To: dev@dpdk.org
> >> Cc: zhujiawe...@huawei.com; Matan Azrad <ma...@nvidia.com>; Shahaf
> >> Shuler <shah...@nvidia.com>; Slava Ovsiienko
> >> <viachesl...@nvidia.com>; Jiawei Zhu <17826875...@163.com>;
> >> sta...@dpdk.org
> >> Subject: [PATCH] net/mlx5: fix wrong segmented packet in Rx
> >>
> >> The fixed issue could occur when mbuf starvation happens in the middle
> >> of the reception of a segmented packet.
> >> In such a situation, after releasing the segments of that packet, the
> >> driver does not align the consumer index to the next stride.
> >> This would cause a wrong segmented packet to be received.
> >>
> >> Fixes: 15a756b63734 ("net/mlx5: fix possible NULL dereference in Rx
> >> path")
> >> Cc: sta...@dpdk.org
> >>
> >> Signed-off-by: Jiawei Zhu <17826875...@163.com>
> >> ---
> >>   drivers/net/mlx5/mlx5_rxtx.c | 3 +++
> >>   1 file changed, 3 insertions(+)
> >>
> >> diff --git a/drivers/net/mlx5/mlx5_rxtx.c
> >> b/drivers/net/mlx5/mlx5_rxtx.c index 2e4b87c..e3ce9fd 100644
> >> --- a/drivers/net/mlx5/mlx5_rxtx.c
> >> +++ b/drivers/net/mlx5/mlx5_rxtx.c
> >> @@ -1480,6 +1480,9 @@ enum mlx5_txcmp_code {
> >>                            rte_mbuf_raw_free(pkt);
> >>                            pkt = rep;
> >>                    }
> >> +                  rq_ci >>= sges_n;
> >> +                  ++rq_ci;
> >> +                  rq_ci <<= sges_n;
> >>                    break;
> >>            }
> >>            if (!pkt) {
> >> --
> >> 1.8.3.1
> >>
> >