Looks good to me, and we've verified there is no performance impact. Thank you.
Acked-by: Viacheslav Ovsiienko <viachesl...@nvidia.com>

> -----Original Message-----
> From: dev <dev-boun...@dpdk.org> On Behalf Of Phil Yang
> Sent: Tuesday, September 29, 2020 18:23
> To: Raslan Darawsheh <rasl...@nvidia.com>; Matan Azrad <ma...@nvidia.com>;
> Shahaf Shuler <shah...@nvidia.com>
> Cc: nd <n...@arm.com>; Alexander Kozyrev <akozy...@nvidia.com>;
> Honnappa Nagarahalli <honnappa.nagaraha...@arm.com>; dev@dpdk.org;
> nd <n...@arm.com>
> Subject: Re: [dpdk-dev] [PATCH v4] net/mlx5: relaxed ordering for multi-packet
> RQ buffer refcnt
>
> Hi Raslan,
>
> It seems that there are no more comments for this patch.
> So shall we proceed further?
>
> Thanks,
> Phil Yang
>
> > -----Original Message-----
> > From: Alexander Kozyrev <akozy...@nvidia.com>
> > Sent: Thursday, September 10, 2020 9:37 AM
> > To: Honnappa Nagarahalli <honnappa.nagaraha...@arm.com>; Phil Yang
> > <phil.y...@arm.com>; akozy...@mellanox.com; rasl...@mellanox.com;
> > dev@dpdk.org
> > Cc: Phil Yang <phil.y...@arm.com>; ma...@mellanox.com; Shahaf Shuler
> > <shah...@mellanox.com>; viachesl...@mellanox.com; nd <n...@arm.com>;
> > nd <n...@arm.com>
> > Subject: RE: [PATCH v4] net/mlx5: relaxed ordering for multi-packet RQ
> > buffer refcnt
> >
> > <snip>
> > > >
> > > > Use c11 atomics with RELAXED ordering instead of the rte_atomic
> > > > ops which enforce unnecessary barriers on aarch64.
> > > >
> > > > Signed-off-by: Phil Yang <phil.y...@arm.com>
> > > Looks good.
> > >
> > > Reviewed-by: Honnappa Nagarahalli <honnappa.nagaraha...@arm.com>
> >
> > Acked-by: Alexander Kozyrev <akozy...@nvidia.com>
> >
> > > > ---
> > > > v4:
> > > > Remove the unnecessary ACQUIRE barrier in rx burst path. (Honnappa)
> > > >
> > > > v3:
> > > > Split from the patchset:
> > > > http://patchwork.dpdk.org/cover/68159/
> > > >
> > > >  drivers/net/mlx5/mlx5_rxq.c  |  2 +-
> > > >  drivers/net/mlx5/mlx5_rxtx.c | 16 +++++++++-------
> > > >  drivers/net/mlx5/mlx5_rxtx.h |  2 +-
> > > >  3 files changed, 11 insertions(+), 9 deletions(-)
> > > >
> > > > diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
> > > > index 79eb8f8..40e0239 100644
> > > > --- a/drivers/net/mlx5/mlx5_rxq.c
> > > > +++ b/drivers/net/mlx5/mlx5_rxq.c
> > > > @@ -2012,7 +2012,7 @@ mlx5_mprq_buf_init(struct rte_mempool *mp, void *opaque_arg,
> > > >
> > > >  	memset(_m, 0, sizeof(*buf));
> > > >  	buf->mp = mp;
> > > > -	rte_atomic16_set(&buf->refcnt, 1);
> > > > +	__atomic_store_n(&buf->refcnt, 1, __ATOMIC_RELAXED);
> > > >  	for (j = 0; j != strd_n; ++j) {
> > > >  		shinfo = &buf->shinfos[j];
> > > >  		shinfo->free_cb = mlx5_mprq_buf_free_cb;
> > > > diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
> > > > index 1b71e94..549477b 100644
> > > > --- a/drivers/net/mlx5/mlx5_rxtx.c
> > > > +++ b/drivers/net/mlx5/mlx5_rxtx.c
> > > > @@ -1626,10 +1626,11 @@ mlx5_mprq_buf_free_cb(void *addr __rte_unused, void *opaque)
> > > >  {
> > > >  	struct mlx5_mprq_buf *buf = opaque;
> > > >
> > > > -	if (rte_atomic16_read(&buf->refcnt) == 1) {
> > > > +	if (__atomic_load_n(&buf->refcnt, __ATOMIC_RELAXED) == 1) {
> > > >  		rte_mempool_put(buf->mp, buf);
> > > > -	} else if (rte_atomic16_add_return(&buf->refcnt, -1) == 0) {
> > > > -		rte_atomic16_set(&buf->refcnt, 1);
> > > > +	} else if (unlikely(__atomic_sub_fetch(&buf->refcnt, 1,
> > > > +			    __ATOMIC_RELAXED) == 0)) {
> > > > +		__atomic_store_n(&buf->refcnt, 1, __ATOMIC_RELAXED);
> > > >  		rte_mempool_put(buf->mp, buf);
> > > >  	}
> > > >  }
> > > > @@ -1709,7 +1710,8 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
> > > >
> > > >  		if (consumed_strd == strd_n) {
> > > >  			/* Replace WQE only if the buffer is still in use. */
> > > > -			if (rte_atomic16_read(&buf->refcnt) > 1) {
> > > > +			if (__atomic_load_n(&buf->refcnt,
> > > > +					    __ATOMIC_RELAXED) > 1) {
> > > >  				mprq_buf_replace(rxq, rq_ci & wq_mask, strd_n);
> > > >  				/* Release the old buffer. */
> > > >  				mlx5_mprq_buf_free(buf);
> > > > @@ -1821,9 +1823,9 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
> > > >  			void *buf_addr;
> > > >
> > > >  			/* Increment the refcnt of the whole chunk. */
> > > > -			rte_atomic16_add_return(&buf->refcnt, 1);
> > > > -			MLX5_ASSERT((uint16_t)rte_atomic16_read(&buf->refcnt) <=
> > > > -				    strd_n + 1);
> > > > +			__atomic_add_fetch(&buf->refcnt, 1, __ATOMIC_RELAXED);
> > > > +			MLX5_ASSERT(__atomic_load_n(&buf->refcnt,
> > > > +				    __ATOMIC_RELAXED) <= strd_n + 1);
> > > >  			buf_addr = RTE_PTR_SUB(addr, RTE_PKTMBUF_HEADROOM);
> > > >  			/*
> > > >  			 * MLX5 device doesn't use iova but it is necessary in a
> > > > diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
> > > > index c02a007..467f31d 100644
> > > > --- a/drivers/net/mlx5/mlx5_rxtx.h
> > > > +++ b/drivers/net/mlx5/mlx5_rxtx.h
> > > > @@ -68,7 +68,7 @@ struct rxq_zip {
> > > >  /* Multi-Packet RQ buffer header. */
> > > >  struct mlx5_mprq_buf {
> > > >  	struct rte_mempool *mp;
> > > > -	rte_atomic16_t refcnt; /* Atomically accessed refcnt. */
> > > > +	uint16_t refcnt; /* Atomically accessed refcnt. */
> > > >  	uint8_t pad[RTE_PKTMBUF_HEADROOM]; /* Headroom for the first packet. */
> > > >  	struct rte_mbuf_ext_shared_info shinfos[];
> > > >  	/*
> > > > --
> > > > 2.7.4