> From: Slava Ovsiienko [mailto:viachesl...@nvidia.com]
> Sent: Friday, 24 January 2025 11.23
> 
> Acked-by: Viacheslav Ovsiienko <viachesl...@nvidia.com>
> 
> PS. It seems we should consider replacing the rte_mempool_put_bulk()
> with the new wrapper rte_mbuf_fast_free_bulk() in the drivers.

Agree. And the individual drivers should be carefully tested - with 
RTE_ENABLE_ASSERT and RTE_LIBRTE_MBUF_DEBUG enabled - by the developers 
updating their drivers.
I think it will be easier for the driver developers if we merge this patch 
stand-alone first.

> 
> > From: Morten Brørup <m...@smartsharesystems.com>
> > Sent: Tuesday, January 21, 2025 3:40 PM
> >
> > When putting an mbuf back into its mempool, there are certain
> > requirements on the mbuf. Specifically, some of its fields must be initialized.
> >
> > These requirements are in fact invariants about free mbufs, held in
> > mempools, and thus also apply when allocating an mbuf from a mempool.
> > With this in mind, the additional assertions in rte_mbuf_raw_free()
> > were moved to __rte_mbuf_raw_sanity_check().
> > Furthermore, the assertion regarding the pinned external buffer was
> > enhanced; it now also asserts that the referenced pinned external buffer
> > has refcnt == 1.
> >
> > The description of RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE was updated to
> > include the remaining requirements, which were missing here.
> >
> > And finally:
> > A new rte_mbuf_fast_free_bulk() inline function was added for the
> > benefit of ethdev drivers supporting fast release of mbufs.
> > It asserts these requirements and that the mbufs belong to the
> > specified mempool, and then calls rte_mempool_put_bulk().
> >
> > For symmetry, a new rte_mbuf_raw_alloc_bulk() inline function was
> > also added.
> >
> > Signed-off-by: Morten Brørup <m...@smartsharesystems.com>
> > Acked-by: Dengdui Huang <huangdeng...@huawei.com>
> > ---
> > v2:
> > * Fixed missing inline.
> > v3:
> > * Fixed missing experimental warning. (Stephen)
> > * Added raw alloc bulk function.
> > ---
> >  lib/ethdev/rte_ethdev.h |  6 ++--
> >  lib/mbuf/rte_mbuf.h     | 80 +++++++++++++++++++++++++++++++++++++++--
> >  2 files changed, 82 insertions(+), 4 deletions(-)
> >
> > diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> > index 1f71cad244..e9267fca79 100644
> > --- a/lib/ethdev/rte_ethdev.h
> > +++ b/lib/ethdev/rte_ethdev.h
> > @@ -1612,8 +1612,10 @@ struct rte_eth_conf {
> >  #define RTE_ETH_TX_OFFLOAD_MULTI_SEGS       RTE_BIT64(15)
> >  /**
> >   * Device supports optimization for fast release of mbufs.
> > - * When set application must guarantee that per-queue all mbufs comes from
> > - * the same mempool and has refcnt = 1.
> > + * When set application must guarantee that per-queue all mbufs come from the same mempool,
> > + * are direct, have refcnt=1, next=NULL and nb_segs=1, as done by rte_pktmbuf_prefree_seg().
> > + *
> > + * @see rte_mbuf_fast_free_bulk()
> >   */
> >  #define RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE   RTE_BIT64(16)
> >  #define RTE_ETH_TX_OFFLOAD_SECURITY         RTE_BIT64(17)
> > diff --git a/lib/mbuf/rte_mbuf.h b/lib/mbuf/rte_mbuf.h
> > index 0d2e0e64b3..1e40e7fcf7 100644
> > --- a/lib/mbuf/rte_mbuf.h
> > +++ b/lib/mbuf/rte_mbuf.h
> > @@ -568,6 +568,10 @@ __rte_mbuf_raw_sanity_check(__rte_unused const struct rte_mbuf *m)
> >     RTE_ASSERT(rte_mbuf_refcnt_read(m) == 1);
> >     RTE_ASSERT(m->next == NULL);
> >     RTE_ASSERT(m->nb_segs == 1);
> > +   RTE_ASSERT(!RTE_MBUF_CLONED(m));
> > +   RTE_ASSERT(!RTE_MBUF_HAS_EXTBUF(m) ||
> > +                   (RTE_MBUF_HAS_PINNED_EXTBUF(m) &&
> > +                   rte_mbuf_ext_refcnt_read(m->shinfo) == 1));
> >     __rte_mbuf_sanity_check(m, 0);
> >  }
> >
> > @@ -606,6 +610,43 @@ static inline struct rte_mbuf *rte_mbuf_raw_alloc(struct rte_mempool *mp)
> >     return ret.m;
> >  }
> >
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: This API may change, or be removed, without prior notice.
> > + *
> > + * Allocate a bulk of uninitialized mbufs from mempool *mp*.
> > + *
> > + * This function can be used by PMDs (especially in RX functions) to
> > + * allocate a bulk of uninitialized mbufs. The driver is responsible for
> > + * initializing all the required fields. See rte_pktmbuf_reset().
> > + * For standard needs, prefer rte_pktmbuf_alloc_bulk().
> > + *
> > + * The caller can expect that the following fields of the mbuf structure
> > + * are initialized: buf_addr, buf_iova, buf_len, refcnt=1, nb_segs=1,
> > + * next=NULL, pool, priv_size. The other fields must be initialized
> > + * by the caller.
> > + *
> > + * @param mp
> > + *   The mempool from which mbufs are allocated.
> > + * @param mbufs
> > + *   Array of pointers to mbufs.
> > + * @param count
> > + *   Array size.
> > + * @return
> > + *   - 0: Success.
> > + *   - -ENOENT: Not enough entries in the mempool; no mbufs are retrieved.
> > + */
> > +__rte_experimental
> > +static __rte_always_inline int
> > +rte_mbuf_raw_alloc_bulk(struct rte_mempool *mp, struct rte_mbuf **mbufs, unsigned int count)
> > +{
> > +   int rc = rte_mempool_get_bulk(mp, (void **)mbufs, count);
> > +   if (likely(rc == 0))
> > +           for (unsigned int idx = 0; idx < count; idx++)
> > +                   __rte_mbuf_raw_sanity_check(mbufs[idx]);
> > +   return rc;
> > +}
> > +
> >  /**
> >   * Put mbuf back into its original mempool.
> >   *
> > @@ -623,12 +664,47 @@ static inline struct rte_mbuf *rte_mbuf_raw_alloc(struct rte_mempool *mp)
> >  static __rte_always_inline void
> >  rte_mbuf_raw_free(struct rte_mbuf *m)
> >  {
> > -   RTE_ASSERT(!RTE_MBUF_CLONED(m) &&
> > -             (!RTE_MBUF_HAS_EXTBUF(m) || RTE_MBUF_HAS_PINNED_EXTBUF(m)));
> >     __rte_mbuf_raw_sanity_check(m);
> >     rte_mempool_put(m->pool, m);
> >  }
> >
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: This API may change, or be removed, without prior notice.
> > + *
> > + * Put a bulk of mbufs allocated from the same mempool back into the mempool.
> > + *
> > + * The caller must ensure that the mbufs come from the specified mempool,
> > + * are direct and properly reinitialized (refcnt=1, next=NULL, nb_segs=1),
> > + * as done by rte_pktmbuf_prefree_seg().
> > + *
> > + * This function should be used with care, when optimization is
> > + * required. For standard needs, prefer rte_pktmbuf_free_bulk().
> > + *
> > + * @see RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
> > + *
> > + * @param mp
> > + *   The mempool to which the mbufs belong.
> > + * @param mbufs
> > + *   Array of pointers to packet mbufs.
> > + *   The array must not contain NULL pointers.
> > + * @param count
> > + *   Array size.
> > + */
> > +__rte_experimental
> > +static __rte_always_inline void
> > +rte_mbuf_fast_free_bulk(struct rte_mempool *mp, struct rte_mbuf **mbufs, unsigned int count)
> > +{
> > +   for (unsigned int idx = 0; idx < count; idx++) {
> > +           const struct rte_mbuf *m = mbufs[idx];
> > +           RTE_ASSERT(m != NULL);
> > +           RTE_ASSERT(m->pool == mp);
> > +           __rte_mbuf_raw_sanity_check(m);
> > +   }
> > +
> > +   rte_mempool_put_bulk(mp, (void **)mbufs, count);
> > +}
> > +
> >  /**
> >   * The packet mbuf constructor.
> >   *
> > --
> > 2.43.0
