On Mon, Mar 30, 2015 at 10:04:20PM +0300, Vadim Suraev wrote:
> Hi, Neil
>
> >I think what you need to do here is enhance the underlying pktmbuf interface
> >such that an rte_mbuf structure has a destructor method associated with it
> >which is called when its refcnt reaches zero.  That way the
> >rte_pktmbuf_bulk_free function can just decrement the refcnt on each
> >mbuf structure, and the pool as a whole can be returned when the destructor
> >function discovers that all mbufs in that bulk pool are freed.
>
> I thought again, and it looks to me that if mempool_cache is enabled,
> rte_pktmbuf_bulk_free and rte_pktmbuf_free_chain are redundant, because the
> logic would be very similar to what is already implemented in rte_mempool.
> Probably only rte_pktmbuf_alloc_bulk makes sense in this patch?
>
> Regards,
> Vadim.
>
Looking at it, yes, I agree: using an externally allocated, large contiguous
block of memory, mapped with rte_mempool_xmem_create, then allocating with
rte_pktmbuf_alloc, would likely work in exactly the same way.  I'd argue that
even the bulk alloc function isn't really needed, as its implementation seems
like it would just be a for loop with 2-3 lines in it.
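Roughly the kind of loop meant here -- only a sketch built on the existing
per-mbuf API, with a made-up function name and an error-unwind path added,
not tested:

#include <errno.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Sketch only: bulk allocation expressed with the existing per-mbuf API.
 * The name is invented; on failure the mbufs already taken are returned. */
static inline int
pktmbuf_alloc_loop(struct rte_mempool *pool, struct rte_mbuf **mbufs,
		   unsigned count)
{
	unsigned i;

	for (i = 0; i < count; i++) {
		mbufs[i] = rte_pktmbuf_alloc(pool);
		if (mbufs[i] == NULL) {
			/* Give back what we already took and report failure. */
			while (i-- > 0)
				rte_pktmbuf_free(mbufs[i]);
			return -ENOMEM;
		}
	}
	return 0;
}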
Neil

> On Wed, Mar 18, 2015 at 10:58 PM, Neil Horman <nhorman at tuxdriver.com>
> wrote:
>
> > On Wed, Mar 18, 2015 at 10:21:18PM +0200, vadim.suraev at gmail.com wrote:
> > > From: "vadim.suraev at gmail.com" <vadim.suraev at gmail.com>
> > >
> > > This patch adds mbuf bulk allocation/freeing functions and a unittest
> > >
> > > Signed-off-by: Vadim Suraev <vadim.suraev at gmail.com>
> > > ---
> > > New in v2:
> > >     - function rte_pktmbuf_alloc_bulk added
> > >     - function rte_pktmbuf_bulk_free added
> > >     - function rte_pktmbuf_free_chain added
> > >     - applied reviewers' comments
> > >
> > >  app/test/test_mbuf.c       |   94 +++++++++++++++++++++++++++++++++++++++++++-
> > >  lib/librte_mbuf/rte_mbuf.h |   91 ++++++++++++++++++++++++++++++++++++++++++
> > >  2 files changed, 184 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
> > > index 1ff66cb..b20c6a4 100644
> > > --- a/app/test/test_mbuf.c
> > > +++ b/app/test/test_mbuf.c
> > > @@ -77,6 +77,7 @@
> > >  #define REFCNT_RING_SIZE        (REFCNT_MBUF_NUM * REFCNT_MAX_REF)
> > >
> > >  #define MAKE_STRING(x)          # x
> > > +#define MBUF_POOL_LOCAL_CACHE_SIZE 32
> > >
> > >  static struct rte_mempool *pktmbuf_pool = NULL;
> > >
> > > @@ -405,6 +406,84 @@ test_pktmbuf_pool(void)
> > >  	return ret;
> > >  }
> > >
> > ><snip>
> > > diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> > > index 17ba791..fabeae2 100644
> > > --- a/lib/librte_mbuf/rte_mbuf.h
> > > +++ b/lib/librte_mbuf/rte_mbuf.h
> > > @@ -825,6 +825,97 @@ static inline void rte_pktmbuf_free(struct rte_mbuf *m)
> > >  }
> > >
> > >  /**
> > > + * Allocate a bulk of mbufs, initialize their refcnt and reset their fields
> > > + *
> > > + * @param pool
> > > + *    memory pool to allocate from
> > > + * @param mbufs
> > > + *    Array of pointers to mbuf
> > > + * @param count
> > > + *    Array size
> > > + */
> > > +static inline int rte_pktmbuf_alloc_bulk(struct rte_mempool *pool,
> > > +					 struct rte_mbuf **mbufs,
> > > +					 unsigned count)
> > > +{
> > > +	unsigned idx;
> > > +	int rc = 0;
> > > +
> > > +	rc = rte_mempool_get_bulk(pool, (void **)mbufs, count);
> > > +	if (unlikely(rc))
> > > +		return rc;
> > > +
> > > +	for (idx = 0; idx < count; idx++) {
> > > +		RTE_MBUF_ASSERT(rte_mbuf_refcnt_read(mbufs[idx]) == 0);
> > > +		rte_mbuf_refcnt_set(mbufs[idx], 1);
> > > +		rte_pktmbuf_reset(mbufs[idx]);
> > > +	}
> > > +	return rc;
> > > +}
> > > +
> > > +/**
> > > + * Free a bulk of mbufs into their original mempool.
> > > + * This function assumes:
> > > + * - refcnt equals 1
> > > + * - mbufs are direct
> > > + * - all mbufs must belong to the same mempool
> > > + *
> > > + * @param mbufs
> > > + *    Array of pointers to mbuf
> > > + * @param count
> > > + *    Array size
> > > + */
> > > +static inline void rte_pktmbuf_bulk_free(struct rte_mbuf **mbufs,
> > > +					 unsigned count)
> > > +{
> > > +	unsigned idx;
> > > +
> > > +	RTE_MBUF_ASSERT(count > 0);
> > > +
> > > +	for (idx = 0; idx < count; idx++) {
> > > +		RTE_MBUF_ASSERT(mbufs[idx]->pool == mbufs[0]->pool);
> > > +		RTE_MBUF_ASSERT(rte_mbuf_refcnt_read(mbufs[idx]) == 1);
> > > +		rte_mbuf_refcnt_set(mbufs[idx], 0);
> > This is really a misuse of the API.  The entire point of reference counting
> > is to know when an mbuf has no more references and can be freed.  By forcing
> > all the reference counts to zero here, you allow the refcnt infrastructure
> > to be circumvented, causing memory leaks.
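To make that concern concrete, a sketch of the failure mode -- assuming a
release build where RTE_MBUF_ASSERT() compiles out, two pre-created pktmbuf
pools with invented names, no error checking, and that the elided remainder
of rte_pktmbuf_bulk_free() puts the mbufs back into mbufs[0]->pool:

#include <rte_mbuf.h>

/* direct_pool and indirect_pool are hypothetical, already-created pools. */
static void
show_refcnt_circumvention(struct rte_mempool *direct_pool,
			  struct rte_mempool *indirect_pool)
{
	struct rte_mbuf *md = rte_pktmbuf_alloc(direct_pool);   /* refcnt == 1 */
	struct rte_mbuf *mi = rte_pktmbuf_alloc(indirect_pool); /* header for the clone */

	rte_pktmbuf_attach(mi, md);    /* md's refcnt -> 2: mi now shares md's data */

	rte_pktmbuf_bulk_free(&md, 1); /* forces md's refcnt back to 0 and returns md
	                                * to its pool even though mi still references
	                                * its data; once the pool hands md out again,
	                                * mi is left pointing at reused memory */
}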
> >
> > I think what you need to do here is enhance the underlying pktmbuf interface
> > such that an rte_mbuf structure has a destructor method associated with it,
> > which is called when its refcnt reaches zero.  That way the
> > rte_pktmbuf_bulk_free function can just decrement the refcnt on each
> > mbuf structure, and the pool as a whole can be returned when the destructor
> > function discovers that all mbufs in that bulk pool are freed.
> >
> > Neil
> >
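For what it's worth, a purely hypothetical sketch of that destructor idea --
none of these fields or helpers exist in rte_mbuf today, the names are
invented, and the atomic/non-atomic refcnt split is ignored:

#include <rte_mbuf.h>

/* Hypothetical extension: an mbuf carrying a destructor that fires when the
 * last reference goes away.  A bulk allocator could point destructor_arg at
 * its own bookkeeping so the whole block can be reclaimed at once. */
struct pktmbuf_with_dtor {
	struct rte_mbuf m;
	void (*destructor)(struct rte_mbuf *m, void *arg);
	void *destructor_arg;
};

static inline void
pktmbuf_free_with_dtor(struct pktmbuf_with_dtor *em)
{
	/* Decrement instead of forcing the count to zero: only the caller who
	 * drops the last reference runs the destructor, which can track when
	 * every mbuf of the bulk block has been returned and release the block. */
	if (rte_mbuf_refcnt_update(&em->m, -1) == 0 && em->destructor != NULL)
		em->destructor(&em->m, em->destructor_arg);
}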