> From: Vadim Suraev [mailto:vadim.suraev at gmail.com]
> Sent: Friday, February 27, 2015 12:19 PM
> To: Ananyev, Konstantin
> Cc: dev at dpdk.org
> Subject: RE: [dpdk-dev] [PATCH] rte_mbuf: scattered pktmbufs freeing 
> optimization
> 
> Hi, Konstantin,
> >Seems really useful.
> > >One thought - why introduce the limitation that all mbufs have to be from
> > >the same mempool?
> > >I think you can reorder it a bit, so it can handle the situation when chained
> > >mbufs belong to different mempools.
> I had a doubt; my concern was how practical that (multiple mempools) case really is.

Well, inside DPDK we have at least 2 samples: ip_fragmentation and 
ipv4_multicast that chain together mbufs from different pools.  
How often that occurs in 'real world' apps - I am not sure.
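
For illustration, a minimal sketch (not the actual sample code) of how such a
cross-pool chain can arise: a header mbuf is taken from one pool and chained to
an indirect mbuf allocated from another, much like the fragmentation path does.
The pool parameters and the helper name below are hypothetical.

#include <rte_mbuf.h>

/* Hypothetical helper: build a 2-segment chain whose segments come from
 * two different mempools (hdr_pool and indirect_pool are illustrative). */
static struct rte_mbuf *
make_cross_pool_chain(struct rte_mempool *hdr_pool,
                      struct rte_mempool *indirect_pool,
                      struct rte_mbuf *payload)
{
	struct rte_mbuf *hdr = rte_pktmbuf_alloc(hdr_pool);      /* pool A */
	struct rte_mbuf *ind = rte_pktmbuf_alloc(indirect_pool); /* pool B */

	if (hdr == NULL || ind == NULL) {
		if (hdr != NULL)
			rte_pktmbuf_free(hdr);
		if (ind != NULL)
			rte_pktmbuf_free(ind);
		return NULL;
	}

	/* ind now references payload's data (payload's refcnt is bumped) */
	rte_pktmbuf_attach(ind, payload);

	/* chain: hdr (pool A) -> ind (pool B); the caller would then write
	 * the new header into hdr */
	hdr->next = ind;
	hdr->nb_segs = 2;
	hdr->pkt_len = hdr->data_len + ind->data_len;
	return hdr;
}

Freeing such a chain with a single rte_mempool_put_bulk() on hdr->pool would
return the pool-B segment to the wrong pool, which is why the generic version
has to track the pool per batch.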

> Do you think there should be two versions: lightweight (with the restriction) 
> and generic?

I'd suggest measuring what the performance difference between these 2
versions actually is.
If the difference is noticeable, then probably it is better to have 2 versions.
If it is negligible, then I suppose the generic one alone is good enough.
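
If it helps, a minimal sketch of how each variant could be timed (ITERS and
build_test_chain() are hypothetical; the chain is built outside the timed
region so only the free path from this patch is measured):

#include <stdio.h>
#include <rte_cycles.h>
#include <rte_mbuf.h>

#define ITERS 100000 /* illustrative iteration count */

/* Hypothetical helper, defined elsewhere: allocates and chains a few
 * segments from mp. */
struct rte_mbuf *build_test_chain(struct rte_mempool *mp);

/* The same loop would be repeated with the restricted (single-pool)
 * version to obtain the comparison. */
static void
bench_free(struct rte_mempool *mp)
{
	uint64_t start, cycles = 0;
	unsigned i;

	for (i = 0; i != ITERS; i++) {
		struct rte_mbuf *chain = build_test_chain(mp);

		start = rte_rdtsc();
		rte_pktmbuf_free_bulk(chain); /* the function under test */
		cycles += rte_rdtsc() - start;
	}
	printf("avg cycles per chain free: %.1f\n", (double)cycles / ITERS);
}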

Konstantin

> 
> >Actually, another useful function to have would probably be:
> >rte_pktmbuf_free_seg_bulk(struct rte_mbuf *m[], uint32_t num);
> Yes, this could be a sub-routine of rte_pktmbuf_free_chain()
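
For the sake of discussion, a minimal sketch of what such a helper might look
like (assuming it sits in rte_mbuf.h next to the existing helpers; the batch
size and the per-pool-run flushing are illustrative, not the submitted patch):

/* Illustrative only: free an array of individual segments, batching
 * consecutive same-pool segments into one rte_mempool_put_bulk() call. */
static inline void
rte_pktmbuf_free_seg_bulk(struct rte_mbuf *m[], uint32_t num)
{
	void *batch[32];                 /* illustrative batch size */
	struct rte_mempool *mp = NULL;
	uint32_t i, n = 0;

	for (i = 0; i != num; i++) {
		struct rte_mbuf *seg = m[i];

		if (__rte_pktmbuf_prefree_seg(seg) == NULL)
			continue;        /* segment is still referenced */

		seg->next = NULL;
		/* flush when the pool changes or the batch is full */
		if (n != 0 && (seg->pool != mp || n == RTE_DIM(batch))) {
			rte_mempool_put_bulk(mp, batch, n);
			n = 0;
		}
		mp = seg->pool;
		batch[n++] = seg;
	}
	if (n != 0)
		rte_mempool_put_bulk(mp, batch, n);
}
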
> Regards,
> Vadim.
> 
> On Feb 27, 2015 3:18 PM, "Ananyev, Konstantin" <konstantin.ananyev at 
> intel.com> wrote:
> Hi Vadim,
> 
> > -----Original Message-----
> > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of vadim.suraev at 
> > gmail.com
> > Sent: Thursday, February 26, 2015 11:15 PM
> > To: dev at dpdk.org
> > Subject: [dpdk-dev] [PATCH] rte_mbuf: scattered pktmbufs freeing 
> > optimization
> >
> > From: "vadim.suraev at gmail.com" <vadim.suraev at gmail.com>
> >
> > The new function rte_pktmbuf_free_bulk() makes freeing long
> > scattered (chained) pktmbufs belonging to the same pool
> > more efficient by using rte_mempool_put_bulk rather than calling
> > rte_mempool_put for each segment.
> > Unlike rte_pktmbuf_free, which calls rte_pktmbuf_free_seg,
> > this function calls __rte_pktmbuf_prefree_seg. If a non-NULL
> > pointer is returned, it is placed in an array. When the array is
> > filled or the last segment is processed, rte_mempool_put_bulk
> > is called. In the case of multiple producers it performs 3 times better.
> >
> >
> > Signed-off-by: vadim.suraev at gmail.com <vadim.suraev at gmail.com>
> > ---
> >  lib/librte_mbuf/rte_mbuf.h |   55 ++++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 55 insertions(+)
> >
> > diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> > index 17ba791..1d6f848 100644
> > --- a/lib/librte_mbuf/rte_mbuf.h
> > +++ b/lib/librte_mbuf/rte_mbuf.h
> > @@ -824,6 +824,61 @@ static inline void rte_pktmbuf_free(struct rte_mbuf *m)
> >  	}
> >  }
> >
> > +/* This macro defines the maximum bulk size of mbufs to free in rte_pktmbuf_free_bulk */
> > +#define MAX_MBUF_FREE_SIZE 32
> > +
> > +/* If RTE_LIBRTE_MBUF_DEBUG is enabled, checks that all mbufs belong to the same mempool */
> > +#ifdef RTE_LIBRTE_MBUF_DEBUG
> > +
> > +#define RTE_MBUF_MEMPOOL_CHECK1(m) struct rte_mempool *first_buffers_mempool = (m) ? (m)->pool : NULL
> > +
> > +#define RTE_MBUF_MEMPOOL_CHECK2(m) RTE_MBUF_ASSERT(first_buffers_mempool == (m)->pool)
> > +
> > +#else
> > +
> > +#define RTE_MBUF_MEMPOOL_CHECK1(m)
> > +
> > +#define RTE_MBUF_MEMPOOL_CHECK2(m)
> > +
> > +#endif
> > +
> > +/**
> > + * Free a chained (scattered) mbuf back into its original mempool.
> > + *
> > + * All the mbufs in the chain must belong to the same mempool.
> 
> Seems really useful.
> One thought - why introduce the limitation that all mbufs have to be from
> the same mempool?
> I think you can reorder it a bit, so it can handle the situation when chained
> mbufs belong to different mempools.
> Something like:
> ...
> mbufs[mbufs_count] = head;
> if (unlikely(head->pool != mbufs[0]->pool || mbufs_count == RTE_DIM(mbufs) - 1)) {
>     rte_mempool_put_bulk(mbufs[0]->pool, mbufs, mbufs_count);
>     mbufs[0] = mbufs[mbufs_count];
>     mbufs_count = 0;
> }
> mbufs_count++;
> ...
> 
> Another nit: probably better to name it rte_pktmbuf_free_chain() or something?
> For me _bulk implies that we have an array of mbufs that we need to free.
> Actually, another useful function to have would probably be:
> rte_pktmbuf_free_seg_bulk(struct rte_mbuf *m[], uint32_t num);
> 
> Konstantin
> 
> > + *
> > + * @param head
> > + *   The head of the mbuf chain to be freed
> > + */
> > +
> > +static inline void __attribute__((always_inline))
> > +rte_pktmbuf_free_bulk(struct rte_mbuf *head)
> > +{
> > +	void *mbufs[MAX_MBUF_FREE_SIZE];
> > +	unsigned mbufs_count = 0;
> > +	struct rte_mbuf *next;
> > +
> > +	RTE_MBUF_MEMPOOL_CHECK1(head);
> > +
> > +	while (head) {
> > +		next = head->next;
> > +		head->next = NULL;
> > +		if (__rte_pktmbuf_prefree_seg(head)) {
> > +			RTE_MBUF_ASSERT(rte_mbuf_refcnt_read(head) == 0);
> > +			RTE_MBUF_MEMPOOL_CHECK2(head);
> > +			mbufs[mbufs_count++] = head;
> > +		}
> > +		head = next;
> > +		if (mbufs_count == MAX_MBUF_FREE_SIZE) {
> > +			rte_mempool_put_bulk(((struct rte_mbuf *)mbufs[0])->pool, mbufs, mbufs_count);
> > +			mbufs_count = 0;
> > +		}
> > +	}
> > +	if (mbufs_count > 0) {
> > +		rte_mempool_put_bulk(((struct rte_mbuf *)mbufs[0])->pool, mbufs, mbufs_count);
> > +	}
> > +}
> > +
> >  /**
> >   * Creates a "clone" of the given packet mbuf.
> >   *
> > --
> > 1.7.9.5
