On Tue, 24 Jun 2014 23:32:15 +0100
Bruce Richardson <bruce.richardson at intel.com> wrote:

>  
> +static void
> +free_unsent_pkts(struct rte_mbuf **pkts, uint16_t unsent,
> +             void *userdata __rte_unused)
> +{
> +     unsigned i;
> +     for (i = 0; i < unsent; i++)
> +             rte_pktmbuf_free(pkts[i]);
> +}
> +

This should be moved into the mbuf layer, where it could be
optimized to do a rte_mempool_mp_put_bulk. That would speed
up operations because it would mean a single ring operation
per set of packets rather than one per mbuf segment.

Of course, the optimization would have to handle the refcnt
issues.
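For illustration, something along these lines, as a rough sketch only:
the function name is hypothetical (no such helper exists in the mbuf
layer today), and it assumes the fast-path mbufs are direct,
single-segment, and come from the same mempool; anything else falls
back to the per-mbuf free.

```c
/* Hypothetical bulk-free helper for the mbuf layer -- a sketch, not
 * an existing DPDK API. Collects mbufs whose refcnt drops to zero and
 * returns them to the mempool in one multi-producer ring operation. */
static inline void
rte_pktmbuf_free_bulk_sketch(struct rte_mbuf **pkts, unsigned n)
{
	void *to_free[n];
	unsigned i, cnt = 0;

	for (i = 0; i < n; i++) {
		struct rte_mbuf *m = pkts[i];

		if (m == NULL)
			continue;
		/* Fast path: single-segment mbuf whose refcnt hits zero.
		 * Assumes all fast-path mbufs share pkts[0]->pool. */
		if (m->pkt.next == NULL &&
		    rte_mbuf_refcnt_update(m, -1) == 0)
			to_free[cnt++] = m;
		else
			/* Slow path: chained or still-referenced mbuf. */
			rte_pktmbuf_free(m);
	}
	if (cnt > 0)
		/* One ring enqueue for the whole batch. */
		rte_mempool_mp_put_bulk(pkts[0]->pool, to_free, cnt);
}
```

A real version would also need to handle indirect mbufs and mbufs from
different mempools, which is where the refcnt handling gets involved.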
