Hi Olivier,

<snip>
> > +Stack Mempool Driver
> > +====================
> > +
> > +**rte_mempool_stack** is a pure software mempool driver based on the
> > +``rte_stack`` DPDK library. A stack-based mempool is often better suited to
> > +packet-processing workloads than a ring-based mempool, since its LIFO behavior
> > +results in better temporal locality and a minimal memory footprint even if the
> > +mempool is over-provisioned.
>
> Would it make sense to give an example of a use-case where the stack
> driver should be used in place of the standard ring-based one?
>
> In most run-to-completion applications, the mbufs stay in per-core
> caches, so changing the mempool driver won't have a big impact. However,
> I suspect that for applications using a pipeline model (ex: rx on core0,
> tx on core1), the stack model would be more efficient. Is it something
> that you measured? If yes, it would be useful to explain this in the
> documentation.
>

Good point, I was overlooking the impact of the per-core caches. I've seen
data showing better overall packet throughput with the stack mempool, and
indeed that was a pipelined application.

How about this re-write?

"
**rte_mempool_stack** is a pure software mempool driver based on the
``rte_stack`` DPDK library. For run-to-completion workloads with sufficiently
large per-lcore caches, the mbufs will likely stay in the per-lcore caches
and the mempool type (ring, stack, etc.) will have a negligible impact on
performance. However, a stack-based mempool is often better suited to
pipelined packet-processing workloads (which allocate and free mbufs on
different lcores) than a ring-based mempool, since its LIFO behavior results
in better temporal locality and a minimal memory footprint even if the
mempool is over-provisioned.

Users are encouraged to benchmark with multiple mempool types to determine
which works best for their specific application.
"

Thanks,
Gage
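P.S. In case it helps while reviewing the doc wording above, here is a minimal
sketch (not part of this patch) of how an application could opt into the stack
handler when creating a pktmbuf pool. The function name, pool name, and sizing
values below are placeholders; it assumes EAL has already been initialized and
that the stack driver's ops are registered under the name "stack".

    #include <rte_lcore.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* Sketch only: create a pktmbuf pool backed by the "stack" mempool
     * handler instead of the default ring-based handler.
     */
    static struct rte_mempool *
    create_stack_pktmbuf_pool(void)
    {
            return rte_pktmbuf_pool_create_by_ops(
                    "mbuf_pool_stack",          /* placeholder pool name */
                    8192,                       /* number of mbufs */
                    256,                        /* per-lcore cache size */
                    0,                          /* app private area size */
                    RTE_MBUF_DEFAULT_BUF_SIZE,  /* data room per mbuf */
                    rte_socket_id(),
                    "stack");                   /* mempool ops name */
    }

Pools built with rte_mempool_create_empty() could instead select the handler
via rte_mempool_set_ops_byname() before the pool is populated.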