Hi,

>> Hi David,
>>
>> On 02/16/2016 03:48 PM, David Hunt wrote:
>>> adds a simple stack based mempool handler
>>>
>>> Signed-off-by: David Hunt <david.hunt at intel.com>
>>> ---
>>>  lib/librte_mempool/Makefile            |   2 +-
>>>  lib/librte_mempool/rte_mempool.c       |   4 +-
>>>  lib/librte_mempool/rte_mempool.h       |   1 +
>>>  lib/librte_mempool/rte_mempool_stack.c | 164 +++++++++++++++++++++++++++++++++
>>>  4 files changed, 169 insertions(+), 2 deletions(-)
>>>  create mode 100644 lib/librte_mempool/rte_mempool_stack.c
>>>
>>
>> I don't get the purpose of this handler. Is it an example, or is it
>> something that could be useful for DPDK applications?
>>
> This is actually something that is useful for pipelining apps,
> where the mempool cache doesn't really work: for example, where we
> have one core doing Rx (and alloc) and another core doing
> Tx (and return). In such a case, the mempool ring simply cycles
> through all the mbufs, resulting in an LLC miss on every mbuf
> allocated when the number of mbufs is large. A stack recycles
> buffers more effectively in this case.
>
While I agree on the principle, if this is the case the commit should
come with an explanation of when this handler should be used, a small
test report showing the performance numbers, and probably an example app.

Also, I think there is some room for optimization; in particular, I
don't think the spinlock will scale to many cores.

Regards,
Olivier