Hi,

I'm a bit confused about certain semantics of `cache_size` in memory
pools. I'm working on a DPDK application where each Rx queue gets its
own mbuf mempool. The memory pools are never shared between lcores,
mbufs are never passed between lcores, and so an mbuf is always freed on
the same lcore where it was allocated (it is a run-to-completion
application). Is my understanding correct that this completely eliminates
any lock contention, and that `cache_size` can therefore safely be set
to 0?
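
For context, each per-queue pool is created roughly like this (the pool
name, sizing, and the helper function are just for illustration):

```c
#include <stdio.h>
#include <rte_mbuf.h>

/* One mempool per Rx queue, used only by the lcore that polls that queue.
 * cache_size is set to 0 on the assumption that no per-lcore cache is
 * needed when the pool is never touched by any other lcore. */
static struct rte_mempool *
create_rx_pool(uint16_t port_id, uint16_t queue_id, int socket_id)
{
    char name[RTE_MEMPOOL_NAMESIZE];

    snprintf(name, sizeof(name), "rx_pool_p%u_q%u", port_id, queue_id);

    return rte_pktmbuf_pool_create(name,
                                   8191, /* number of mbufs */
                                   0,    /* cache_size */
                                   0,    /* priv_size */
                                   RTE_MBUF_DEFAULT_BUF_SIZE,
                                   socket_id);
}
```
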
Also, `rte_pktmbuf_pool_create()` creates the underlying mempool with the
default `flags`, with no way to override them. Would there be a
performance benefit in creating the mempools manually, with the
`RTE_MEMPOOL_F_SP_PUT` and `RTE_MEMPOOL_F_SC_GET` flags set?
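
Concretely, instead of `rte_pktmbuf_pool_create()` I mean something along
these lines (the sizes and the helper name are made up; the
`RTE_MEMPOOL_F_*` flag names assume a recent DPDK release, older releases
spell them `MEMPOOL_F_*`):

```c
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Manually built pktmbuf pool with single-producer / single-consumer
 * flags, on the assumption that only one lcore ever puts to and gets
 * from this pool. */
static struct rte_mempool *
create_sp_sc_pool(const char *name, unsigned int n, int socket_id)
{
    struct rte_pktmbuf_pool_private priv = {
        .mbuf_data_room_size = RTE_MBUF_DEFAULT_BUF_SIZE,
        .mbuf_priv_size = 0,
    };
    unsigned int elt_size = sizeof(struct rte_mbuf) +
        priv.mbuf_priv_size + priv.mbuf_data_room_size;

    return rte_mempool_create(name, n, elt_size,
                              0,            /* cache_size */
                              sizeof(priv), /* private_data_size */
                              rte_pktmbuf_pool_init, &priv,
                              rte_pktmbuf_init, NULL,
                              socket_id,
                              RTE_MEMPOOL_F_SP_PUT | RTE_MEMPOOL_F_SC_GET);
}
```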

Thanks!

Sincerely,
Igor.
