On 7/25/22 22:32, David Marchand wrote:
A mempool consumes 3 memzones (with the default ring mempool driver). The default DPDK configuration allows RTE_MAX_MEMZONE (2560) memzones.

Assuming there are no other memzones, that means we can have a maximum of 853 mempools.

In the vhost library, the IOTLB cache code so far requested a mempool per vq, which means that, at the maximum, the vhost library could request mempools for 426 qps. This limit was recently reached on big systems with a lot of virtio ports (and multiqueue in use).

While the limit on mempool count could be something we fix at the DPDK project level, there is no reason to use mempools for the IOTLB cache:
- the IOTLB cache entries do not need to be DMA-able and are only used by the current process (in a multiprocess context),
- getting/putting objects from/in the mempool is always associated with some other locks, so some level of lock contention is already present.

We can convert to a malloc'd pool with objects put in a free list protected by a spinlock.

Signed-off-by: David Marchand <david.march...@redhat.com>
---
 lib/vhost/iotlb.c | 102 ++++++++++++++++++++++++++++------------------
 lib/vhost/iotlb.h |   1 +
 lib/vhost/vhost.c |   2 +-
 lib/vhost/vhost.h |   4 +-
 4 files changed, 67 insertions(+), 42 deletions(-)
Thanks for working on this; there is indeed no need to use a mempool for this.

Reviewed-by: Maxime Coquelin <maxime.coque...@redhat.com>

Maxime