Currently, populate_virt() checks whether the mempool is already populated. This makes it impossible to reserve multi-chunk mempools when physically contiguous memory is not a hard requirement: if allocating all-contiguous memory fails, the mempool retries with virtual addresses and calls populate_virt(), which then refuses to add any further chunks. It seems the original code never anticipated more than one non-physically contiguous area.
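
To illustrate the failure mode, here is a minimal sketch of a fallback path that populates one virtual area per chunk. This is not the actual DPDK retry code: the helper names sketch_populate_chunks() and sketch_noop_free(), and the vaddrs/lens arrays, are made up for illustration; only rte_mempool_populate_virt() is the real API.

#include <rte_mempool.h>

/* no-op free callback, only to satisfy the populate API in this sketch */
static void
sketch_noop_free(struct rte_mempool_memhdr *memhdr __rte_unused,
        void *opaque __rte_unused)
{
}

/* hypothetical fallback: populate the pool from several virtual areas */
static int
sketch_populate_chunks(struct rte_mempool *mp, char **vaddrs, size_t *lens,
        unsigned int n_chunks, size_t pg_sz)
{
        unsigned int i;
        int ret;

        for (i = 0; i < n_chunks; i++) {
                ret = rte_mempool_populate_virt(mp, vaddrs[i], lens[i],
                                pg_sz, sketch_noop_free, NULL);
                /* before this fix, every call after the first one failed
                 * with -EEXIST because mp->nb_mem_chunks was already
                 * non-zero */
                if (ret < 0)
                        return ret;
        }
        return 0;
}
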
Fix it by removing the check in populate_virt(). The populate_anon()
function also calls populate_virt(), and it can reasonably be inferred
that it expects the virtual area not to be populated yet. Even though a
similar check is already in place there, also add the check that used to
be in populate_virt(), just in case.

Fixes: aab4f62d6c1c ("mempool: support no hugepage mode")
Cc: olivier.m...@6wind.com

Signed-off-by: Anatoly Burakov <anatoly.bura...@intel.com>
---
 lib/librte_mempool/rte_mempool.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 9f1a425..8c8b9f8 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -492,9 +492,6 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 	size_t off, phys_len;
 	int ret, cnt = 0;
 
-	/* mempool must not be populated */
-	if (mp->nb_mem_chunks != 0)
-		return -EEXIST;
 	/* address and len must be page-aligned */
 	if (RTE_PTR_ALIGN_CEIL(addr, pg_sz) != addr)
 		return -EINVAL;
@@ -771,7 +768,7 @@ rte_mempool_populate_anon(struct rte_mempool *mp)
 	char *addr;
 
 	/* mempool is already populated, error */
-	if (!STAILQ_EMPTY(&mp->mem_list)) {
+	if ((!STAILQ_EMPTY(&mp->mem_list)) || mp->nb_mem_chunks != 0) {
 		rte_errno = EINVAL;
 		return 0;
 	}
-- 
2.7.4