Previously, if mmap() failed to map a page at the requested address, we were attempting to unmap the wrong address. Fix it by unmapping the address we actually mapped, and jump further down the error path to avoid unmapping memory that was never allocated.
Coverity issue: 272602
Fixes: 582bed1e1d1d ("mem: support mapping hugepages at runtime")
Cc: anatoly.bura...@intel.com

Signed-off-by: Anatoly Burakov <anatoly.bura...@intel.com>
Acked-by: Bruce Richardson <bruce.richard...@intel.com>
---
 lib/librte_eal/linuxapp/eal/eal_memalloc.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/lib/librte_eal/linuxapp/eal/eal_memalloc.c b/lib/librte_eal/linuxapp/eal/eal_memalloc.c
index 604ce6d..a40cfd3 100644
--- a/lib/librte_eal/linuxapp/eal/eal_memalloc.c
+++ b/lib/librte_eal/linuxapp/eal/eal_memalloc.c
@@ -466,7 +466,8 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id,
 	}
 	if (va != addr) {
 		RTE_LOG(DEBUG, EAL, "%s(): wrong mmap() address\n", __func__);
-		goto mapped;
+		munmap(va, alloc_sz);
+		goto resized;
 	}
 
 	rte_iova_t iova = rte_mem_virt2iova(addr);
-- 
2.7.4
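
For reference, below is a minimal standalone sketch (not the DPDK code itself; the map_at_hint()/main() names, sizes, and hint address are made up for illustration) of the pattern the fix enforces: when mmap() returns a mapping at a different address than the hint we passed, the cleanup must munmap() the address mmap() actually returned ("va"), not the address we asked for ("addr").

/* Sketch only; assumes a Linux/POSIX environment. */
#include <stdio.h>
#include <sys/mman.h>

static void *map_at_hint(void *addr, size_t len)
{
	/* No MAP_FIXED: the kernel is free to ignore the hint address. */
	void *va = mmap(addr, len, PROT_READ | PROT_WRITE,
			MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (va == MAP_FAILED)
		return NULL;

	if (va != addr) {
		/* Unmap what we actually got, not the hint we requested. */
		munmap(va, len);
		return NULL;
	}
	return va;
}

int main(void)
{
	size_t len = 1 << 20;                   /* hypothetical 1 MiB mapping */
	void *hint = (void *)0x700000000000UL;  /* arbitrary hint address */
	void *va = map_at_hint(hint, len);

	printf("mapping at hint %s\n", va != NULL ? "succeeded" : "failed");
	if (va != NULL)
		munmap(va, len);
	return 0;
}

In the patch itself the same idea is expressed by calling munmap(va, alloc_sz) and then jumping to the later "resized" label, so the shared error path no longer tries to unmap memory that was never mapped at the requested address.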