This isn't documented in the manuals, but a failed mmap(..., MAP_FIXED) may still unmap the overlapping regions. In that case, we need to remap those regions back into our address space to preserve memory contiguity. To be safe, do the remapping unconditionally on any mmap() failure.
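
For illustration, a minimal sketch (not the DPDK code; function and variable
names here are made up) of the pattern this patch relies on: if a MAP_FIXED
mmap() of a file fails, put an anonymous placeholder mapping back at the same
address so the reserved VA range stays owned by the process:

    #include <sys/mman.h>
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>

    static int map_file_fixed(void *addr, size_t len, int fd)
    {
            void *va = mmap(addr, len, PROT_READ | PROT_WRITE,
                            MAP_SHARED | MAP_POPULATE | MAP_FIXED, fd, 0);
            if (va == MAP_FAILED) {
                    /* mmap() failed, but the kernel may already have removed
                     * whatever was mapped at [addr, addr + len), so remap an
                     * anonymous region there to keep the range reserved. */
                    void *back = mmap(addr, len, PROT_READ,
                                    MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED,
                                    -1, 0);
                    if (back == MAP_FAILED)
                            fprintf(stderr, "remap failed: %s\n",
                                    strerror(errno));
                    return -1;
            }
            return 0;
    }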
Verified on Linux 4.9.0-4-amd64. I was getting ENOMEM when trying to
map hugetlbfs with no space left, and the previous anonymous mapping
was still being removed.

Changes from v2:
* added "git fixline" tags

Changes from v1:
* checkpatch fixes
* remapping is now done regardless of the mmap errno

Fixes: 582bed1e1d1d ("mem: support mapping hugepages at runtime")
Cc: anatoly.bura...@intel.com
Cc: sta...@dpdk.org

Signed-off-by: Dariusz Stojaczyk <dariuszx.stojac...@intel.com>
---
 lib/librte_eal/linuxapp/eal/eal_memalloc.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/lib/librte_eal/linuxapp/eal/eal_memalloc.c b/lib/librte_eal/linuxapp/eal/eal_memalloc.c
index 6be6680..81c94d5 100644
--- a/lib/librte_eal/linuxapp/eal/eal_memalloc.c
+++ b/lib/librte_eal/linuxapp/eal/eal_memalloc.c
@@ -527,7 +527,10 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id,
 	if (va == MAP_FAILED) {
 		RTE_LOG(DEBUG, EAL, "%s(): mmap() failed: %s\n", __func__,
 			strerror(errno));
-		goto resized;
+		/* mmap failed, but the previous region might have been
+		 * unmapped anyway. try to remap it
+		 */
+		goto unmapped;
 	}
 	if (va != addr) {
 		RTE_LOG(DEBUG, EAL, "%s(): wrong mmap() address\n", __func__);
@@ -588,6 +591,7 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id,
 mapped:
 	munmap(addr, alloc_sz);
+unmapped:
 	flags = MAP_FIXED;
 #ifdef RTE_ARCH_PPC_64
 	flags |= MAP_HUGETLB;
-- 
2.7.4