When --no-huge mode is used, the memory is currently allocated with mmap(NULL, ...). This is fine in most cases, but can fail when DPDK is run on a machine with an IOMMU whose address width is narrower than that of the VA space, because we are not specifying an address hint for the mmap() call, so the kernel may pick an address that the IOMMU cannot address.
Fix it by preallocating VA space before mapping it.

Cc: sta...@dpdk.org
Signed-off-by: Anatoly Burakov <anatoly.bura...@intel.com>
---

Notes:
    I couldn't figure out which specific commit introduced the issue, so
    there is no Fixes: tag. The most likely candidate is the one that
    introduced the DMA mask in the first place, but I'm not sure.

 lib/librte_eal/linux/eal/eal_memory.c | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/lib/librte_eal/linux/eal/eal_memory.c b/lib/librte_eal/linux/eal/eal_memory.c
index 43e4ffc757..672f8806dd 100644
--- a/lib/librte_eal/linux/eal/eal_memory.c
+++ b/lib/librte_eal/linux/eal/eal_memory.c
@@ -1340,6 +1340,8 @@ eal_legacy_hugepage_init(void)
 
 	/* hugetlbfs can be disabled */
 	if (internal_config.no_hugetlbfs) {
+		void *prealloc_addr;
+		size_t mem_sz;
 		struct rte_memseg_list *msl;
 		int n_segs, cur_seg, fd, flags;
 #ifdef MEMFD_SUPPORTED
@@ -1395,8 +1397,21 @@ eal_legacy_hugepage_init(void)
 			}
 		}
 #endif
-		addr = mmap(NULL, internal_config.memory, PROT_READ | PROT_WRITE,
-				flags, fd, 0);
+		/* preallocate address space for the memory, so that it can
+		 * fit into the DMA mask.
+		 */
+		mem_sz = internal_config.memory;
+		prealloc_addr = eal_get_virtual_area(
+				NULL, &mem_sz, page_sz, 0, 0);
+		if (prealloc_addr == NULL) {
+			RTE_LOG(ERR, EAL,
+					"%s: reserving memory area failed: "
+					"%s\n",
+					__func__, strerror(errno));
+			return -1;
+		}
+		addr = mmap(prealloc_addr, mem_sz, PROT_READ | PROT_WRITE,
+				flags | MAP_FIXED, fd, 0);
 		if (addr == MAP_FAILED) {
 			RTE_LOG(ERR, EAL, "%s: mmap() failed: %s\n",
 					__func__, strerror(errno));
-- 
2.17.1
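
For context, below is a minimal standalone sketch of the reserve-then-overmap
pattern the patch relies on (roughly what eal_get_virtual_area() provides):
reserve a PROT_NONE region, then map real memory over it with MAP_FIXED.
MAP_FIXED is safe in the second step only because we already own the range
we are replacing. The hint address and size are made-up illustration values,
not taken from the patch.

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t mem_sz = 64 << 20;	/* 64 MiB, arbitrary for the example */
	/* hypothetical hint below a narrow (e.g. 39-bit) IOMMU limit */
	void *hint = (void *)0x100000000ULL;
	void *va, *addr;

	/* step 1: reserve VA space without backing it. Without MAP_FIXED,
	 * the kernel honors the hint only if the range is free, and falls
	 * back to an address of its own choosing otherwise.
	 */
	va = mmap(hint, mem_sz, PROT_NONE,
			MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (va == MAP_FAILED) {
		fprintf(stderr, "reserve failed: %s\n", strerror(errno));
		return EXIT_FAILURE;
	}

	/* step 2: map usable memory over the reservation */
	addr = mmap(va, mem_sz, PROT_READ | PROT_WRITE,
			MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
	if (addr == MAP_FAILED) {
		fprintf(stderr, "mmap failed: %s\n", strerror(errno));
		munmap(va, mem_sz);
		return EXIT_FAILURE;
	}

	printf("memory mapped at %p\n", addr);
	munmap(addr, mem_sz);
	return EXIT_SUCCESS;
}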