Adding a new failure path in the DMA mask check has exposed an issue: the `hugepage` pointer may point to memory that has already been unmapped, but the pointer value is still not NULL, so if the DMA mask check fails, the failure handler will attempt to unmap it a second time. Fix this by setting the `hugepage` pointer to NULL once it is no longer needed.
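
For illustration only, a minimal standalone sketch of the double-unmap hazard and the NULL-after-unmap pattern described above (the names here, such as map_table, are hypothetical and are not the EAL code):

	#include <stdlib.h>
	#include <sys/mman.h>

	/* hypothetical helper: map an anonymous region */
	static void *
	map_table(size_t len)
	{
		return mmap(NULL, len, PROT_READ | PROT_WRITE,
				MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	}

	int
	main(void)
	{
		size_t len = 4096;
		void *table = map_table(len);

		if (table == MAP_FAILED)
			return EXIT_FAILURE;

		/* ... use table ... */

		munmap(table, len);
		table = NULL;	/* clear the pointer once the mapping is gone */

		/* a later failure handler can now safely skip the second unmap */
		if (table != NULL)
			munmap(table, len);

		return EXIT_SUCCESS;
	}

Without the `table = NULL` assignment, the stale pointer would be passed to munmap() a second time in the failure handler, which is the same class of bug this patch fixes for `hugepage`.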
Coverity ID: 325730
Fixes: 165c89b84538 ("mem: use DMA mask check for legacy memory")
Cc: alejandro.luc...@netronome.com

Signed-off-by: Anatoly Burakov <anatoly.bura...@intel.com>
---
 lib/librte_eal/linuxapp/eal/eal_memory.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c b/lib/librte_eal/linuxapp/eal/eal_memory.c
index c1b5e0791..48b23ce19 100644
--- a/lib/librte_eal/linuxapp/eal/eal_memory.c
+++ b/lib/librte_eal/linuxapp/eal/eal_memory.c
@@ -1617,6 +1617,7 @@ eal_legacy_hugepage_init(void)
 	tmp_hp = NULL;
 
 	munmap(hugepage, nr_hugefiles * sizeof(struct hugepage_file));
+	hugepage = NULL;
 
 	/* we're not going to allocate more pages, so release VA space for
 	 * unused memseg lists
-- 
2.17.1