When RTE_EAL_NUMA_AWARE_HUGEPAGES is set to "n", not all memtypes
will be valid, because we skip the memtypes for unsupported NUMA
nodes. The fields of the skipped memtypes are never populated,
which leads to a division by zero further down the line.

Fix it by limiting the number of memtypes to the number of
memtypes we have actually created.
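
For illustration, below is a minimal standalone sketch of the failure
mode. It is not the actual memseg_primary_init() code; the constants
and the simplified memtype struct are illustrative assumptions. It
shows why the memtype count must be clamped to the number of entries
actually populated before the per-type divisions are done.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NUM_PAGE_SIZES 2   /* e.g. 2M and 1G hugepages */
    #define NUM_SOCKETS    2   /* two NUMA nodes detected */
    #define NUMA_AWARE     0   /* mimics RTE_EAL_NUMA_AWARE_HUGEPAGES=n */

    struct memtype {
        uint64_t page_sz;
        int socket_id;
    };

    int main(void)
    {
        const uint64_t page_sizes[NUM_PAGE_SIZES] = { 2ULL << 20, 1ULL << 30 };
        struct memtype memtypes[NUM_PAGE_SIZES * NUM_SOCKETS] = { 0 };
        unsigned int n_memtypes = NUM_PAGE_SIZES * NUM_SOCKETS;
        unsigned int cur_type = 0;
        uint64_t max_mem_per_type = 1ULL << 32; /* arbitrary budget */

        /* populate memtypes, skipping sockets > 0 when not NUMA-aware */
        for (int hpi = 0; hpi < NUM_PAGE_SIZES; hpi++) {
            for (int socket = 0; socket < NUM_SOCKETS; socket++) {
                if (!NUMA_AWARE && socket > 0)
                    break;
                memtypes[cur_type].page_sz = page_sizes[hpi];
                memtypes[cur_type].socket_id = socket;
                cur_type++;
            }
        }

        /* the fix: only account for the memtypes actually created */
        n_memtypes = cur_type;

        for (unsigned int i = 0; i < n_memtypes; i++) {
            /* without the clamp above, page_sz would be 0 for the
             * skipped entries and this division would fault */
            uint64_t max_segs = max_mem_per_type / memtypes[i].page_sz;
            printf("type %u: page_sz=%" PRIu64 " max_segs=%" PRIu64 "\n",
                    i, memtypes[i].page_sz, max_segs);
        }
        return 0;
    }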

Fixes: 1dd342d0fdc4 ("mem: improve segment list preallocation")
Cc: sta...@dpdk.org

Signed-off-by: Anatoly Burakov <anatoly.bura...@intel.com>
---
 lib/librte_eal/linuxapp/eal/eal_memory.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c b/lib/librte_eal/linuxapp/eal/eal_memory.c
index 6f94621d4..32feb415d 100644
--- a/lib/librte_eal/linuxapp/eal/eal_memory.c
+++ b/lib/librte_eal/linuxapp/eal/eal_memory.c
@@ -2230,6 +2230,8 @@ memseg_primary_init(void)
                                socket_id, hugepage_sz);
                }
        }
+       /* number of memtypes could have been lower due to no NUMA support */
+       n_memtypes = cur_type;
 
        /* set up limits for types */
        max_mem = (uint64_t)RTE_MAX_MEM_MB << 20;
-- 
2.17.1
