On 5/6/2025 7:50 PM, Jake Freeland wrote:
Use rte_fbarray_is_used() to check whether the previous fbarray entry is
already in use.

Using prev_ms_idx for this is flawed when we loop through multiple memseg
lists: each memseg list has its own count and length, so carrying a
prev_ms_idx from one memseg list into the check against another, non-empty
memseg list can lead to incorrect hole placement.
Signed-off-by: Jake Freeland <jf...@freebsd.org>
---
lib/eal/freebsd/eal_memory.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/lib/eal/freebsd/eal_memory.c b/lib/eal/freebsd/eal_memory.c
index 3b72e13506..bcf5a6f986 100644
--- a/lib/eal/freebsd/eal_memory.c
+++ b/lib/eal/freebsd/eal_memory.c
@@ -104,7 +104,6 @@ rte_eal_hugepage_init(void)
 	for (i = 0; i < internal_conf->num_hugepage_sizes; i++) {
 		struct hugepage_info *hpi;
 		rte_iova_t prev_end = 0;
-		int prev_ms_idx = -1;
 		uint64_t page_sz, mem_needed;
 		unsigned int n_pages, max_pages;
@@ -168,9 +167,9 @@ rte_eal_hugepage_init(void)
 				if (ms_idx < 0)
 					continue;

I guess an alternative fix would be to reset prev_ms_idx after the
(ms_idx < 0) check above? There is a rough sketch of what I mean after the
hunk below.
-				if (need_hole && prev_ms_idx == ms_idx - 1)
+				if (need_hole &&
+				    rte_fbarray_is_used(arr, ms_idx - 1))
 					ms_idx++;
-				prev_ms_idx = ms_idx;

 				break;
 			}
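
To make the above concrete, here is a rough, untested sketch of what I had
in mind. The lines around the reset are just the context from the hunk
above, so treat it as an illustration of the alternative rather than a
tested change:

	if (ms_idx < 0) {
		/* this memseg list cannot take the segment; the remembered
		 * index refers to a different memseg list, so drop it before
		 * moving on to the next list
		 */
		prev_ms_idx = -1;
		continue;
	}

	if (need_hole && prev_ms_idx == ms_idx - 1)
		ms_idx++;
	prev_ms_idx = ms_idx;

	break;

That keeps the prev_ms_idx bookkeeping but drops the stale index whenever we
give up on one list and try the next. Your rte_fbarray_is_used() version
avoids carrying the index around at all, so it may well be the simpler
option; the sketch is only meant to show the alternative.
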
--
Thanks,
Anatoly