On 5/26/2023 3:41 PM, Burakov, Anatoly wrote:
On 5/26/2023 4:41 AM, Fengnan Chang wrote:
Under legacy mode, if the number of contiguous memsegs is greater
than RTE_MAX_MEMSEG_PER_LIST, EAL init will fail even though
another memseg list is empty, because only one memseg list is
checked in remap_needed_hugepages.

Fix this by making remap_segment return how many segments it mapped:
remap_segment maps as many contiguous segments as it can, and if the
request exceeds its capacity, remap_needed_hugepages continues
mapping the remaining pages.
For example:
hugepage configure:
cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
10241
10239
startup log:
EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
EAL: Requesting 13370 pages of size 2MB from socket 0
EAL: Requesting 7110 pages of size 2MB from socket 1
EAL: Attempting to map 14220M on socket 1
EAL: Allocated 14220M on socket 1
EAL: Attempting to map 26740M on socket 0
EAL: Could not find space for memseg. Please increase 32768 and/or 65536 in configuration.
EAL: Couldn't remap hugepage files into memseg lists
EAL: FATAL: Cannot init memory
EAL: Cannot init memory
Signed-off-by: Fengnan Chang <changfeng...@bytedance.com>
Signed-off-by: Lin Li <lilint...@bytedance.com>
Signed-off-by: Anatoly Burakov <anatoly.bura...@intel.com>
Hi,
Thanks for taking my suggested implementation on board!
---
lib/eal/linux/eal_memory.c | 55 +++++++++++++++++++++++++-------------
1 file changed, 36 insertions(+), 19 deletions(-)
diff --git a/lib/eal/linux/eal_memory.c b/lib/eal/linux/eal_memory.c
index 60fc8cc6ca..085defdee5 100644
--- a/lib/eal/linux/eal_memory.c
+++ b/lib/eal/linux/eal_memory.c
@@ -681,6 +681,7 @@ remap_segment(struct hugepage_file *hugepages, int seg_start, int seg_end)
/* find free space in memseg lists */
for (msl_idx = 0; msl_idx < RTE_MAX_MEMSEG_LISTS; msl_idx++) {
+ int free_len;
bool empty;
msl = &mcfg->memsegs[msl_idx];
arr = &msl->memseg_arr;
@@ -692,24 +693,28 @@ remap_segment(struct hugepage_file *hugepages, int seg_start, int seg_end)
/* leave space for a hole if array is not empty */
empty = arr->count == 0;
- ms_idx = rte_fbarray_find_next_n_free(arr, 0,
- seg_len + (empty ? 0 : 1));
-
- /* memseg list is full? */
- if (ms_idx < 0)
+ /* find start of the biggest contiguous block and its size */
+ ms_idx = rte_fbarray_find_biggest_free(arr, 0);
+ free_len = rte_fbarray_find_contig_free(arr, ms_idx);
+ if (free_len < 0)
continue;
Missed this.

When we're splitting up segments, we're looking for contiguous free
areas that are at least two memsegs long (one for the segment, one for
the hole), so if this memseg list is full but contains segments that
were split up, there will be a bunch of holes in it, and free_len will
be 1 (because every hole is 1 segment long by definition). So, we
should only accept free areas that are at least two segments long.
--
Thanks,
Anatoly