On 5/26/2023 4:41 AM, Fengnan Chang wrote:
Under legacy mode, if the number of contiguous memsegs is greater
than RTE_MAX_MEMSEG_PER_LIST, EAL init will fail even though
another memseg list is empty, because only one memseg list is used
for the check in remap_needed_hugepages.
Fix this by making remap_segment return how many segments it mapped:
remap_segment maps as many contiguous segments as it can, and if the
run exceeds its capacity, remap_needed_hugepages continues mapping
the remaining pages.
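As a rough illustration (not part of the actual diff), the caller side
could consume the new return value along these lines; the loop shape and
the local variable `remapped` are assumptions made for this sketch:

  /* sketch: keep calling remap_segment() until the whole run of pages
   * [seg_start, seg_end) has been placed, possibly across several
   * memseg lists; remap_segment() now reports how many pages it mapped.
   */
  while (seg_start < seg_end) {
          int remapped = remap_segment(hugepages, seg_start, seg_end);

          if (remapped < 0)
                  return -1;
          seg_start += remapped;
  }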
For example:
hugepage configuration:
cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
10241
10239
startup log:
EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
EAL: Requesting 13370 pages of size 2MB from socket 0
EAL: Requesting 7110 pages of size 2MB from socket 1
EAL: Attempting to map 14220M on socket 1
EAL: Allocated 14220M on socket 1
EAL: Attempting to map 26740M on socket 0
EAL: Could not find space for memseg. Please increase 32768 and/or 65536 in configuration.
EAL: Couldn't remap hugepage files into memseg lists
EAL: FATAL: Cannot init memory
EAL: Cannot init memory
Signed-off-by: Fengnan Chang <changfeng...@bytedance.com>
Signed-off-by: Lin Li <lilint...@bytedance.com>
Signed-off-by: Burakov Anatoly <anatoly.bura...@intel.com>
Hi,
Thanks for taking my suggested implementation on board!
---
lib/eal/linux/eal_memory.c | 55 +++++++++++++++++++++++++-------------
1 file changed, 36 insertions(+), 19 deletions(-)
diff --git a/lib/eal/linux/eal_memory.c b/lib/eal/linux/eal_memory.c
index 60fc8cc6ca..085defdee5 100644
--- a/lib/eal/linux/eal_memory.c
+++ b/lib/eal/linux/eal_memory.c
@@ -681,6 +681,7 @@ remap_segment(struct hugepage_file *hugepages, int seg_start, int seg_end)
/* find free space in memseg lists */
for (msl_idx = 0; msl_idx < RTE_MAX_MEMSEG_LISTS; msl_idx++) {
+ int free_len;
bool empty;
msl = &mcfg->memsegs[msl_idx];
arr = &msl->memseg_arr;
@@ -692,24 +693,28 @@ remap_segment(struct hugepage_file *hugepages, int seg_start, int seg_end)
/* leave space for a hole if array is not empty */
empty = arr->count == 0;
- ms_idx = rte_fbarray_find_next_n_free(arr, 0,
- seg_len + (empty ? 0 : 1));
-
- /* memseg list is full? */
- if (ms_idx < 0)
+ /* find start of the biggest contiguous block and its size */
+ ms_idx = rte_fbarray_find_biggest_free(arr, 0);
+ free_len = rte_fbarray_find_contig_free(arr, ms_idx);
+ if (free_len < 0)
continue;
Technically, rte_fbarray_find_biggest_free() can return -1, and that
value should not be passed to rte_fbarray_find_contig_free(), because
the index parameter it accepts is unsigned (meaning -1 will be converted
to UINT32_MAX). This *would* be caught by parameter checking, so in
practice it's not a bug, but I'm pretty sure code analyzers will complain
about it, so the control flow needs to be changed somewhat.
Specifically, we should check for `ms_idx < 0` and continue, before
passing it to `rte_fbarray_find_contig_free()`.
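Something like this (untested, just to show the reordering; it reuses the
variables already in the patch):

  /* find start of the biggest contiguous block and its size */
  ms_idx = rte_fbarray_find_biggest_free(arr, 0);
  if (ms_idx < 0)
          continue;
  free_len = rte_fbarray_find_contig_free(arr, ms_idx);
  if (free_len < 0)
          continue;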
-
/* leave some space between memsegs, they are not IOVA
* contiguous, so they shouldn't be VA contiguous either.
*/
- if (!empty)
+ if (!empty) {
ms_idx++;
+ free_len--;
+ }
+ printf("free len %d seg_len %d ms_idx %d\n", free_len, seg_len, ms_idx);
Debugging leftover?
+ /* we might not get all of the space we wanted */
+ free_len = RTE_MIN(seg_len, free_len);
+ seg_end = seg_start + free_len;
+ seg_len = seg_end - seg_start;
break;
}
if (msl_idx == RTE_MAX_MEMSEG_LISTS) {
- RTE_LOG(ERR, EAL, "Could not find space for memseg. Please increase %s and/or %s in configuration.\n",
- RTE_STR(RTE_MAX_MEMSEG_PER_TYPE),
- RTE_STR(RTE_MAX_MEM_MB_PER_TYPE));
+ RTE_LOG(ERR, EAL, "Could not find space for memseg. Please increase RTE_MAX_MEMSEG_PER_LIST "
+ "RTE_MAX_MEMSEG_PER_TYPE and/or RTE_MAX_MEM_MB_PER_TYPE in configuration.\n");
I don't think this change should be part of this patch, as this is
fixing a separate issue. If we are to fix it, it should be a separate patch.
With the above changes,
Reviewed-by: Anatoly Burakov <anatoly.bura...@intel.com>
--
Thanks,
Anatoly