On 07-Oct-18 8:31 PM, Darek Stojaczyk wrote:
RTE_MEMZONE_SIZE_HINT_ONLY wasn't checked in any way,
causing size hints to be treated as hard requirements.
This resulted in some allocations failing prematurely.
Fixes: 68b6092bd3c7 ("malloc: allow reserving biggest element")
Cc: anatoly.bura...@intel.com
Cc: sta...@dpdk.org
Signed-off-by: Darek Stojaczyk <dariusz.stojac...@intel.com>
---
lib/librte_eal/common/malloc_heap.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/lib/librte_eal/common/malloc_heap.c b/lib/librte_eal/common/malloc_heap.c
index ac7bbb3ba..d2a8bd8dc 100644
--- a/lib/librte_eal/common/malloc_heap.c
+++ b/lib/librte_eal/common/malloc_heap.c
@@ -165,7 +165,9 @@ find_biggest_element(struct malloc_heap *heap, size_t *size,
 		for (elem = LIST_FIRST(&heap->free_head[idx]);
 				!!elem; elem = LIST_NEXT(elem, free_list)) {
 			size_t cur_size;
-			if (!check_hugepage_sz(flags, elem->msl->page_sz))
+			if ((flags & RTE_MEMZONE_SIZE_HINT_ONLY) == 0 &&
+					!check_hugepage_sz(flags,
+						elem->msl->page_sz))
 				continue;
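
For context, a minimal sketch of the kind of call this fix serves, assuming
the documented rte_memzone_reserve() behaviour where len == 0 asks for the
biggest available free element; the memzone name and the error handling are
made up for illustration:

#include <stdio.h>

#include <rte_eal.h>
#include <rte_memzone.h>

int
main(int argc, char **argv)
{
	const struct rte_memzone *mz;

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	/*
	 * Ask for the biggest free element, preferring 2M pages. Because
	 * RTE_MEMZONE_SIZE_HINT_ONLY is set, the page size is only a hint:
	 * if no 2M-backed memory is free, the lookup should fall back to
	 * whatever page size is available instead of failing outright,
	 * which is what the patched check now permits.
	 * The memzone name "hint_example" is arbitrary.
	 */
	mz = rte_memzone_reserve("hint_example", 0, SOCKET_ID_ANY,
			RTE_MEMZONE_2MB | RTE_MEMZONE_SIZE_HINT_ONLY);
	if (mz == NULL)
		printf("no free memory found\n");
	else
		printf("reserved %zu bytes\n", mz->len);

	rte_eal_cleanup();
	return 0;
}
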
Reviewed-by: Anatoly Burakov <anatoly.bura...@intel.com>
Although, to be frank, the whole concept of "reserving the biggest available
memzone" is currently broken by dynamic memory allocation. There is
currently no way to allocate "as many hugepages as you can", so we are only
looking at memory that has already been allocated, which in the general case
is less than a page in size (unless you use legacy mode or the memory
preallocation switches).
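
To make that caveat concrete, a hedged sketch of what preallocation buys you
here; the --legacy-mem argument (or, alternatively, --socket-mem) and the
names below are assumptions about typical usage, not anything mandated by
the patch:

#include <rte_eal.h>
#include <rte_memzone.h>

int
main(void)
{
	/*
	 * With legacy mode (or an explicit --socket-mem preallocation) the
	 * heap is populated up front, so a len == 0 "biggest element"
	 * lookup can find something substantial. In the default dynamic
	 * mode, the free space it sees is only whatever is left over from
	 * earlier allocations. Program and memzone names are arbitrary.
	 */
	char *eal_args[] = { "biggest_mz_example", "--legacy-mem" };

	if (rte_eal_init(2, eal_args) < 0)
		return -1;

	const struct rte_memzone *mz =
		rte_memzone_reserve("biggest_free", 0, SOCKET_ID_ANY, 0);

	rte_eal_cleanup();
	return mz != NULL ? 0 : -1;
}
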
--
Thanks,
Anatoly