On 2024-11-08 19:17, David Marchand wrote:
OVS locks all pages in memory to avoid page faults while processing packets. 1 MB for each lcore translates to allocating 128 MB with default build options on x86 (RTE_MAX_LCORE defaults to 128). This resulted in OOM while running unit tests in parallel.
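For reference, a quick sketch of the arithmetic behind the 128 MB figure. RTE_MAX_LCORE and RTE_MAX_LCORE_VAR come from the DPDK build configuration; that the locked area is simply their product is an assumption on my part, but it matches the numbers quoted above (128 lcores x 1 MB = 128 MB).

#include <stdio.h>
#include <rte_config.h>	/* pulls in RTE_MAX_LCORE and RTE_MAX_LCORE_VAR */

int main(void)
{
	/* With the previous default of RTE_MAX_LCORE_VAR = 1048576 (1 MB)
	 * and RTE_MAX_LCORE = 128 on x86, the per-lcore variable area
	 * comes to 128 MB, all of which becomes resident once the
	 * process calls mlockall(). */
	size_t area = (size_t)RTE_MAX_LCORE * RTE_MAX_LCORE_VAR;

	printf("lcore variable area: %zu MB\n", area / (1024 * 1024));
	return 0;
}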
Could you give some more context? If you hit OOM after adding 128 MB of RSS, how much memory have you budgeted for the app in total? What are the packet mempool sizes, for example?
If you are running tests in parallel, it likely means these aren't performance characterization tests, and thus you can disable the mlockall() call and fit many more copies under the OOM ceiling.
Another alternative might be to unlock (munlock()) the lcore variables area, or lock the whole BSS on-fault (MCL_ONFAULT).
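A minimal sketch of those two alternatives, assuming a Linux target (MCL_ONFAULT needs kernel 4.4+ and a reasonably recent glibc). DPDK does not export the address and length of the lcore variable area, so the munlock() variant is shown only as a hypothetical comment.

#define _GNU_SOURCE
#include <sys/mman.h>
#include <stdio.h>

int main(void)
{
	/* Lock current and future mappings, but only make pages resident
	 * once they are actually touched; untouched parts of a large
	 * lcore variable area then add nothing to RSS. */
	if (mlockall(MCL_CURRENT | MCL_FUTURE | MCL_ONFAULT) != 0)
		perror("mlockall");

	/* Hypothetical alternative: keep plain mlockall() but explicitly
	 * unlock the mostly-unused lcore variable area. 'area' and
	 * 'area_len' are placeholders; no such symbols exist in the
	 * public API today. */
	/* munlock(area, area_len); */

	return 0;
}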
At the moment, the most demanding DPDK user of lcore variables is rte_service, with a 2112-byte object.
<rte_lcore_var.h> is a public API, so the largest object may well be much larger than that.
That said, maybe 1 MB is too large.
Limit the lcore variable maximum size to 4 kB, which looks more reasonable.
NAK. 128 kB?
Fixes: 5bce9bed67ad ("eal: add static per-lcore memory allocation facility")
I've mentioned this property of lcore variables a couple of times on the list, so it should come as no surprise to anyone.
Signed-off-by: David Marchand <david.march...@redhat.com>
---
 config/rte_config.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/config/rte_config.h b/config/rte_config.h
index 498d509244..5f0627679f 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -41,7 +41,7 @@
 /* EAL defines */
 #define RTE_CACHE_GUARD_LINES 1
 #define RTE_MAX_HEAPS 32
-#define RTE_MAX_LCORE_VAR 1048576
+#define RTE_MAX_LCORE_VAR 4096
 #define RTE_MAX_MEMSEG_LISTS 128
 #define RTE_MAX_MEMSEG_PER_LIST 8192
 #define RTE_MAX_MEM_MB_PER_LIST 32768
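To make the headroom discussion concrete, here is a hedged sketch of a compile-time guard a user of lcore variables could add. The struct is hypothetical; the 2112-byte payload mirrors the rte_service per-lcore object size mentioned above, and RTE_MAX_LCORE_VAR is the ceiling changed by this patch (4096 afterwards, i.e. less than 2x headroom over rte_service's current needs).

#include <assert.h>
#include <rte_config.h>

/* Hypothetical per-lcore state, sized like rte_service's current
 * per-lcore object (2112 bytes). */
struct my_lcore_state {
	unsigned char payload[2112];
};

/* Fail the build if the object outgrows the per-lcore variable budget. */
static_assert(sizeof(struct my_lcore_state) <= RTE_MAX_LCORE_VAR,
	      "per-lcore state exceeds RTE_MAX_LCORE_VAR");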