> From: Mattias Rönnblom [mailto:hof...@lysator.liu.se]
> Sent: Friday, 8 November 2024 23.23
>
> On 2024-11-08 20:53, Morten Brørup wrote:
> >> From: Morten Brørup [mailto:m...@smartsharesystems.com]
> >> Sent: Friday, 8 November 2024 19.35
> >>
> >>> From: David Marchand [mailto:david.march...@redhat.com]
> >>> Sent: Friday, 8 November 2024 19.18
> >>>
> >>> OVS locks all pages to avoid page faults while processing packets.
> >
> > It sounds smart, so I just took a look at how it does this. I'm not
> > sure, but it seems like it only locks pages that are actually mapped
> > (current and future).
> >
>
> mlockall(MCL_CURRENT) will bring in the whole BSS, it seems. Plus all
> the rest, like unused parts of the execution stacks, the data section
> and unused code (text) in the binary and all libraries it has linked to.
>
> It makes a simple (e.g., a unit test) DPDK 24.07 program use ~33x more
> resident memory. After lcore variables, the same MCL_CURRENT-ed
> program is ~30% larger than before. So, a relatively modest increase.
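For reference, the locking discussed above boils down to a single POSIX
mlockall() call on Linux. A minimal sketch (not OVS's actual code) of what
such a program does:

	/* Lock everything currently mapped (text, data, BSS, stacks,
	 * libraries) and, with MCL_FUTURE, everything mapped later, so
	 * packet processing does not take page faults. */
	#include <stdio.h>
	#include <sys/mman.h>

	int
	main(void)
	{
		if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
			perror("mlockall");
			return 1;
		}
		printf("all mapped pages locked\n");
		return 0;
	}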
Thank you for testing this, Mattias.

What are the absolute numbers, i.e. in KB, to get an idea of the numbers I
should be looking for?

I wonder why the footprint grows at all... Intuitively, the same variables
should consume approximately the same amount of RAM, regardless of how they
are allocated.

Speculating...
The lcore_states were allocated through rte_calloc() and thus used some
space in the already allocated hugepages, so they didn't add more pages to
the footprint. But they do when allocated and initialized as lcore
variables: the first lcore variable allocated/initialized uses
RTE_MAX_LCORE (128) pages of 4 KB each = 512 KB total. Still, it seems
unlikely that adding 512 KB increases the footprint by 30 %.

> The numbers are less drastic, obviously, for many real-world programs,
> which have large packet pools and other memory hogs.

Agree. However, it would be good to understand why switching to lcore
variables has this effect on the footprint when using mlockall(), like OVS
does.
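To get the absolute numbers in KB, one simple approach (assuming Linux and
a readable /proc) is to print VmRSS from /proc/self/status before and after
the mlockall() call. print_vmrss below is just an illustrative helper, not
anything from DPDK or OVS:

	/* Print the resident set size (reported by the kernel in KB)
	 * before and after locking all currently mapped pages. */
	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>

	static void
	print_vmrss(const char *label)
	{
		char line[256];
		FILE *f = fopen("/proc/self/status", "r");

		if (f == NULL)
			return;
		while (fgets(line, sizeof(line), f) != NULL) {
			if (strncmp(line, "VmRSS:", 6) == 0) {
				printf("%s %s", label, line);
				break;
			}
		}
		fclose(f);
	}

	int
	main(void)
	{
		print_vmrss("before:");
		if (mlockall(MCL_CURRENT) != 0)
			perror("mlockall");
		print_vmrss("after: ");
		return 0;
	}

Running something like this in a unit-test-sized DPDK program, built with
and without lcore variables, would give the absolute resident-set numbers
and show where the ~30 % growth comes from.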