> -----Original Message-----
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Ananyev, Konstantin
> Sent: Thursday, December 18, 2014 5:43 PM
> To: Newman Poborsky; dev at dpdk.org
> Subject: Re: [dpdk-dev] rte_mempool_create fails with ENOMEM
>
> Hi
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Newman Poborsky
> > Sent: Thursday, December 18, 2014 1:26 PM
> > To: dev at dpdk.org
> > Subject: [dpdk-dev] rte_mempool_create fails with ENOMEM
> >
> > Hi,
> >
> > could someone please explain why mempool creation sometimes fails
> > with ENOMEM?
> >
> > I run my test app several times without any problems, and then I start
> > getting an ENOMEM error when creating the mempools that are used for
> > packets. I try to delete everything from /mnt/huge, I increase the
> > number of huge pages and remount /mnt/huge, but nothing helps.
> >
> > There is more than enough memory on the server. I tried to debug the
> > rte_mempool_create() call, and it seems that after the server is
> > restarted the free memory segments are bigger than 2MB, but after
> > running the test app several times all free memory segments are only
> > 2MB. Since I am requesting 8MB for my packet mempool, the allocation
> > fails. I'm not really sure that this conclusion is correct, though.
>
> Yes, rte_mempool_create() uses rte_memzone_reserve() to allocate a
> single physically contiguous chunk of memory.
> If no such chunk exists, it fails.
> Why physically contiguous?
> The main reason is to make things easier for us: that way we don't
> have to worry about an mbuf crossing a page boundary.
> You can work around the problem like this:
> Allocate the maximum amount of memory you would need to hold all mbufs
> in the worst case (all pages physically disjoint) using rte_malloc().

Actually, my mistake: rte_malloc() wouldn't help you here.
You probably need to allocate some external (not managed by EAL) memory
in that case, maybe with mmap() and MAP_HUGETLB, or something similar.

> Figure out its physical mappings.
> Call rte_mempool_xmem_create().
> You can look at app/test-pmd/mempool_anon.c as a reference.
> It uses the same approach to create a mempool over 4K pages.
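
Roughly, the whole sequence could look like the sketch below. Note that
this is untested: create_anon_pool() and PG_SHIFT are just names made up
for the example, error handling is trimmed, and it assumes 2MB hugepages
are already reserved in the kernel pool. rte_mem_virt2phy() reads
/proc/self/pagemap, which is essentially what mempool_anon.c does by hand.

#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

#include <rte_memory.h>
#include <rte_mempool.h>
#include <rte_mbuf.h>

#define PG_SHIFT 21     /* log2(2MB), the hugepage size used here */

static struct rte_mempool *
create_anon_pool(const char *name, unsigned n, unsigned elt_size,
                 unsigned cache_size, int socket_id)
{
        /* worst-case size needed to hold n elements on 2MB pages */
        size_t sz = rte_mempool_xmem_size(n, elt_size, PG_SHIFT);
        uint32_t pg_num = sz >> PG_SHIFT;
        uint32_t i;

        /* external memory, not managed by EAL */
        char *va = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (va == MAP_FAILED)
                return NULL;
        memset(va, 0, sz);      /* fault all pages in before translating */

        /* physical address of each page */
        phys_addr_t *pa = calloc(pg_num, sizeof(*pa));
        for (i = 0; i != pg_num; i++)
                pa[i] = rte_mem_virt2phy(va + ((size_t)i << PG_SHIFT));

        /* build the mempool on top of the external pages */
        struct rte_mempool *mp = rte_mempool_xmem_create(name, n, elt_size,
                cache_size, sizeof(struct rte_pktmbuf_pool_private),
                rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL,
                socket_id, 0, va, pa, pg_num, PG_SHIFT);

        free(pa);
        return mp;
}

The paddr[] table is what lets the mempool span physically disjoint
pages: each element is kept within a single page, and the table tells
the mempool where every page really lives.
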
> We should probably add a similar function to the mempool API
> (create_scatter_mempool or something), or just add a new flag
> (USE_SCATTER_MEM) to rte_mempool_create().
> Right now, though, neither of those exists.
>
> Another quick alternative: use 1G pages.
>
> Konstantin
>
> > Does anybody have any idea what to check, and how running my test app
> > several times affects hugepages?
> >
> > This doesn't make any sense to me, because after the test app exits
> > its resources should be freed, right?
> >
> > This has been driving me crazy for days now. I tried reading a bit
> > more theory about hugepages, but didn't find anything that could help
> > me. Maybe it's something else and completely trivial, but I can't
> > figure it out, so any help is appreciated.
> >
> > Thank you!
> >
> > BR,
> > Newman P.
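
P.S. To check whether fragmentation really is what you are hitting, you
can dump the EAL memory layout right after rte_eal_init() and compare a
fresh boot against a run after several restarts of the app. A minimal
sketch, assuming the DPDK 1.8-era API (dump_mem_layout() is just a name
for the example):

#include <stdio.h>
#include <rte_config.h>
#include <rte_memory.h>
#include <rte_memzone.h>

/* call this right after rte_eal_init() */
static void
dump_mem_layout(void)
{
        const struct rte_memseg *ms = rte_eal_get_physmem_layout();
        size_t largest = 0;
        int i;

        /* every physically contiguous segment EAL discovered */
        rte_dump_physmem_layout(stdout);
        /* and everything already reserved out of those segments */
        rte_memzone_dump(stdout);

        for (i = 0; i < RTE_MAX_MEMSEG && ms[i].addr != NULL; i++)
                if (ms[i].len > largest)
                        largest = ms[i].len;

        /* an 8MB mempool needs one segment at least that big */
        printf("largest contiguous segment: %zu bytes\n", largest);
}

If the largest segment reported there is smaller than your 8MB request
plus mempool overhead, rte_memzone_reserve() has nothing big enough to
hand out, and you get exactly the ENOMEM you are seeing.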