> On 25 Oct 2018, at 01:57, Stephen Hemminger <step...@networkplumber.org> 
> wrote:
> 
> On Wed, 24 Oct 2018 23:09:15 +0200
> Damjan Marion <dmar...@me.com> wrote:
> 
>> — 
>> Damjan
>> 
>>> On 24 Oct 2018, at 23:04, Stephen Hemminger <step...@networkplumber.org> 
>>> wrote:
>>> 
>>> On Wed, 24 Oct 2018 21:07:15 +0200
>>> Damjan Marion <dmar...@me.com> wrote:
>>> 
>>>>> On 24 Oct 2018, at 20:41, Stephen Hemminger <step...@networkplumber.org> 
>>>>> wrote:
>>>>> 
>>>>> On Wed, 24 Oct 2018 11:31:38 -0700
>>>>> "Stephen Hemminger" <step...@networkplumber.org> wrote:
>>>> 
>>>> [snip]
>>>> 
>>>>> 
>>>>> The problem is that the setup code decides to make 1 huge page. And since
>>>>> the huge page size on this system is 256M, DPDK can't even get started.
>>>> 
>>>> Already fixed in master... (hopefully works on Aarch64)
>>>> 
>>> 
>>> Still broken, added some instrumentation to clib_sysfs_prealloc_hugepages
>>> and got.
>>> 
>>> clib_sysfs_prealloc_hugepages:261: found log2=21 page_size=2048 n=0
>>> clib_sysfs_prealloc_hugepages:264: pre-allocating 1 additional 2048K hugepages on numa node 0
>>> clib_sysfs_set_nr_hugepages:158: set /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages to 1
>>> DPDK won't start with only 2M of memory.
>> 
>> Yeah, I already fixed that, but it is not merged yet. This is just a prealloc
>> issue; if pages are preallocated, everything should work OK.
> 
> Got it, fixed for me.
> I wonder why CI never caught this?

Likely pages are pre-allocated at boot time, which is not a bad thing to do.
I added this sysfs prealloc functionality as a last-ditch attempt to survive if
pages are not there...
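For context, boot-time pre-allocation is the usual way to guarantee hugepages
before VPP/DPDK starts; the sysfs path below is the same one the prealloc log
above shows. A minimal sketch (the page count of 1024 and the sysctl.d file
name are illustrative, not from this thread; writes require root):

```shell
#!/bin/sh
# Show currently allocated hugepages (works unprivileged).
grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo

# Persistent pre-allocation at boot via sysctl (example value):
#   echo 'vm.nr_hugepages = 1024' > /etc/sysctl.d/80-hugepages.conf
#   sysctl -p /etc/sysctl.d/80-hugepages.conf

# Runtime pre-allocation per NUMA node, via the sysfs file the
# clib_sysfs_set_nr_hugepages log line refers to (root only):
#   echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
```

Note that runtime allocation can fail silently once memory is fragmented, which
is why doing it at boot is the safer option.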

