no

-- 
Damjan

> On 24 Oct 2018, at 16:17, Marco Varlese <mvarl...@suse.de> wrote:
> 
> Hi Damjan,
> 
> On Wed, 2018-10-24 at 16:14 +0200, Damjan Marion via Lists.Fd.Io wrote:
>> 
>> We merged patch which should fix things with 1G hugepages but I was not able 
>> to test it on arm, so please try...
> 
> Is this something that should also go onto stable/1810 for a potential future 
> dot release?
> If so, could you please cherry-pick it to that branch?
> 
> 
>> 
>> -- 
>> Damjan
> 
> Cheers,
> Marco
> 
>> 
>>> On 24 Oct 2018, at 05:28, Sirshak Das <sirshak....@arm.com> wrote:
>>> 2M works but 1G still fails.
>>>  
>>> I toned down the dpdk resource allocation to default:
>>> dpdk
>>> {
>>>   dev 0004:01:00.1
>>>   dev 0004:01:00.2
>>>   no-multi-seg
>>>   log-level debug
>>>   dev default
>>>   {
>>>     num-rx-queues 1
>>>     # num-tx-queues 4
>>>     num-rx-desc 2048
>>>     num-tx-desc 2048
>>>   }
>>>   # num-mbufs 128000
>>>   # socket-mem 2048,2048
>>>   no-tx-checksum-offload
>>> }
>>>  
>>> But here is the problem (for 16G of Hugepage memory):
>>> With:
>>> 2MB (nr_hugepages: 8192)
>>> GRUB_CMDLINE_LINUX="default_hugepagesz=2M hugepagesz=1G hugepages=16 
>>> hugepagesz=2M hugepages=8192 iommu.passthrough=1 isolcpus=16-45 
>>> nohz_full=16-45 rcu_nocbs=16-45"
>>> vs 1GB (nr_hugepages: 16)
>>> GRUB_CMDLINE_LINUX="default_hugepagesz=1G hugepagesz=1G hugepages=16 
>>> hugepagesz=2M hugepages=8192 iommu.passthrough=1 isolcpus=16-45 
>>> nohz_full=16-45 rcu_nocbs=16-45"
>>>  
>>> I am seeing a 49% performance improvement when I use 1G hugepages 
>>> compared to 2M.
>>> I am not enough of a hugepage expert to pinpoint the exact reason, but it 
>>> would surely help if you could fix the 1G hugepage issue.
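A likely factor is TLB reach: the same 16G of hugepage memory spans far fewer pages at 1G than at 2M, so fewer TLB entries cover the working set. A quick sanity check of the page counts (plain shell arithmetic, mirroring the GRUB lines above):

```shell
# Same 16G of hugepage memory, two page sizes (from the GRUB lines above).
total=$((16 * 1024 * 1024 * 1024))          # 16 GiB in bytes
pages_2m=$((total / (2 * 1024 * 1024)))     # number of 2M pages
pages_1g=$((total / (1024 * 1024 * 1024)))  # number of 1G pages
echo "2M pages needed: ${pages_2m}"         # 8192 -- matches nr_hugepages above
echo "1G pages needed: ${pages_1g}"         # 16   -- matches nr_hugepages above
```

The counts line up with the two `nr_hugepages` values quoted above (8192 vs 16).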
>>>  
>>> Thank you
>>> Sirshak Das 
>>> From: Damjan Marion <dmar...@me.com> 
>>> Sent: Tuesday, October 23, 2018 3:43 PM
>>> To: Sirshak Das <sirshak....@arm.com>
>>> Cc: vpp-dev <vpp-dev@lists.fd.io>; Honnappa Nagarahalli 
>>> <honnappa.nagaraha...@arm.com>; Lijian Zhang (Arm Technology China) 
>>> <lijian.zh...@arm.com>; khemendra kumar <khemendra.kuma...@gmail.com>; 
>>> Juraj Linkeš <juraj.lin...@pantheon.tech>
>>> Subject: Re: [vpp-dev] running VPP non-root broken
>>>  
>>>  
>>> OMG, you are good at wasting memory: 1G pages, 2G per socket given to dpdk 
>>> to sit empty :)
>>> 128K buffers....
>>> 128K buffers....
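To put rough numbers on that complaint (the ~2560-byte per-mbuf footprint below is an assumption for illustration; the real figure depends on the DPDK build and configured data-room size):

```shell
# Rough memory math behind the comment above. The per-mbuf footprint is
# an assumed ~2560 bytes (data room + headroom + metadata); the real
# value depends on the DPDK build.
socket_mem_mb=$((2048 + 2048))   # socket-mem 2048,2048 from startup.conf
mbufs=128000                     # num-mbufs 128000 from startup.conf
mbuf_bytes=2560                  # assumed per-mbuf footprint
mbuf_total_mb=$((mbufs * mbuf_bytes / 1024 / 1024))
echo "socket-mem reserved:   ${socket_mem_mb} MB"
echo "approx. mbuf usage:    ${mbuf_total_mb} MB"
```

Under these assumptions roughly 4 GB is reserved while the buffer pool itself would need only a few hundred MB, which is the gap being pointed out.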
>>>  
>>> For a start, can you switch the default page size to 2M? Newer x86 kernels 
>>> ignore it, but maybe it behaves
>>> differently on aarch64...
>>>  
>>> In the meantime I will fix a few Coverity issues...
>>>  
>>> -- 
>>> Damjan
>>> 
>>> 
>>>> On 23 Oct 2018, at 20:45, Sirshak Das <sirshak....@arm.com> wrote:
>>>>  
>>>> Hi Damjan,
>>>>  
>>>> I am also getting the following error; I don't know if it's related to 
>>>> this issue:
>>>> vlib_plugin_early_init:361: plugin path 
>>>> /home/sirdas/code/commita/vpp/build-root/install-vpp-native/vpp/lib/vpp_plugins
>>>> load_one_plugin:117: Plugin disabled (default): abf_plugin.so
>>>> load_one_plugin:117: Plugin disabled (default): acl_plugin.so
>>>> load_one_plugin:117: Plugin disabled (default): avf_plugin.so
>>>> load_one_plugin:117: Plugin disabled (default): cdp_plugin.so
>>>> load_one_plugin:189: Loaded plugin: dpdk_plugin.so (Data Plane Development 
>>>> Kit (DPDK))
>>>> load_one_plugin:117: Plugin disabled (default): flowprobe_plugin.so
>>>> load_one_plugin:117: Plugin disabled (default): gbp_plugin.so
>>>> load_one_plugin:117: Plugin disabled (default): gtpu_plugin.so
>>>> load_one_plugin:117: Plugin disabled (default): igmp_plugin.so
>>>> load_one_plugin:117: Plugin disabled (default): ila_plugin.so
>>>> load_one_plugin:117: Plugin disabled (default): ioam_plugin.so
>>>> load_one_plugin:117: Plugin disabled (default): ixge_plugin.so
>>>> load_one_plugin:117: Plugin disabled (default): l2e_plugin.so
>>>> load_one_plugin:117: Plugin disabled (default): lacp_plugin.so
>>>> load_one_plugin:117: Plugin disabled (default): lb_plugin.so
>>>> load_one_plugin:117: Plugin disabled (default): mactime_plugin.so
>>>> load_one_plugin:117: Plugin disabled (default): map_plugin.so
>>>> load_one_plugin:117: Plugin disabled (default): memif_plugin.so
>>>> load_one_plugin:117: Plugin disabled (default): nat_plugin.so
>>>> load_one_plugin:117: Plugin disabled (default): nsh_plugin.so
>>>> load_one_plugin:117: Plugin disabled (default): nsim_plugin.so
>>>> load_one_plugin:117: Plugin disabled (default): perfmon_plugin.so
>>>> load_one_plugin:117: Plugin disabled (default): pppoe_plugin.so
>>>> load_one_plugin:117: Plugin disabled (default): srv6ad_plugin.so
>>>> load_one_plugin:117: Plugin disabled (default): srv6am_plugin.so
>>>> load_one_plugin:117: Plugin disabled (default): srv6as_plugin.so
>>>> load_one_plugin:117: Plugin disabled (default): stn_plugin.so
>>>> load_one_plugin:117: Plugin disabled (default): svs_plugin.so
>>>> load_one_plugin:117: Plugin disabled (default): tlsmbedtls_plugin.so
>>>> load_one_plugin:117: Plugin disabled (default): tlsopenssl_plugin.so
>>>> load_one_plugin:117: Plugin disabled (default): unittest_plugin.so
>>>> load_one_plugin:117: Plugin disabled (default): vmxnet3_plugin.so
>>>> clib_elf_parse_file: open `linux-vdso.so.1': No such file or directory
>>>> load_one_vat_plugin:67: Loaded plugin: lb_test_plugin.so
>>>> load_one_vat_plugin:67: Loaded plugin: gtpu_test_plugin.so
>>>> load_one_vat_plugin:67: Loaded plugin: memif_test_plugin.so
>>>> load_one_vat_plugin:67: Loaded plugin: nsh_test_plugin.so
>>>> load_one_vat_plugin:67: Loaded plugin: nsim_test_plugin.so
>>>> load_one_vat_plugin:67: Loaded plugin: avf_test_plugin.so
>>>> load_one_vat_plugin:67: Loaded plugin: vmxnet3_test_plugin.so
>>>> load_one_vat_plugin:67: Loaded plugin: pppoe_test_plugin.so
>>>> load_one_vat_plugin:67: Loaded plugin: mactime_test_plugin.so
>>>> load_one_vat_plugin:67: Loaded plugin: lacp_test_plugin.so
>>>> load_one_vat_plugin:67: Loaded plugin: dpdk_test_plugin.so
>>>> load_one_vat_plugin:67: Loaded plugin: cdp_test_plugin.so
>>>> load_one_vat_plugin:67: Loaded plugin: flowprobe_test_plugin.so
>>>> load_one_vat_plugin:67: Loaded plugin: nat_test_plugin.so
>>>> load_one_vat_plugin:67: Loaded plugin: stn_test_plugin.so
>>>> load_one_vat_plugin:67: Loaded plugin: acl_test_plugin.so
>>>> load_one_vat_plugin:67: Loaded plugin: ioam_test_plugin.so
>>>> load_one_vat_plugin:67: Loaded plugin: map_test_plugin.so
>>>> vnet_feature_arc_init:206: feature node 'acl-plugin-out-ip6-fa' not found 
>>>> (before 'ip6-dvr-reinject', arc 'ip6-output')
>>>> vnet_feature_arc_init:206: feature node 'nat44-in2out-output' not found 
>>>> (before 'ip4-dvr-reinject', arc 'ip4-output')
>>>> vnet_feature_arc_init:206: feature node 'acl-plugin-out-ip4-fa' not found 
>>>> (before 'ip4-dvr-reinject', arc 'ip4-output')
>>>> vlib_physmem_shared_map_create: pmalloc_map_pages: failed to mmap 153 
>>>> pages at 0xfffaa3c00000 fd 23 numa 0 flags 0x42031: Invalid argument
>>>>  
>>>> dpdk_buffer_pool_create: failed to allocate mempool on socket 0
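The `flags 0x42031` word in the failing mmap can be decoded against the generic Linux mmap flag values (the constants below are assumptions from the generic ABI, not taken from the VPP source; arch-specific headers may differ):

```shell
# Decode the flags word 0x42031 from the pmalloc mmap failure above,
# using the generic Linux mmap flag values (assumed).
flags=$((0x42031))
[ $((flags & 0x00001)) -ne 0 ] && echo "MAP_SHARED"
[ $((flags & 0x00010)) -ne 0 ] && echo "MAP_FIXED"
[ $((flags & 0x00020)) -ne 0 ] && echo "MAP_ANONYMOUS"
[ $((flags & 0x02000)) -ne 0 ] && echo "MAP_LOCKED"
[ $((flags & 0x40000)) -ne 0 ] && echo "MAP_HUGETLB"
# MAP_HUGETLB with too few free pages of the selected size is a classic
# source of EINVAL here.
```

If `MAP_HUGETLB` is set, the kernel needs enough free hugepages of the default size at that NUMA node for the whole request, which fits the 1G-vs-2M symptom described above.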
>>>>  
>>>>  
>>>> This is the startup.conf I am using: 
>>>>  
>>>> ip
>>>> {
>>>>   heap-size 4G
>>>> }
>>>> unix
>>>> {
>>>>   nodaemon
>>>>   interactive
>>>>   cli-listen localhost:5002
>>>>   log /home/sirdas/var/log/vpp/vpp.log
>>>> }
>>>> ip6
>>>> {
>>>>   heap-size 4G
>>>>   hash-buckets 2000000
>>>> }
>>>> heapsize 4G
>>>> plugins
>>>> {
>>>>   plugin default
>>>>   {
>>>>     disable
>>>>   }
>>>>   plugin dpdk_plugin.so
>>>>   {
>>>>     enable
>>>>   }
>>>> }
>>>> cpu
>>>> {
>>>>   corelist-workers 18,20
>>>>   main-core 17
>>>> }
>>>> dpdk
>>>> {
>>>>   dev 0004:01:00.1
>>>>   dev 0004:01:00.2
>>>>   no-multi-seg
>>>>   log-level debug
>>>>   dev default
>>>>   {
>>>>     num-rx-queues 2
>>>>     num-rx-desc 2048
>>>>     num-tx-desc 2048
>>>>   }
>>>>   num-mbufs 128000
>>>>   socket-mem 2048,2048
>>>>   no-tx-checksum-offload
>>>> }
>>>>  
>>>> More info for debugging:
>>>> Boot parameters:
>>>> default_hugepagesz=1G hugepagesz=1G hugepages=32 hugepagesz=2M 
>>>> hugepages=2048 iommu.passthrough=1 isolcpus=16-45 nohz_full=16-45 
>>>> rcu_nocbs=16-45
>>>>  
>>>> OS & Kernel:
>>>> Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-38-generic aarch64)
>>>>  
>>>> Meminfo:
>>>> $ cat /proc/meminfo 
>>>> MemTotal:       98827048 kB
>>>> MemFree:         2802448 kB
>>>> MemAvailable:    2730464 kB
>>>> Buffers:           44960 kB
>>>> Cached:           477832 kB
>>>> SwapCached:            0 kB
>>>> Active:           368112 kB
>>>> Inactive:         232532 kB
>>>> Active(anon):      79504 kB
>>>> Inactive(anon):     1312 kB
>>>> Active(file):     288608 kB
>>>> Inactive(file):   231220 kB
>>>> Unevictable:           0 kB
>>>> Mlocked:               0 kB
>>>> SwapTotal:      96505852 kB
>>>> SwapFree:       96505852 kB
>>>> Dirty:                24 kB
>>>> Writeback:             0 kB
>>>> AnonPages:         78024 kB
>>>> Mapped:           144960 kB
>>>> Shmem:              2956 kB
>>>> Slab:             207728 kB
>>>> SReclaimable:      67780 kB
>>>> SUnreclaim:       139948 kB
>>>> KernelStack:       10464 kB
>>>> PageTables:         2300 kB
>>>> NFS_Unstable:          0 kB
>>>> Bounce:                0 kB
>>>> WritebackTmp:          0 kB
>>>> CommitLimit:    98733456 kB
>>>> Committed_AS:     940272 kB
>>>> VmallocTotal:   135290290112 kB
>>>> VmallocUsed:           0 kB
>>>> VmallocChunk:          0 kB
>>>> HardwareCorrupted:     0 kB
>>>> AnonHugePages:         0 kB
>>>> ShmemHugePages:        0 kB
>>>> ShmemPmdMapped:        0 kB
>>>> CmaTotal:              0 kB
>>>> CmaFree:               0 kB
>>>> HugePages_Total:      86
>>>> HugePages_Free:       86
>>>> HugePages_Rsvd:        0
>>>> HugePages_Surp:        0
>>>> Hugepagesize:    1048576 kB
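A quick cross-check of the meminfo numbers above (shell arithmetic only) shows how much of the machine's RAM is tied up in the 1G hugepage pool:

```shell
# Sanity-check the meminfo output above: size of the 1G hugepage pool
# relative to total memory.
hp_total=86                 # HugePages_Total
hp_size_kb=1048576          # Hugepagesize: 1048576 kB (1 GiB)
mem_total_kb=98827048       # MemTotal
hp_kb=$((hp_total * hp_size_kb))
pct=$((hp_kb * 100 / mem_total_kb))
echo "hugepage pool: ${hp_kb} kB (~${pct}% of MemTotal)"
```

Roughly 91% of RAM is reserved for hugepages, which is consistent with the low `MemFree`/`MemAvailable` figures above.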
>>>>  
>>>> Let me know if I am doing anything wrong:
>>>>  
>>>> This is VPP (master branch) with HEAD at:
>>>>  
>>>> commit 68b4da67deb2e8ca224bb5abaeb9dbc7ae8e378c (HEAD -> master, 
>>>> origin/master, origin/HEAD)
>>>> Author: Damjan Marion <damar...@cisco.com>
>>>> Date:   Sun Sep 30 18:26:20 2018 +0200
>>>>  
>>>>     Numa-aware, growable physical memory allocator (pmalloc)
>>>>     
>>>>     Change-Id: Ic4c46bc733afae8bf0d8146623ed15633928de30
>>>>     Signed-off-by: Damjan Marion <damar...@cisco.com>
>>>>  
>>>>  
>>>> Thank you
>>>> Sirshak Das
>>>> From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Damjan 
>>>> Marion via Lists.Fd.Io
>>>> Sent: Tuesday, October 23, 2018 11:40 AM
>>>> To: vpp-dev <vpp-dev@lists.fd.io>
>>>> Cc: vpp-dev@lists.fd.io
>>>> Subject: [vpp-dev] running VPP non-root broken
>>>>  
>>>>  
>>>> Folks,
>>>>  
>>>> Looks like my big physmem patch breaks non-root operation of VPP. I am 
>>>> working on it, and it will take a bit of time, so as a workaround 
>>>> "make test" can be run with sudo.
>>>>  
>>>> Let me know if there are any issues and I will revert, but I would like 
>>>> to avoid that due to the size of the patch.
>>>>  
>>>> -- 
>>>> Damjan
>>>>  
>>>> IMPORTANT NOTICE: The contents of this email and any attachments are 
>>>> confidential and may also be privileged. If you are not the intended 
>>>> recipient, please notify the sender immediately and do not disclose the 
>>>> contents to any other person, use it for any purpose, or store or copy the 
>>>> information in any medium. Thank you.
>>> 
>>>  
>> 
>> -=-=-=-=-=-=-=-=-=-=-=-
>> Links: You receive all messages sent to this group.
>> 
>> View/Reply Online (#10957): https://lists.fd.io/g/vpp-dev/message/10957
>> Mute This Topic: https://lists.fd.io/mt/27570325/675056
>> Group Owner: vpp-dev+ow...@lists.fd.io
>> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [mvarl...@suse.de]
>> -=-=-=-=-=-=-=-=-=-=-=-
> -- 
> Marco V
> 
> SUSE LINUX GmbH | GF: Felix Imendörffer, Jane Smithard, Graham Norton
> HRB 21284 (AG Nürnberg) Maxfeldstr. 5, D-90409, Nürnberg

