> On 14 Apr 2017, at 23:50, Ernst, Eric <eric.er...@intel.com> wrote:
> 
>> On Fri, Apr 14, 2017 at 11:07:43PM +0200, Damjan Marion wrote:
>> 
>> 
>> Sent from my iPhone
>> 
>>>> On 14 Apr 2017, at 18:29, Ernst, Eric <eric.er...@intel.com> wrote:
>>>> 
>>>>> On Fri, Apr 14, 2017 at 09:18:33AM -0700, Ernst, Eric wrote:
>>>>> On Fri, Apr 14, 2017 at 01:27:53PM +0200, Damjan Marion wrote:
>>>>> Eric,
>>>>> 
>>>>> VPP does not allocate hugepages directly. It just passes its arguments on to the 
>>>>> DPDK EAL, which then plays the usual DPDK hugepage cherry-picking game.
>>>>> By default VPP requests 256M on each socket.
>>>>> 
>>>>> The issue you are reporting is an old one and is tied directly to DPDK behavior.
>>>>> 
>>>>> Have you tried increasing vm.max_map_count and kernel.shmmax?
>>>>> If not, take a look at src/vpp/conf/80-vpp.conf.
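>>>>> 
>>>>> To see the values currently in effect, and to re-apply the file after editing
>>>>> it without a reboot, something like this should do (path as installed by the
>>>>> packages):
>>>>> 
>>>>>   sysctl vm.nr_hugepages vm.max_map_count kernel.shmmax
>>>>>   sysctl -p /etc/sysctl.d/80-vpp.conf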
>>>> 
>>>> Damjan, 
>>>> 
>>>> Thanks for the quick reply.  max_map_count and kernel.shmmax did need
>>>> updating, but I'm still seeing the same behavior.  What I ended up doing is
>>>> editing /etc/sysctl.d/80-vpp.conf directly to increase the huge pages
>>>> as follows:
>>>> vm.nr_hugepages=2048
>>>> vm.max_map_count=4096
>>>> kernel.shmmax=4294967296
>>>> 
>>>> After a reboot I'm still seeing the same behavior, though I can see that
>>>> 2048 hugepages are free.  Are there other areas/settings that DPDK will check?
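>>>> 
>>>> For reference, I'm checking the pools roughly like this (per-NUMA-node counts
>>>> included, since DPDK grabs pages per socket):
>>>> 
>>>>   grep -i huge /proc/meminfo
>>>>   cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages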
>>>> 
>>>> Through /etc/vpp/startup.conf, I have DPDK grabbing 1M for each socket:
>>>> socket-mem 1024,1024
>>> 
>>> Correction: 1G per socket.
>> 
>> What is your motivation for doing that?
>> 
> 
> I thought this was pretty standard for DPDK.  When testing with ovs-dpdk, this
> was the configuration I used, albeit only on a single socket.  I also used
> socket-mem 1024,0 with the same effect.

It will just allocate more memory which will never be used.
I suggest that you use the defaults unless you need more buffers, in which case you
also need to increase num-mbufs.
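
If you do end up needing more buffers, the relevant knobs live in the dpdk
section of /etc/vpp/startup.conf, roughly along these lines (values here are
only illustrative; check the defaults for your release):

dpdk {
  socket-mem 256,256
  num-mbufs 32768
}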


> 
>>> 
>>>> 
>>>> Thanks,
>>>> Eric
>>>> 
>>>> 
>>>> 
>>>>> 
>>>>> That typically helps…
>>>>> 
>>>>> Thanks,
>>>>> 
>>>>> Damjan
>>>>> 
>>>>> 
>>>>>> On 13 Apr 2017, at 22:53, Ernst, Eric <eric.er...@intel.com> wrote:
>>>>>> 
>>>>>> I’d like to better understand how hugepages are allocated at boot when
>>>>>> vpp is not started, as well as what happens when vpp is started (i.e.,
>>>>>> systemctl start vpp).
>>>>>> 
>>>>>> The reason I ask is that I’m running into an issue where changes to the
>>>>>> hugepage allocation cause VPP to fail.  While 1024 2MB pages is the
>>>>>> default, I find that as I run more vhost-user VMs I need to back them
>>>>>> with hugepages, and so need more pages.  When doing this and then
>>>>>> starting/restarting VPP, I am seeing failures.
>>>>>> 
>>>>>> Any tips?  Has anyone else seen this?
>>>>>> 
>>>>>> Thanks,
>>>>>> Eric
>>>>>> 
>>>>>> 
>>>>>> In more detail, 
>>>>>> 
>>>>>> Scenario #1:
>>>>>> 1. Assuming VPP doesn’t start by default (I had run systemctl
>>>>>> disable vpp on a prior boot)
>>>>>> 2. Boot system under test
>>>>>> 3. Verify via /proc/meminfo that 1024 huge pages have been
>>>>>> allocated and 1024 are free
>>>>>> 4. Start VPP (systemctl start vpp), see that the free pool goes down
>>>>>> to 768 as expected, and VPP runs without issue
>>>>>> 
>>>>>> Scenario #2:
>>>>>> 1. Assuming VPP doesn’t start by default (I had run systemctl
>>>>>> disable vpp on a prior boot)
>>>>>> 2. Boot system under test
>>>>>> 3. Adjust the number of hugepages from 1024 to 8192 (sysctl -w
>>>>>> vm.nr_hugepages=8192)
>>>>>> 4. Verify via /proc/meminfo that 8192 huge pages have been
>>>>>> allocated and are free
>>>>>> 5. Start VPP (systemctl start vpp), see that VPP fails to start
>>>>>> with the log shown below at [1]; in summary, an EAL failure to allocate
>>>>>> memory (a shell repro is sketched just below this list)
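>>>>>> 
>>>>>> In shell terms, the failing sequence boils down to roughly this:
>>>>>> 
>>>>>>   sysctl -w vm.nr_hugepages=8192
>>>>>>   grep -i huge /proc/meminfo    # 8192 allocated, all free
>>>>>>   systemctl start vpp           # fails with the EAL errors shown in [1]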
>>>>>> 
>>>>>> Scenario #3:
>>>>>> 1. Same as scenario #1 to start.  Then, after VPP is up and running:
>>>>>> 2. Adjust the number of huge pages from 1024 -> 8192, noting that the
>>>>>> number of pages moves to 8192, of which some are still being used by VPP
>>>>>> 3. You can add more vhost-user interfaces without issue, and VPP
>>>>>> and the VMs remain functional.
>>>>>> 4. Restart VPP (systemctl restart vpp)
>>>>>> 5. Note that VPP now fails to start, though there are still _many_
>>>>>> pages free.
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> [1] Snippet of Error log:
>>>>>> Apr 13 13:43:41 eernstworkstation systemd[1]: Starting vector packet 
>>>>>> processing engine...
>>>>>> Apr 13 13:43:41 eernstworkstation systemd[1]: Started vector packet 
>>>>>> processing engine.
>>>>>> Apr 13 13:43:41 eernstworkstation vpp[5024]: vlib_plugin_early_init:213: 
>>>>>> plugin path /usr/lib/vpp_plugins
>>>>>> Apr 13 13:43:41 eernstworkstation vpp[5024]: /usr/bin/vpp[5024]: 
>>>>>> vlib_pci_bind_to_uio: Skipping PCI device 0000:05:00.0 as host interface 
>>>>>> eth0 is up
>>>>>> Apr 13 13:43:41 eernstworkstation /usr/bin/vpp[5024]: 
>>>>>> vlib_pci_bind_to_uio: Skipping PCI device 0000:05:00.0 as host interface 
>>>>>> eth0 is up
>>>>>> Apr 13 13:43:41 eernstworkstation vpp[5024]: /usr/bin/vpp[5024]: 
>>>>>> vlib_pci_bind_to_uio: Skipping PCI device 0000:05:00.1 as host interface 
>>>>>> eth1 is up
>>>>>> Apr 13 13:43:41 eernstworkstation /usr/bin/vpp[5024]: 
>>>>>> vlib_pci_bind_to_uio: Skipping PCI device 0000:05:00.1 as host interface 
>>>>>> eth1 is up
>>>>>> Apr 13 13:43:41 eernstworkstation vpp[5024]: EAL: Detected 32 lcore(s)
>>>>>> Apr 13 13:43:41 eernstworkstation vpp[5024]: EAL: No free hugepages 
>>>>>> reported in hugepages-1048576kB
>>>>>> Apr 13 13:43:41 eernstworkstation vpp[5024]: EAL: Probing VFIO support...
>>>>>> Apr 13 13:43:41 eernstworkstation vnet[5024]: EAL: Probing VFIO 
>>>>>> support...
>>>>>> Apr 13 13:43:41 eernstworkstation sudo[5038]:   eernst : TTY=pts/1 ; 
>>>>>> PWD=/home/eernst ; USER=root ; COMMAND=/bin/journalctl
>>>>>> Apr 13 13:43:41 eernstworkstation sudo[5038]: pam_unix(sudo:session): 
>>>>>> session opened for user root by eernst(uid=0)
>>>>>> Apr 13 13:43:42 eernstworkstation vpp[5024]: EAL: Cannot get a virtual 
>>>>>> area: Cannot allocate memory
>>>>>> Apr 13 13:43:42 eernstworkstation vpp[5024]: EAL: Failed to remap 2 MB 
>>>>>> pages
>>>>>> Apr 13 13:43:42 eernstworkstation vpp[5024]: PANIC in rte_eal_init():
>>>>>> Apr 13 13:43:42 eernstworkstation vpp[5024]: Cannot init memory
>>>>>> Apr 13 13:43:42 eernstworkstation vnet[5024]: EAL: Cannot get a virtual 
>>>>>> area: Cannot allocate memory
>>>>>> Apr 13 13:43:42 eernstworkstation vnet[5024]: EAL: Failed to remap 2 MB 
>>>>>> pages
>>>>>> Apr 13 13:43:42 eernstworkstation vnet[5024]: PANIC in rte_eal_init():
>>>>>> Apr 13 13:43:42 eernstworkstation vnet[5024]: Cannot init memory
>>>>>> Apr 13 13:43:43 eernstworkstation systemd[1]: vpp.service: Main process 
>>>>>> exited, code=dumped, status=6/ABRT
>>>>>> Apr 13 13:43:43 eernstworkstation systemd[1]: vpp.service: Unit entered 
>>>>>> failed state.
>>>>>> Apr 13 13:43:43 eernstworkstation systemd[1]: vpp.service: Failed with 
>>>>>> result 'core-dump'.
>>>>>> Apr 13 13:43:43 eernstworkstation systemd[1]: vpp.service: Service 
>>>>>> hold-off time over, scheduling restart.
>>>>>> Apr 13 13:43:43 eernstworkstation systemd[1]: Stopped vector packet 
>>>>>> processing engine.
>>>>>> Apr 13 13:43:43 eernstworkstation systemd[1]: vpp.service: Start request 
>>>>>> repeated too quickly.
>>>>>> Apr 13 13:43:43 eernstworkstation systemd[1]: Failed to start vector 
>>>>>> packet processing engine.
>>>>>> 
_______________________________________________
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
