Re: [vpp-dev] Is VppCom suitable for this scenario
Hi Satya, Glad it helped! You may want to use svm queues or message queues, similarly to how the session queue node does. For instance, see how messages are dequeued in session_queue_node_fn. Regards, Florin

> On Jan 6, 2020, at 10:45 PM, Satya Murthy wrote:
>
> Hi Florin,
>
> Thank you very much for the quick inputs. I have gone through your YouTube video from KubeCon and it cleared a lot of my doubts. You presented it in a very clear manner.
>
> As you rightly pointed out, VppCom would be an overhead for our use case. All we need is shared-memory communication to send and receive bigger messages. Memif was not a candidate for this, since it imposes a message size restriction of up to 64K.
>
> In this case, what framework can we use to send/receive messages from VPP workers across shared memory? Can we use SVM queues directly to get the message into our custom VPP plugin and process it (in the case of VPP receiving a message from a control-plane app)?
>
> Is there any example code that already does this? If so, can you please point us to it?
> --
> Thanks & Regards,
> Murthy
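[Editor's sketch] For readers looking for a starting point, below is a minimal sketch of a plugin input node draining such a queue, loosely modeled on the pattern used by session_queue_node_fn. Everything prefixed my_ (my_plugin_main, my_msg_t, my_mq_input_fn) is a hypothetical name, queue creation/attachment and node registration are omitted, and error handling is elided; treat it as a sketch under those assumptions, not the session-node implementation itself.

#include <vlib/vlib.h>
#include <svm/message_queue.h>

/* Hypothetical message layout and plugin state; the real queue would be
 * allocated in shared memory and attached by the control-plane peer. */
typedef struct { u32 opcode; u8 data[0]; } my_msg_t;
typedef struct { svm_msg_q_t *mq; } my_plugin_main_t;
static my_plugin_main_t my_plugin_main;

static uword
my_mq_input_fn (vlib_main_t *vm, vlib_node_runtime_t *node, vlib_frame_t *f)
{
  svm_msg_q_t *mq = my_plugin_main.mq;
  svm_msg_q_msg_t msg;
  my_msg_t *e;
  int n_handled = 0;

  /* Drain a bounded batch per dispatch so other nodes are not starved */
  while (!svm_msg_q_is_empty (mq) && n_handled < 32)
    {
      if (svm_msg_q_sub (mq, &msg, SVM_Q_NOWAIT, 0))
	break;				     /* nothing left to dequeue */
      e = svm_msg_q_msg_data (mq, &msg);     /* payload is in shared memory */
      /* ... process the (potentially large) message here ... */
      svm_msg_q_free_msg (mq, &msg);	     /* return the ring element */
      n_handled++;
    }
  return n_handled;
}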
[vpp-dev] FEATURE.yaml
Hi, As people add feature descriptions for their features (FEATURE.yaml), a few errors have crept in. The JSON schema / YAML definition is quite fickle. I added the YAML validator to the checkstyle target, but unfortunately that hasn't made it into the verify-checkstyle jenkins job (yet). Please ensure you do a "make checkstyle" before committing. Cheers, Ole
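[Editor's sketch] For reference, a minimal FEATURE.yaml that should satisfy the validator might look like the following. The field names mirror existing FEATURE.yaml files in the tree, but the values here are illustrative; run "make checkstyle" (or "make checkfeature", see the follow-up below) against the schema for the authoritative rules.

name: Example Feature
maintainer: Jane Doe <jane.doe@example.com>
features:
  - First sub-feature
  - Second sub-feature
description: "One-line summary of what the feature does"
state: experimental
properties: [API, CLI, MULTITHREAD]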
Re: [vpp-dev] vpp assert error when nginx starts with ldp
Hi, Not entirely sure what’s happening there. I just tried nginx with latest master and binding 4 workers seems to work. In your case it looks as if a listening session associated with an app listener was freed. Not sure how that could happen. Anything special about your nginx or vcl configuration? Regards, Florin

> On Jan 6, 2020, at 10:38 PM, jiangxiaom...@outlook.com wrote:
>
> VPP crashes when nginx is started with ldp. The vpp code is master 78565f38e8436dae9cd3a891b5e5d929209c87f9. The crash stack is below. Does anyone have a solution?
>
> DBGvpp# 0: vl_api_memclnt_delete_t_handler:277: Stale clnt delete index 16777215 old epoch 255 cur epoch 0
> 0: /home/dev/code/net-base/build/vpp/src/vnet/session/session.h:320 (session_get_from_handle) assertion `! pool_is_free (smm->wrk[thread_index].sessions, _e)' fails
>
> Program received signal SIGABRT, Aborted.
> 0x74a7 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:55
> 55 return INLINE_SYSCALL (tgkill, 3, pid, selftid, sig);
> (gdb) bt
> #0 0x74a7 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:55
> #1 0x74a34a28 in __GI_abort () at abort.c:90
> #2 0x00407458 in os_panic () at /home/dev/code/net-base/build/vpp/src/vpp/vnet/main.c:355
> #3 0x7587ad1f in debugger () at /home/dev/code/net-base/build/vpp/src/vppinfra/error.c:84
> #4 0x7587b0ee in _clib_error (how_to_die=2, function_name=0x0, line_number=0, fmt=0x7772b0c8 "%s:%d (%s) assertion `%s' fails") at /home/dev/code/net-base/build/vpp/src/vppinfra/error.c:143
> #5 0x773da25f in session_get_from_handle (handle=2) at /home/dev/code/net-base/build/vpp/src/vnet/session/session.h:320
> #6 0x773da330 in listen_session_get_from_handle (handle=2) at /home/dev/code/net-base/build/vpp/src/vnet/session/session.h:548
> #7 0x773dac6b in app_listener_lookup (app=0x7fffd72f2188, sep_ext=0x7fffdc84fc80) at /home/dev/code/net-base/build/vpp/src/vnet/session/application.c:122
> #8 0x773de10d in vnet_listen (a=0x7fffdc84fc80) at /home/dev/code/net-base/build/vpp/src/vnet/session/application.c:979
> #9 0x773c33a9 in session_mq_listen_handler (data=0x13007fb89) at /home/dev/code/net-base/build/vpp/src/vnet/session/session_node.c:62
> #10 0x77bb4f8a in vl_api_rpc_call_t_handler (mp=0x13007fb70) at /home/dev/code/net-base/build/vpp/src/vlibmemory/vlib_api.c:519
> #11 0x77bc8dfc in vl_msg_api_handler_with_vm_node (am=0x77dd9e40, vlib_rp=0x130021000, the_msg=0x13007fb70, vm=0x766c0640, node=0x7fffdc847000, is_private=0 '\000') at /home/dev/code/net-base/build/vpp/src/vlibapi/api_shared.c:603
> #12 0x77b9815c in vl_mem_api_handle_rpc (vm=0x766c0640, node=0x7fffdc847000) at /home/dev/code/net-base/build/vpp/src/vlibmemory/memory_api.c:748
> #13 0x77bb3e05 in vl_api_clnt_process (vm=0x766c0640, node=0x7fffdc847000, f=0x0) at /home/dev/code/net-base/build/vpp/src/vlibmemory/vlib_api.c:326
> #14 0x7641f1f5 in vlib_process_bootstrap (_a=140736887348176) at /home/dev/code/net-base/build/vpp/src/vlib/main.c:1475
> #15 0x7589aef4 in clib_calljmp () at /home/dev/code/net-base/build/vpp/src/vppinfra/longjmp.S:123
> #16 0x7fffdc2d5ba0 in ?? ()
> #17 0x7641f2fd in vlib_process_startup (vm=0x7641fca0, p=0x7fffdc2d5ca0, f=0x) at /home/dev/code/net-base/build/vpp/src/vlib/main.c:1497
> Backtrace stopped: previous frame inner to this frame (corrupt stack?)
>
> (gdb) up 5
> #5 0x773da25f in session_get_from_handle (handle=2) at /home/dev/code/net-base/build/vpp/src/vnet/session/session.h:320
> 320 return pool_elt_at_index (smm->wrk[thread_index].sessions, session_index);
> (gdb) print thread_index
> $1 = 0
> (gdb) info thread
> Id Target Id Frame
> 3 Thread 0x7fffb4e51700 (LWP 101019) "vpp_wk_0" 0x764188e6 in vlib_worker_thread_barrier_check () at /home/dev/code/net-base/build/vpp/src/vlib/threads.h:425
> 2 Thread 0x7fffb5652700 (LWP 101018) "eal-intr-thread" 0x74afbe63 in epoll_wait () at ../sysdeps/unix/syscall-template.S:81
> * 1 Thread 0x77fd87c0 (LWP 101001) "vpp_main" 0x74a7 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:55
> (gdb) print session_index
> $2 = 2
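[Editor's sketch] For anyone trying to reproduce this or rule out configuration issues, a typical LDP setup for nginx looks roughly like the following; the fifo sizes and file paths are illustrative assumptions, not recommended values.

# /etc/vpp/vcl.conf (illustrative)
vcl {
  rx-fifo-size 4000000
  tx-fifo-size 4000000
  app-scope-global
  api-socket-name /run/vpp/api.sock
}

# launch nginx through the LDP shim, pointing VCL at that config
LD_PRELOAD=/path/to/libvcl_ldpreload.so VCL_CONFIG=/etc/vpp/vcl.conf nginx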
[vpp-dev] Support for VPP in RedHat Enterprise Linux
Hi, I am trying to build and run VPP (stable/1908) on a RHEL platform.

[root@trialrh75 vpp]# hostnamectl
   Static hostname: trialrh75.localdomain
         Icon name: computer-vm
           Chassis: vm
        Machine ID: d9c32d7446fd4b608142b6f7414fad72
           Boot ID: 0d94cec548b4455ebcf3adf8cfd5c87b
    Virtualization: kvm
  Operating System: Red Hat Enterprise Linux
       CPE OS Name: cpe:/o:redhat:enterprise_linux:7.7:GA:server
            Kernel: Linux 3.10.0-862.el7.x86_64
      Architecture: x86-64

[root@trialrh75 vpp]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.7 (Maipo)

Could anyone confirm whether VPP is supported on this platform? Regards Muthukumar S
Re: [vpp-dev] Support for VPP in RedHat Enterprise Linux
Hi,

> I am trying to build and run VPP (stable/1908) on a RHEL platform. [...]
> Could anyone confirm whether VPP is supported on this platform?

We publish RPMs for CentOS/7 here: https://packagecloud.io/fdio/release
They should work for RHEL/7.

ben
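[Editor's sketch] In practice, setting that up looks something like this (run as root; the package list is illustrative):

# add the fd.io release repo via packagecloud's setup script
curl -s https://packagecloud.io/install/repositories/fdio/release/script.rpm.sh | bash
# then install the core packages
yum install -y vpp vpp-lib vpp-plugins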
[vpp-dev] VPP 20.01 api freeze is tomorrow
Hello all, A gentle reminder - tomorrow is the 8th of January, which means API freeze: no more commits that change any .api files until we pull the stable/2001 branch. If you have any work that needs special treatment, please get in touch. --a (your friendly 20.01 release manager)
[vpp-dev] Coverity run FAILED as of 2020-01-07 14:00:25 UTC
Coverity run failed today.

Current number of outstanding issues: 2
Newly detected: 0
Eliminated: 0

More details can be found at https://scan.coverity.com/projects/fd-io-vpp/view_defects
[vpp-dev] FDIO Maintenance - 2020-02-05 1900 UTC to 2400 UTC
*Please let us know as soon as possible if this maintenance conflicts with your project.*

*What:*
* Jenkins
  o OS and security updates
  o Upgrade to 2.204.1
  o Plugin updates
* Nexus
  o OS updates
* Jira
  o OS updates
* Gerrit
  o OS updates
* Sonar
  o OS updates
* OpenGrok
  o OS updates

*When:* 2020-02-05 1900 UTC to 2400 UTC

*Impact:* Maintenance will require a reboot of each FD.io system. Jenkins will be placed in shutdown mode at 1800 UTC. Please let us know if specific jobs cannot be aborted. The following systems will be unavailable during the maintenance window:
* Jenkins sandbox
* Jenkins production
* Nexus
* Jira
* Gerrit
* Sonar
* OpenGrok
[vpp-dev] Compilation failure in master branch after enabling ENABLE_SANITIZE_ADDR
Hello Everyone, While working with the address sanitizer I am facing a compilation error on the vpp master branch. Can anybody guide me, please?

vpp/src/vppinfra/dlmalloc.c:8:
vpp/src/vppinfra/dlmalloc.c: In function ‘mspace_get_aligned’:
vpp/src/vppinfra/clib.h:226:1: error: inlining failed in call to always_inline ‘max_pow2’: function attribute mismatch
 max_pow2 (uword x)
 ^~~~
vpp/src/vppinfra/dlmalloc.c:4256:9: note: called from here
   align = max_pow2 (align);

Thanks, Chetan Bhasin
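[Editor's sketch] A guess at what is going on, offered as an assumption rather than a confirmed diagnosis: under ENABLE_SANITIZE_ADDR some callers in dlmalloc.c are excluded from instrumentation, and older GCCs refuse to inline a sanitized always_inline callee (here max_pow2) into a caller with different sanitizer attributes. A hypothetical standalone repro of that compiler behavior:

/* Hypothetical repro -- not code from the VPP tree.
 * Build with: gcc -O2 -fsanitize=address repro.c
 * Affected GCC versions (e.g. 4.8) report:
 *   error: inlining failed in call to always_inline 'callee':
 *   function attribute mismatch */
static inline int __attribute__ ((always_inline))
callee (int x)
{
  return x + 1;
}

__attribute__ ((no_sanitize_address)) int
caller (int x)
{
  return callee (x);	/* inlining fails here on affected compilers */
}

int
main (void)
{
  return caller (41) != 42;
}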
Re: [vpp-dev] Compilation failure in master branch after enabling ENABLE_SANITIZE_ADDR
> While working with the address sanitizer I am facing a compilation error on the vpp master branch.

Can you share which sha1 you are on, which distro, and which compiler and compiler version?

Ben
Re: [vpp-dev] FEATURE.yaml
Folks, This issue has now been resolved by [0], and the checkstyle & checkfeature Makefile targets have been verified to execute and reject invalid FEATURE.yaml files [1].

Thanks,
-daw-

[0] https://gerrit.fd.io/r/c/ci-management/+/24226
[1] https://gerrit.fd.io/r/c/vpp/+/24227

On 1/7/2020 3:33 AM, Ole Troan wrote:
> Hi,
> As people add feature descriptions for their features (FEATURE.yaml), a few errors have crept in. The JSON schema / YAML definition is quite fickle.
> I added the YAML validator to the checkstyle target, but unfortunately that hasn't made it into the verify-checkstyle jenkins job (yet). Please ensure you do a "make checkstyle" before committing.
> Cheers, Ole
Re: [vpp-dev] FEATURE.yaml
Thanks Dave! Cheers Ole

> On 7 Jan 2020, at 22:14, Dave Wallace wrote:
>
> Folks,
> This issue has now been resolved by [0], and the checkstyle & checkfeature Makefile targets have been verified to execute and reject invalid FEATURE.yaml files [1].
> Thanks,
> -daw-
> [0] https://gerrit.fd.io/r/c/ci-management/+/24226
> [1] https://gerrit.fd.io/r/c/vpp/+/24227
Re: [vpp-dev] "vppctl show int" no NIC (just local0) #vpp #vnet
Hi steven: Thank you for your reply! I followed your advice (3) and made some attempts. I created three startup config files for vpp: the first is named "startup.conf.smp", the second is named "startup.conf" (my config file), and the third is named "startup.conf.ok"; it just deletes "uio-driver vfio-pci" from "startup.conf".

----- startup.conf.smp -----
unix { interactive }

----- startup.conf -----
unix {
  nodaemon
  log /var/log/vpp/vpp.log
  full-coredump
  cli-listen /run/vpp/cli.sock
  gid vpp
}
api-trace { on }
api-segment { gid vpp }
socksvr { default }
cpu {
  main-core 30
  corelist-workers 26,28
  workers 2
}
dpdk {
  dev default {
    num-rx-queues 1
    num-tx-queues 2
  }
  dev 0000:3b:00.0
  dev 0000:3b:00.1
  #dev 0000:3b:00.2
  #dev 0000:3b:00.3
  uio-driver vfio-pci
}

----- startup.conf.ok -----
unix {
  nodaemon
  log /var/log/vpp/vpp.log
  full-coredump
  cli-listen /run/vpp/cli.sock
  gid vpp
}
api-trace { on }
api-segment { gid vpp }
socksvr { default }
cpu {
  main-core 30
  corelist-workers 26,28
  workers 2
}
dpdk {
  dev default {
    num-rx-queues 1
    num-tx-queues 2
  }
  dev 0000:3b:00.0
  dev 0000:3b:00.1
  #dev 0000:3b:00.2
  #dev 0000:3b:00.3
  #uio-driver vfio-pci (just modify here on the basis of startup.conf--my config)
  # @steven, do you know why this option makes the difference?@
}

I'm not familiar with testpmd, but I will take some time to find out how it works.

(1) When I use "startup.conf.smp" and follow the operation sequence below after CentOS startup, it seems ok:

[root@localhost ~]# modprobe vfio-pci
[root@localhost ~]# lsmod | grep vfio
vfio_pci         41412  2
vfio_iommu_type1 22440  0
vfio             32657  8  vfio_iommu_type1,vfio_pci
irqbypass        13503  4  kvm,vfio_pci
[root@localhost ~]# /usr/bin/numactl --cpubind=0 --membind=0 /usr/bin/vpp -c /etc/vpp/startup.conf.smp
...
vpp# show pci
Address       Sock VID:PID   Link Speed  Driver    Product Name            Vital Product Data
...
0000:3b:00.0  0    8086:1572 8.0 GT/s x8 vfio-pci  XL710 40GbE Controller  RV: 0x 86
0000:3b:00.1  0    8086:1572 8.0 GT/s x8 vfio-pci  XL710 40GbE Controller  RV: 0x 86
0000:3b:00.2  0    8086:1572 8.0 GT/s x8 vfio-pci  XL710 40GbE Controller  RV: 0x 86
0000:3b:00.3  0    8086:1572 8.0 GT/s x8 vfio-pci  XL710 40GbE Controller  RV: 0x 86
vpp# show interface
Name                      Idx  State  MTU (L3/IP4/IP6/MPLS)  Counter  Count
TenGigabitEthernet3b/0/0  1    down   9000/0/0/0
TenGigabitEthernet3b/0/1  2    down   9000/0/0/0
TenGigabitEthernet3b/0/2  3    down   9000/0/0/0
TenGigabitEthernet3b/0/3  4    down   9000/0/0/0
local0                    0    down   0/0/0/0
vpp# show log
2020/01/08 10:35:38:001 warn dpdk Unsupported PCI device 0x14e4:0x165f found at PCI address 0000:18:00.0
2020/01/08 10:35:38:017 warn dpdk Unsupported PCI device 0x14e4:0x165f found at PCI address 0000:18:00.1
2020/01/08 10:35:38:032 warn dpdk Unsupported PCI device 0x14e4:0x165f found at PCI address 0000:19:00.0
2020/01/08 10:35:38:076 warn dpdk Unsupported PCI device 0x14e4:0x165f found at PCI address 0000:19:00.1
2020/01/08 10:35:39:447 warn dpdk EAL init args: -c 2 -n 4 --in-memory --file-prefix vpp --master-lcore 1
2020/01/08 10:35:40:682 notice dpdk EAL: Detected 32 lcore(s)
2020/01/08 10:35:40:682 notice dpdk EAL: Detected 2 NUMA nodes
2020/01/08 10:35:40:682 notice dpdk EAL: Some devices want iova as va but pa will be used because.. EAL: vfio-noiommu mode configured
2020/01/08 10:35:40:682 notice dpdk EAL: No available hugepages reported in hugepages-1048576kB
2020/01/08 10:35:40:682 notice dpdk EAL: No free hugepages reported in hugepages-1048576kB
2020/01/08 10:35:40:682 notice dpdk EAL: No free hugepages reported in hugepages-1048576kB
2020/01/08 10:35:40:682 notice dpdk EAL: No available hugepages reported in hugepages-1048576kB
2020/01/08 10:35:40:682 notice dpdk EAL: Probing VFIO support...
2020/01/08 10:35:40:682 notice dpdk EAL: VFIO support initialized
2020/01/08 10:35:40:682 notice dpdk EAL: WARNING! Base virtual address hint (0xa80001000 != 0x7f4f4000) not respected!
2020/01/08 10:35:40:682 notice dpdk EAL: This may cause issues with mapping memory into secondary processes

@Yichen, when I used "dmesg | grep Virtualization", nothing was returned:
[root@localhost ~]# dmesg | grep Virtualization
[root@localhost ~]#

I don't run performance tests or other related tests now, so I don't know if there are any other problems.

(2) When I us
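[Editor's sketch] One detail worth noting from that log: EAL reports "vfio-noiommu mode configured", which is what vfio-pci falls back to in a VM without a (virtual) IOMMU. As an assumption about why "uio-driver vfio-pci" behaves differently across runs (not a confirmed diagnosis), it may be worth checking that no-IOMMU mode is consistently enabled before VPP starts:

# check whether the guest kernel sees an IOMMU at all
dmesg | grep -e DMAR -e IOMMU

# if it doesn't, vfio-pci can only work in (unsafe) no-IOMMU mode
modprobe vfio enable_unsafe_noiommu_mode=1
modprobe vfio-pci
cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode   # expect: Y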
Re: [vpp-dev] Compilation failure in master branch after enabling ENABLE_SANITIZE_ADDR
Hi Benoit,

Please find the details below:

1) As per git log, the last check-in is:
commit 22e108d9a94a9ccc0c31c2479740c57cf2a09126
Author: Ole Troan
Date: Tue Jan 7 09:30:05 2020 +0100
Change-Id: I8bd6bb95135dc280565f357aa5850292f66979a1

2) gcc version 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC)

3) -bash-4.2$ uname -a
Linux bfs-dl360g10-14-vm25 3.10.0-514.16.1.el7.x86_64 #1 SMP Fri Mar 10 13:12:32 EST 2017 x86_64 x86_64 x86_64 GNU/Linux

Thanks,
Chetan Bhasin

On Tue, Jan 7, 2020 at 10:22 PM Benoit Ganne (bganne) wrote:
> > While working with the address sanitizer I am facing a compilation error on the vpp master branch.
>
> Can you share which sha1 you are on, which distro, and which compiler and compiler version?
>
> Ben
Re: [vpp-dev] "vppctl show int" no NIC (just local0) #vpp #vnet
So you now know which command in the dpdk section dpdk doesn’t like. Try adding "log-level debug" in the dpdk section of startup.conf to see if you can find more helpful messages from dpdk in "vppctl show log" about why it fails to probe the NIC.

Steven

From: <vpp-dev@lists.fd.io> on behalf of Gencli Liu <18600640...@163.com>
Date: Tuesday, January 7, 2020 at 7:42 PM
To: "vpp-dev@lists.fd.io"
Subject: Re: [vpp-dev] "vppctl show int" no NIC (just local0) #vpp #vnet
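[Editor's sketch] Concretely, that suggestion amounts to something like the following in the failing startup.conf (the dev lines are the ones from the earlier message):

dpdk {
  dev 0000:3b:00.0
  dev 0000:3b:00.1
  uio-driver vfio-pci
  log-level debug
}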
Re: [vpp-dev] Compilation failure in master branch after enabling ENABLE_SANITIZE_ADDR
Additionally:
1) make build-release (the issue occurs during compilation)
2) make build (compilation is successful)

On Wed, Jan 8, 2020 at 10:19 AM chetan bhasin wrote (details as above: master commit 22e108d9a94a9ccc0c31c2479740c57cf2a09126, gcc 4.8.5, kernel 3.10.0-514.16.1.el7.x86_64).
[vpp-dev] install-dep fails in RedHat (RHEL)
Hi, I have done the following steps to build and install VPP on RHEL:
* git clone https://gerrit.fd.io/r/vpp -b stable/1908
* cd vpp/
* make install-dep

This step fails with errors for the packages:
* ninja-build
* mbedtls
* cmake3
These packages are not available for RHEL/7. I manually installed each package from its src.rpm; even after this, the build fails with an unpredictable error:

--- installing rdma-core 25.0 - log: /root/vpp/build-root/build-vpp-native/external/rdma-core.install.log
mkdir -p /root/vpp/build-root/install-vpp-native/external
tar -C /root/vpp/build-root/build-vpp-native/external/build-rdma-core --xform='s|/statics/|/|' -hc include/infiniband/verbs.h include/infiniband/verbs_api.h include/infiniband/ib_user_ioctl_verbs.h include/rdma/ib_user_verbs.h lib/statics/libibverbs.a util/librdma_util.a lib/statics/libmlx5.a | tar -C /root/vpp/build-root/install-vpp-native/external -xv > /root/vpp/build-root/build-vpp-native/external/rdma-core.install.log
tar: include/infiniband/verbs.h: Cannot stat: No such file or directory
tar: include/infiniband/verbs_api.h: Cannot stat: No such file or directory
tar: include/infiniband/ib_user_ioctl_verbs.h: Cannot stat: No such file or directory
tar: include/rdma/ib_user_verbs.h: Cannot stat: No such file or directory
tar: lib/statics/libibverbs.a: Cannot stat: No such file or directory
tar: util/librdma_util.a: Cannot stat: No such file or directory
tar: lib/statics/libmlx5.a: Cannot stat: No such file or directory
tar: Exiting with failure status due to previous errors
mv -v /root/vpp/build-root/install-vpp-native/external/util/librdma_util.a /root/vpp/build-root/install-vpp-native/external/lib >> /root/vpp/build-root/build-vpp-native/external/rdma-core.install.log
mv: cannot stat ‘/root/vpp/build-root/install-vpp-native/external/util/librdma_util.a’: No such file or directory
make[3]: *** [/root/vpp/build-root/build-vpp-native/external/.rdma-core.install.ok] Error 1
make[3]: Leaving directory `/root/vpp/build/external'
make[2]: *** [ebuild-install] Error 2
make[2]: Leaving directory `/root/vpp/build/external'
make[1]: *** [external-install] Error 2
make[1]: Leaving directory `/root/vpp/build-root'
make: *** [build-release] Error 2

Has anyone experienced similar issues? Please assist me in resolving this.

Thanks,
Vijayalakshmi
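[Editor's sketch] For what it's worth, on EL7 the three missing packages normally come from EPEL rather than the base repositories, so enabling EPEL before "make install-dep" may get further; package availability here is an assumption about a stock RHEL 7.7 setup:

# enable EPEL (carries cmake3, ninja-build and mbedtls for EL7)
yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum install -y cmake3 ninja-build mbedtls-devel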