Re: [vpp-dev] Build failing on AArch64
Sirshak,

Can you try adding:

    DEPENDS api_headers

inside add_vpp_executable(vpp_api_test ENABLE_EXPORTS ...) in
src/vat/CMakeLists.txt? (A sketch of the change is included at the end of this
message.)

Cheers,
Ole

> On 28 Nov 2018, at 06:32, Sirshak Das wrote:
>
> It takes 3 iterations to get to a proper build:
>
> First iteration:
>
> FAILED: vat/CMakeFiles/vpp_api_test.dir/types.c.o
> ccache /usr/lib/ccache/cc -DHAVE_MEMFD_CREATE -Dvpp_api_test_EXPORTS
> -I/home/sirdas/code/commitc/vpp/src -I. -Iinclude -march=armv8-a+crc -g -O2
> -DFORTIFY_SOURCE=2 -fstack-protector -fPIC -Werror
> -Wno-address-of-packed-member -pthread -MD -MT
> vat/CMakeFiles/vpp_api_test.dir/types.c.o -MF
> vat/CMakeFiles/vpp_api_test.dir/types.c.o.d -o
> vat/CMakeFiles/vpp_api_test.dir/types.c.o -c
> /home/sirdas/code/commitc/vpp/src/vat/types.c
> In file included from
> /home/sirdas/code/commitc/vpp/src/vpp/api/vpe_all_api_h.h:25,
>                  from /home/sirdas/code/commitc/vpp/src/vpp/api/types.h:20,
>                  from /home/sirdas/code/commitc/vpp/src/vat/types.c:19:
> /home/sirdas/code/commitc/vpp/src/vnet/vnet_all_api_h.h:32:10: fatal error:
> vnet/bonding/bond.api.h: No such file or directory
>  #include <vnet/bonding/bond.api.h>
>           ^
>
> Second iteration:
>
> FAILED: vat/CMakeFiles/vpp_api_test.dir/types.c.o
> ccache /usr/lib/ccache/cc -DHAVE_MEMFD_CREATE -Dvpp_api_test_EXPORTS
> -I/home/sirdas/code/commitc/vpp/src -I. -Iinclude -march=armv8-a+crc -g -O2
> -DFORTIFY_SOURCE=2 -fstack-protector -fPIC -Werror
> -Wno-address-of-packed-member -pthread -MD -MT
> vat/CMakeFiles/vpp_api_test.dir/types.c.o -MF
> vat/CMakeFiles/vpp_api_test.dir/types.c.o.d -o
> vat/CMakeFiles/vpp_api_test.dir/types.c.o -c
> /home/sirdas/code/commitc/vpp/src/vat/types.c
> In file included from /home/sirdas/code/commitc/vpp/src/vpp/api/types.h:20,
>                  from /home/sirdas/code/commitc/vpp/src/vat/types.c:19:
> /home/sirdas/code/commitc/vpp/src/vpp/api/vpe_all_api_h.h:32:10: fatal error:
> vpp/stats/stats.api.h: No such file or directory
>  #include <vpp/stats/stats.api.h>
>           ^~~
> compilation terminated.
> [142/1163] Building C object vat/CMakeFiles/vpp_api_test.dir/api_format.c.o^C
> ninja: build stopped: interrupted by user.
> Makefile:691: recipe for target 'vpp-build' failed
> make[1]: *** [vpp-build] Interrupt
> Makefile:366: recipe for target 'build-release' failed
> make: *** [build-release] Interrupt
>
> Had to kill it as it was stuck.
>
> Third iteration:
>
> Finally it got built properly.
>
> This is a manageable error for dev purposes but will give a lot of false
> negatives for CI.
> Anyone familiar with VAT, please help.
>
> Thank you
> Sirshak Das
>
> Ole Troan writes:
>
>> Juraj,
>>
>> Seems like a dependency problem. VAT depends on a generated file that
>> hasn't been generated yet.
>>
>> Ole
>>
>>> On 27 Nov 2018, at 18:04, Juraj Linkeš wrote:
>>>
>>> Hi Ole,
>>>
>>> I'm hitting the same issue.
>>>
>>> Running the build with V=2 doesn't actually produce more output,
>>> which means my logs are the same as Sirshak's. But in any case I attached
>>> the output from a run with V=2.
>>>
>>> I can provide other info if there's more you need - or you can try
>>> accessing one of our ThunderX's in the FD.io lab if you have access.
>>>
>>> Thanks,
>>> Juraj
>>>
>>> From: Ole Troan [mailto:otr...@employees.org]
>>> Sent: Tuesday, November 27, 2018 5:43 PM
>>> To: Juraj Linkeš
>>> Cc: Sirshak Das; vpp-dev@lists.fd.io; Honnappa Nagarahalli;
>>> Lijian Zhang (Arm Technology China)
>>> Subject: Re: [vpp-dev] Build failing on AArch64
>>>
>>> Juraj,
>>>
>>> Without a make log this is just a guessing game.
>>>
>>> Cheers,
>>> Ole
>>>
>>> On 27 Nov 2018, at 17:34, Juraj Linkeš wrote:
>>>
>>> Hi Sirshak and Ole,
>>>
>>> I'm hitting the same issue. The build fails on a clean repository, but the
>>> subsequent build works fine, which is fine for local builds, but it still
>>> needs to be fixed.
>>>
>>> Running the build with V=2 doesn't actually produce more output. There is
>>> one more bit of information I can provide - this behavior is present on
>>> Ubuntu 18.04 (4.15.0-38-generic), but builds on Ubuntu 16.04
>>> (4.4.0-138-generic) work right away, which explains why CI didn't catch it.
>>>
>>> This is the patch that introduced the issue:
>>> https://gerrit.fd.io/r/#/c/16109/
>>>
>>> Juraj
>>>
>>> From: Ole Troan [mailto:otr...@employees.org]
>>> Sent: Monday, November 26, 2018 9:26 AM
>>> To: Sirshak Das
>>> Cc: vpp-dev@lists.fd.io; Honnappa Nagarahalli; Juraj Linkeš;
>>> Lijian Zhang (Arm Technology China)
>>> Subject: Re: [vpp-dev] Build failing on AArch64
>>>
>>> Sirshak,
>>>
>>> Can you touch one of the .api files and rebuild with V=2 and show the
>>> output of that?
>>> It might be that vppapigen fails for some reason (or try to run it
>>> manually and see).
>>>
>>> Ole
>>>
>>> On 26 Nov 2018, at 06:48, Sirshak Das wrote: Hi all, I am currently f
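For reference, and assuming the rest of the add_vpp_executable() call is left
exactly as it already is in the tree (the existing SOURCES and LINK_LIBRARIES
arguments are elided here), the suggested edit to src/vat/CMakeLists.txt would
look roughly like this:

    add_vpp_executable(vpp_api_test ENABLE_EXPORTS
      SOURCES
        # ... existing vpp_api_test sources, unchanged ...
      DEPENDS api_headers   # new: generate the .api.h files before compiling
    )

The DEPENDS line is the only addition; it makes ninja generate the API headers
before it tries to compile the VAT sources that include them.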
Re: [vpp-dev] vppcom: why __vcl_worker_index is thread local? #vpp
Ok, thank you for the clarification. So, as far as I understand, host-stack
preloading is not intended to work with forkable (because of the LDP
destructor) and/or threaded (because of the mentioned index) applications.

BR,
Manuel
Re: [vpp-dev] SIGSEGV after calling vlib_get_frame_to_node
None of the routine names in the backtrace exist in master/latest (it's your
code), so it will be challenging for the community to help you.

See if you can repro the problem with a TAG=vpp_debug image (aka "make build",
not "make build-release"). If you're lucky, one of the numerous ASSERTs will
catch the problem early.

vlib_get_frame_to_node(...) is not new code, it's used all over the place, and
it needs "help" to fail as shown below.

D.

From: vpp-dev@lists.fd.io On Behalf Of Hugo Garza
Sent: Tuesday, November 27, 2018 7:39 PM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] SIGSEGV after calling vlib_get_frame_to_node

Hi vpp-dev,

I'm seeing a crash when I enable our application with multiple workers.

Nov 26 14:29:32 vnet[64035]: received signal SIGSEGV, PC 0x7f6979a12ce8, faulting address 0x7fa6cd0bd444
Nov 26 14:29:32 vnet[64035]: #0  0x7f6a812743d8 0x7f6a812743d8
Nov 26 14:29:32 vnet[64035]: #1  0x7f6a80bc56d0 0x7f6a80bc56d0
Nov 26 14:29:32 vnet[64035]: #2  0x7f6979a12ce8 vlib_frame_vector_args + 0x10
Nov 26 14:29:32 vnet[64035]: #3  0x7f6979a16a2c tcpo_enqueue_to_output_i + 0xf4
Nov 26 14:29:32 vnet[64035]: #4  0x7f6979a16b23 tcpo_enqueue_to_output + 0x25
Nov 26 14:29:32 vnet[64035]: #5  0x7f6979a33fba send_packets + 0x7f2
Nov 26 14:29:32 vnet[64035]: #6  0x7f6979a346f8 connection_tx + 0x17e
Nov 26 14:29:32 vnet[64035]: #7  0x7f6979a34f08 tcpo_dispatch_node_fn + 0x7fa
Nov 26 14:29:32 vnet[64035]: #8  0x7f6a81248cb6 vlib_worker_loop + 0x6a6
Nov 26 14:29:32 vnet[64035]: #9  0x7f6a8094f694 0x7f6a8094f694

Running on CentOS 7.4 with kernel 3.10.0-693.el7.x86_64

VPP Version:       v18.10-13~g00adcce~b60
Compiled by:       root
Compile host:      b0f32e97e93a
Compile date:      Mon Nov 26 09:09:42 UTC 2018
Compile location:  /w/workspace/vpp-merge-1810-centos7
Compiler:          GCC 7.3.1 20180303 (Red Hat 7.3.1-5)
Current PID:       9612

On a Cisco server with 2-socket Intel Xeon E5-2697Av4 @ 2.60GHz and 2 Intel
X520 NICs. A T-Rex traffic generator is hooked up on the other end to provide
data at about 5 Gbps per NIC.

./t-rex-64 --astf -f astf/nginx_wget.py -c 14 -m 4 -d 3000

startup.conf:

unix {
  nodaemon
  interactive
  log /opt/tcpo/logs/vpp.log
  full-coredump
  cli-no-banner
  #startup-config /opt/tcpo/conf/local.conf
  cli-listen /run/vpp/cli.sock
}
api-trace {
  on
}
heapsize 3G
cpu {
  main-core 1
  corelist-workers 2-5
}
tcpo {
  runtime-config /opt/tcpo/conf/runtime.conf
  session-pool-size 1024000
}
dpdk {
  dev :86:00.0 {
    num-rx-queues 1
  }
  dev :86:00.1 {
    num-rx-queues 1
  }
  dev :84:00.0 {
    num-rx-queues 1
  }
  dev :84:00.1 {
    num-rx-queues 1
  }
  num-mbufs 1024000
  socket-mem 4096,4096
}
plugin_path /usr/lib/vpp_plugins
api-segment {
  gid vpp
}

Here's the function where the SIGSEGV is happening:

static void
enqueue_to_output_i (tcpo_worker_ctx_t * wrk, u32 bi, u8 flush)
{
  u32 *to_next, next_index;
  vlib_frame_t *f;

  TRACE_FUNC_VAR (bi);

  next_index = tcpo_output_node.index;

  /* Get frame to output node */
  f = wrk->tx_frame;
  if (!f)
    {
      f = vlib_get_frame_to_node (wrk->vm, next_index);
      ASSERT (clib_mem_is_heap_object (f));
      wrk->tx_frame = f;
    }

  ASSERT (clib_mem_is_heap_object (f));
  to_next = vlib_frame_vector_args (f);
  to_next[f->n_vectors] = bi;
  f->n_vectors += 1;

  if (flush || f->n_vectors == VLIB_FRAME_SIZE)
    {
      TRACE_FUNC_VAR2 (flush, f->n_vectors);
      vlib_put_frame_to_node (wrk->vm, next_index, f);
      wrk->tx_frame = 0;
    }
}

I've observed that after a few Gbps of traffic go through and we call
vlib_get_frame_to_node, the pointer f that gets returned points to a chunk of
memory that is invalid, as confirmed by the assert statement I added right
below it.

Not sure how to progress further on tracking down this issue; any help or
advice would be much appreciated.

Thanks,
Hugo
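For reference, the usual calling pattern for these frame APIs, without the
per-worker frame caching done above, is sketched below. This is a minimal
illustration rather than a fix for the crash; it assumes vm is the calling
worker's own vlib_main_t and next_index is a valid node index on that thread.

    #include <vlib/vlib.h>

    static void
    send_one_buffer_to_node (vlib_main_t * vm, u32 next_index, u32 bi)
    {
      /* Get a frame destined for next_index on this thread. */
      vlib_frame_t *f = vlib_get_frame_to_node (vm, next_index);
      u32 *to_next = vlib_frame_vector_args (f);

      /* Enqueue a single buffer index into the frame. */
      to_next[0] = bi;
      f->n_vectors = 1;

      /* Hand the frame (and its buffer) back to the graph scheduler. */
      vlib_put_frame_to_node (vm, next_index, f);
    }

Note that both the frame and the vlib_main_t are per-thread objects, which is
one reason caching a frame in a worker context and touching it from a
different thread can end badly.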
[vpp-dev] question about multicast mpls
Hi guys,

I found "multicast" in the MPLS CLI. Does VPP support multicast MPLS now? Is
there any example showing multicast MPLS?

Thank you very much for your reply.

Thanks,
Xue
[vpp-dev] question about ROSEN MVPN
Hi guys,

Does VPP support forwarding for Rosen MVPN (described in RFC 6037)? Or is it
on the VPP roadmap?

Thank you very much for your reply.

Thanks,
Xue
[vpp-dev] Regarding own pthreads
Hi,

If I spawn some of my own pthreads from the main thread, can I then safely use
the clib functions, e.g. clib_mem_alloc/free, inside my own pthread?

In general, are there any guidelines to be followed regarding one's own
pthreads? I would appreciate any input on this front.

Regards,
Prashant
Re: [vpp-dev] Regarding own pthreads
The main vpp heap is thread-safe, so yes, you can use clib_mem_alloc(...) /
clib_mem_free(...).

Please use the VLIB_REGISTER_THREAD(...) macro, which offers a number of
controls beyond "just go spin up a thread..." (a rough sketch follows after
the quoted message).

D.

-----Original Message-----
From: vpp-dev@lists.fd.io On Behalf Of Prashant Upadhyaya
Sent: Wednesday, November 28, 2018 7:21 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] Regarding own pthreads

Hi,

If I spawn some of my own pthreads from the main thread, then can I use the
clib functions inside my own pthread, eg. clib_mem_alloc/free, safely?

In general, are there any guidelines to be followed regarding own pthreads?
Would appreciate any input on this front.

Regards
-Prashant
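As a rough sketch of what such a registration can look like (the field names
are recalled from vlib_thread_registration_t and should be checked against
vlib/threads.h in your tree; the existing thread registrations in the source
are the authoritative examples):

    #include <vlib/vlib.h>

    /* Thread body; vlib starts it on a dedicated pthread. */
    static void
    my_thread_fn (void *arg)
    {
      while (1)
        {
          /* The main heap is thread-safe, so clib_mem_alloc/free are fine. */
          void *p = clib_mem_alloc (128);
          /* ... do some work with p ... */
          clib_mem_free (p);
        }
    }

    VLIB_REGISTER_THREAD (my_thread_reg, static) = {
      .name = "my-thread",
      .function = my_thread_fn,
      .fixed_count = 1,
      .count = 1,
      .use_pthreads = 1,
      .no_data_structure_clone = 1,
    };

With a registration like this, vlib creates and tracks the thread for you
instead of you calling pthread_create() directly.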
Re: [vpp-dev] question about multicast mpls
Hi Xue,

MPLS multicast has been supported for a while. Please see the unit tests for
examples: test/test_mpls.py, the test_mcast_*() functions.

Regards,
Neale

From: ... on behalf of xyxue
Date: Wednesday, 28 November 2018 at 13:04
To: vpp-dev
Subject: [vpp-dev] question about multicast mpls

Hi guys,

I found "multicast" in the MPLS CLI. Does VPP support multicast MPLS now? Is
there any example showing multicast MPLS?

Thank you very much for your reply.

Thanks,
Xue
Re: [vpp-dev] question about ROSEN MVPN
Hi Xue,

To my knowledge it has not been tried nor tested. GRE interfaces today do not
support a multicast destination address. However, other tunnel types (like
VXLAN) do, so adding support shouldn't be too hard. After that, the mfib
supports egress out of any interface type. I also have a draft in the pipeline
that supports recursing through an mfib entry, which will simplify multicast
tunnel implementations.

Regards,
neale

From: ... on behalf of xyxue
Date: Wednesday, 28 November 2018 at 13:06
To: vpp-dev
Subject: [vpp-dev] question about ROSEN MVPN

Hi guys,

Does VPP support forwarding for Rosen MVPN (described in RFC 6037)? Or is it
on the VPP roadmap?

Thank you very much for your reply.

Thanks,
Xue
[vpp-dev] #VPP_1804 What causes messages "binary API client 'client_name' died" and "cleanup ghost pid 'PID_NO'" to appear in vppctl #binapi #vpp
Dear friends,

I was wondering what could cause these messages to appear on the vppctl
console:

  vl_mem_send_client_keepalive_w_reg:539: REAPER: binary API client 'client_name' died

and

  svm_client_scan_this_region_nolock:1249: /global_vm: cleanup ghost pid PID_NO

I am developing on VPP 18.04 and these clib_warnings appear in vppctl a lot.
I would be very thankful if someone could guide me through this.

Yours sincerely,
Seyyed Mojtaba Rezvani
Re: [vpp-dev] vppcom: why __vcl_worker_index is thread local? #vpp
I am working on enabling forking with LDP, but keep in mind that we mainly
support it for testing purposes. That is, it can't work with statically linked
applications, and we don't plan on supporting all possible
socket/setsockopt/getsockopt/fcntl options.

Florin

> On Nov 28, 2018, at 3:15 AM, manuel.alo...@cavium.com wrote:
>
> Ok, thank you for the clarification. So, as far as I understand, host-stack
> preloading is not intended to work with forkable (because of the LDP
> destructor) and/or threaded (because of the mentioned index) applications.
>
> BR,
> Manuel
[vpp-dev] Verify issues (GRE)
Guys,

The verify jobs have been unstable over the last few days. We see some
instability in the Jenkins build system, in the test harness itself, and in
the tests. On my 18.04 machine I'm seeing intermittent failures in GRE, GBP,
DHCP and VCL.

It looks like Jenkins is functioning correctly now. Ed and I are also testing
a revert of all the changes made to the test framework itself over the last
couple of days. A bit harsh, but we think this might be the quickest way back
to some level of stability. Then we need to fix the tests that are themselves
unstable.

Any volunteers to see if they can figure out why GRE fails?

Cheers,
Ole

==============================================================================
GRE Test Case
==============================================================================
GRE IPv4 tunnel Tests                                                     OK
GRE IPv6 tunnel Tests                                                     OK
GRE tunnel L2 Tests                                                       OK
19:37:47,505 Unexpected packets captured:
  0000  02 01 00 00 FF 02 02 FE 70 A0 6A D3 08 00 45 00  ........p.j...E.
  0010  00 2A 00 01 00 00 3F 11 21 9F AC 10 01 01 AC 10  .*....?.!.......
  0020  01 02 04 D2 04 D2 00 16 72 A9 34 33 36 39 20 33  ........r.4369 3
  0030  20 33 20 2D 31 20 2D 31                           3 -1 -1
###[ Ethernet ]###
  dst       = 02:01:00:00:ff:02
  src       = 02:fe:70:a0:6a:d3
  type      = IPv4
###[ IP ]###
     version   = 4
     ihl       = 5
     tos       = 0x0
     len       = 42
     id        = 1
     flags     =
     frag      = 0
     ttl       = 63
     proto     = udp
     chksum    = 0x219f
     src       = 172.16.1.1
     dst       = 172.16.1.2
     \options   \
###[ UDP ]###
        sport     = 1234
        dport     = 1234
        len       = 22
        chksum    = 0x72a9
###[ Raw ]###
           load      = '4369 3 3 -1 -1'

Ten more packets

###[ UDP ]###
        sport     = 1234
        dport     = 1234
        len       = 22
        chksum    = 0x72a9
###[ Raw ]###
           load      = '4369 3 3 -1 -1'

** Ten more packets

Print limit reached, 10 out of 257 packets printed
19:37:47,770 REG: Couldn't remove configuration for object(s):
19:37:47,770 GRE tunnel VRF Tests                                      ERROR
        [ temp dir used by test case: /tmp/vpp-unittest-TestGRE-hthaHC ]
==============================================================================
ERROR: GRE tunnel VRF Tests
------------------------------------------------------------------------------
Traceback (most recent call last):
  File "/vpp/16257/test/test_gre.py", line 61, in tearDown
    super(TestGRE, self).tearDown()
  File "/vpp/16257/test/framework.py", line 546, in tearDown
    self.registry.remove_vpp_config(self.logger)
  File "/vpp/16257/test/vpp_object.py", line 86, in remove_vpp_config
    (", ".join(str(x) for x in failed)))
Exception: Couldn't remove configuration for object(s): 1:2.2.2.2/32
==============================================================================
FAIL: GRE tunnel VRF Tests
------------------------------------------------------------------------------
Traceback (most recent call last):
  File "/vpp/16257/test/test_gre.py", line 787, in test_gre_vrf
    remark="GRE decap packets in wrong VRF")
  File "/vpp/16257/test/vpp_pg_interface.py", line 264, in assert_nothing_captured
    (self.name, remark))
AssertionError: Non-empty capture file present for interface pg0 (GRE decap
packets in wrong VRF)
[vpp-dev] DPDK 18.11 (stable LTE)
Hello,

Can someone tell me which release of VPP will be the first to use DPDK 18.11?
I am a bit hesitant about using a non-stable release, e.g. DPDK 18.08. I am
asking as I have seen several email responses here advocating that folks move
to VPP 18.10, which of course uses DPDK 18.08. Will VPP 19.01 be picking up
DPDK 18.11?

Regards,
Mike
Re: [vpp-dev] DPDK 18.11 (stable LTE)
— Damjan

> On 28 Nov 2018, at 21:49, Bly, Mike wrote:
>
> Hello,
>
> Can someone tell me what the next release of VPP will be, which will use
> DPDK 18.11? I am a bit hesitant about using a non-stable release, e.g.
> DPDK 18.08. I am asking as I have seen several email responses here
> advocating folks move to VPP 18.10, which of course uses DPDK 18.08. Will
> VPP 19.01 be picking up DPDK 18.11?

Already in gerrit: https://gerrit.fd.io/r/#/c/16214/

Will be merged after a bit of testing...
Re: [**EXTERNAL**] Re: [vpp-dev] DPDK 18.11 (stable LTE)
As always, one step ahead of my questions. ☺

Thank you,
Mike

From: vpp-dev@lists.fd.io On Behalf Of Damjan Marion via Lists.Fd.Io
Sent: Wednesday, November 28, 2018 12:55 PM
To: Bly, Mike
Cc: vpp-dev@lists.fd.io
Subject: [**EXTERNAL**] Re: [vpp-dev] DPDK 18.11 (stable LTE)

— Damjan

On 28 Nov 2018, at 21:49, Bly, Mike <m...@ciena.com> wrote:

Hello,

Can someone tell me what the next release of VPP will be, which will use
DPDK 18.11? I am a bit hesitant about using a non-stable release, e.g.
DPDK 18.08. I am asking as I have seen several email responses here advocating
folks move to VPP 18.10, which of course uses DPDK 18.08. Will VPP 19.01 be
picking up DPDK 18.11?

Already in gerrit: https://gerrit.fd.io/r/#/c/16214/

Will be merged after a bit of testing...