Hi Sastry,

As a first step, since you've just started development, I'd switch to the latest vpp to pick up recent features and fixes.

Regarding the issue you're hitting: in older releases you had to explicitly request a memfd type of segment for the event queues (evt_qs_memfd_seg). That has recently been made the default, so you won't need the option anymore.

So why did vpp crash? One thing that could happen in older releases was dpdk requesting a large chunk of memory at an address range that overlapped what the session layer was using. Configuring session-baseva to a large value like 0x2000000000 might help with that; this too is no longer needed in recent versions of vpp.
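For a 20.05-era deployment, the session stanza could then look roughly like the sketch below. This is only a sketch: evt_qs_memfd_seg and event-queue-length are taken from your earlier mail, session-baseva uses the example value above, and I'm assuming that release's session config parser accepts session-baseva in this stanza, so it's worth double-checking against its session.c. On recent vpp none of this is needed.

session {
  # request memfd segments for event queues (default on recent vpp)
  evt_qs_memfd_seg
  event-queue-length 100000
  # move the session layer's base VA out of the range dpdk may grab
  session-baseva 0x2000000000
}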
Regarding some other configs I've noticed in previous emails (a trimmed example follows this list):

- socket-mem is not used anymore
- no-multi-seg: is there any reason why you'd want to disable multi-segment support?
- try first with 1 worker instead of 6
- the number of rx queues should equal the number of workers for best performance
- the rx/tx descriptor counts are pretty large; depending on the use case you may be able to reduce them down to 256
- depending on the descriptor counts and the number of workers, you may also be able to reduce buffers-per-numa
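Putting those points together, a trimmed version of your cpu/buffers/dpdk sections could look roughly like this. Again just a sketch: the buffers-per-numa value of 16384 is an illustrative starting point, not a recommendation, and should be tuned to your descriptor and worker counts.

cpu {
  main-core 1
  # start with a single worker
  corelist-workers 2
}

buffers {
  # much smaller pool is usually enough with 1 worker and 256 descriptors
  buffers-per-numa 16384
}

dpdk {
  dev default {
    # rx queues = workers
    num-rx-queues 1
    num-tx-queues 1
    num-rx-desc 256
    num-tx-desc 256
  }

  dev 0000:00:06.0
  dev 0000:00:09.0
  # no-multi-seg and socket-mem dropped per the notes above
}

With 1 worker, num-rx-queues 1 already matches the rx-queues-equal-workers guideline; if you later scale back up to 6 workers, raise num-rx-queues to 6 as well.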
Regards,
Florin

> On May 5, 2021, at 6:56 AM, Sastry Sista <sastry.si...@gmail.com> wrote:
>
> Hi Florin,
> My VPP version is 20.05 fdio.
>
> I suspect the issue is seg_type, i.e. why it is not selecting SHM.
>
> Also, my startup.conf does not have the session stuff:
>
> session {
>   evt_qs_memfd_seg
>   event-queue-length 100000
> }
>
> When I add these, vpp crashes.
>
> With Regards
> Sastry
>
>
> On Wed, May 5, 2021 at 6:01 PM Sastry Sista via lists.fd.io <sastry.sista=gmail....@lists.fd.io> wrote:
> Hi Florin,
> Thank you for the reply.
>
> I am trying to develop a similar kind of envoy app using VCL, so I need both the socket transport for the binary API and the shared client for rx/tx of TCP data.
> So I need use-mq-eventfd in vcl.conf, right? Anyway, I tried removing it, but I still hit the same issue.
>
> my VCL:
> ======
> vcl {
>   rx-fifo-size 400000
>   tx-fifo-size 400000
>   app-scope-global
>   api-socket-name /run/vpp/api.sock
> }
> =========
> My VPP startup.conf:
> ==================
> unix {
>   nodaemon
>   log /var/log/vpp/vpp.log
>   full-coredump
>   cli-listen /run/vpp/cli.sock
> }
>
> api-trace {
>   on
> }
>
> socksvr {
>   default
> }
>
> cpu {
>   main-core 1
>   corelist-workers 2,3,4,5,6,7
> }
>
> buffers {
>   buffers-per-numa 128000
> }
>
> dpdk {
>   dev default {
>     num-rx-queues 1
>     num-tx-queues 1
>     num-rx-desc 4096
>     num-tx-desc 4096
>   }
>
>   dev 0000:00:06.0
>   dev 0000:00:09.0
>   no-multi-seg
>   socket-mem 2048,0
> }
>
> plugins
> {
>   path /usr/cna/bld-dataplane_base/base//cni-infra-dataplane/fdio/src/fdio/build-root/build-vpp_debug-native/vpp/lib/vpp_plugins/
> }
> ====================
>
> At VPP code:
> ===========
>
> In function:
>
> 482 static int
> 483 application_alloc_and_init (app_init_args_t * a)
> 484 {
> 485   ssvm_segment_type_t seg_type = SSVM_SEGMENT_MEMFD;
> 486   segment_manager_props_t *props;
> 487   vl_api_registration_t *reg;
> 488   application_t *app;
> 489   u64 *options;
> 490
> ...............
> 496   if (!(options[APP_OPTIONS_FLAGS] & APP_OPTIONS_FLAGS_IS_BUILTIN))
> 497     {
> 498       reg = vl_api_client_index_to_registration (a->api_client_index);
> 499       if (!reg)
> 500         return VNET_API_ERROR_APP_UNSUPPORTED_CFG;
> 501       if (vl_api_registration_file_index (reg) == VL_API_INVALID_FI)
> 502         seg_type = SSVM_SEGMENT_SHM;
> 503     }
> 504   else
> 505     {
> 506       seg_type = SSVM_SEGMENT_PRIVATE;
> 507     }
> 508
> 509   if ((options[APP_OPTIONS_FLAGS] & APP_OPTIONS_FLAGS_EVT_MQ_USE_EVENTFD)
> 510       && seg_type != SSVM_SEGMENT_MEMFD)
> 511     {
> 512       clib_warning ("mq eventfds can only be used if socket transport is "
> 513                     "used for binary api");
> 514       return VNET_API_ERROR_APP_UNSUPPORTED_CFG;
> 515     }
> 516
> 517   if (!application_verify_cfg (seg_type))
> 518     return VNET_API_ERROR_APP_UNSUPPORTED_CFG;
> ...........
> }
>
> It is hitting the error return at line 518, i.e. application_verify_cfg (seg_type) fails with seg_type = SSVM_SEGMENT_MEMFD.
>
> gdb:
> ====
> (gdb) p seg_type
> $22 = SSVM_SEGMENT_MEMFD
> (gdb) n
> 492       options = a->options;
> (gdb) p *a
> $23 = {api_client_index = 0, name = 0x7fffce651780 "vcl_test_client[shm]",
>   options = 0x7ffad802af02, namespace_id = 0x7fffce651750 "",
>   session_cb_vft = 0x7ffff7f3f4e8 <session_mq_cb_vft>, app_index = 0}
> (gdb) p *a->options
> $24 = 163
>
> With Regards
> Sastry
>
> On Tue, May 4, 2021 at 8:13 PM Florin Coras <fcoras.li...@gmail.com> wrote:
> Hi,
>
> Is there anything configured on the vpp side for the session layer? Is this vpp 21.06rc0 or something older? The error number seems to suggest an older release.
>
> One option would be to just comment out use-mq-eventfd and see if that fixes the issue. Message queue eventfds should work with the binary api, but the rest of the configs on the vpp and vcl side must be compatible with them.
>
> Regards,
> Florin
>
>> On May 4, 2021, at 4:23 AM, sastry.si...@gmail.com wrote:
>>
>> Hi,
>> I am trying to use vcl_test_client with the vcl config below.
>>
>> While trying to run it, I see the following error:
>>
>> vppcom_connect_to_vpp:502: vcl<1876:0>: app (vcl_test_client) is connected to VPP!
>> vppcom_app_create:1203: vcl<1876:0>: sending session enable
>> vppcom_app_create:1211: vcl<1876:0>: sending app attach
>> vl_api_app_attach_reply_t_handler:82: vcl<0:-1>: ERROR attach failed: Unsupported application config (-108)
>>
>> Could you please let me know why this is unsupported at VPP?
>>
>> vcl {
>>   rx-fifo-size 400000
>>   tx-fifo-size 400000
>>   app-scope-global
>>   api-socket-name /run/vpp/api.sock
>>   #api-socket-name /run/vpp/cli.sock
>>   use-mq-eventfd
>> }