Thanks Ping. Florin has already suggested trying out the master code; I will
try that out and send an update. Somehow this mail was sent to the mailing
list twice by mistake.

Regards,
Akshaya N

On Tue, Oct 29, 2019 at 12:07 PM Yu, Ping <ping...@intel.com> wrote:

> We have some application-based tests for VCL as well, and they look fine. To
> understand more, you should check which line in mspace_malloc triggers the
> signal.
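>
> For instance (a minimal sketch, assuming a VPP build with debug symbols and
> a captured core file; the binary path and frame number are illustrative):
>
>     gdb /usr/bin/vpp /path/to/core
>     (gdb) bt            # find the mspace_malloc frame in the backtrace
>     (gdb) frame 3       # select that frame
>     (gdb) list          # show the source line that triggered the signal
>     (gdb) info locals   # inspect the allocator state at the fault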
>
> Attached is our vcl.conf for you to try, if that is okay.
>
> vcl {
>   rx-fifo-size 5000
>   tx-fifo-size 5000
>   app-scope-local
>   app-scope-global
>   api-socket-name /root/run/vpp/vpp-api.sock
>   event-queue-size 1000000
>   segment-size 1024000000
>   add-segment-size 1024000000
>   vpp-api-q-length 65536
>   max-workers 36
>   #use-mq-eventfd
> }
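>
> (For reference: VCL loads this file from the path given in the VCL_CONFIG
> environment variable, falling back to /etc/vpp/vcl.conf, so something like
>
>     export VCL_CONFIG=/root/vcl.conf
>
> before launching the application should pick it up; the path here is
> illustrative.)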
>
> *From:* vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] *On Behalf Of*
> Akshaya Nadahalli
> *Sent:* Thursday, October 24, 2019 4:38 PM
> *To:* vpp-dev@lists.fd.io
> *Subject:* [vpp-dev] VPP crash in TCP FIFO allocation
>
> Hi,
>
> While testing the VPP hoststack with a large number of TCP connections, I
> see a VPP crash in FIFO allocation. The crash is always seen between 11k and
> 12k TCP connections. Changing the vcl config
> (segment-size/add-segment-size/rx-fifo-size/tx-fifo-size) doesn't change the
> behaviour: the crash always appears after establishing around 11k to 12k TCP
> sessions.
>
> The test client uses the vppcom APIs. The call stack for the crash is
> attached. I am using 19.08 code with the non-blocking patch cherry-picked,
> and all sessions are non-blocking. Is anyone aware of any issues/defect
> fixes related to this?
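>
> For context, a minimal sketch of the kind of non-blocking vppcom client loop
> being run (assuming vcl/vppcom.h from 19.08; the app name, server address,
> port, and session count are illustrative, not the actual test client):
>
>     /* stress_client.c: open many non-blocking TCP sessions via VCL */
>     #include <stdio.h>
>     #include <arpa/inet.h>
>     #include <vcl/vppcom.h>
>
>     #define N_SESSIONS 12000  /* around where the crash is seen */
>
>     int main (void)
>     {
>       struct in_addr server_ip;
>       inet_pton (AF_INET, "10.0.0.1", &server_ip);
>
>       vppcom_endpt_t ep = { 0 };
>       ep.is_ip4 = 1;
>       ep.ip = (uint8_t *) &server_ip;
>       ep.port = htons (8080);
>
>       if (vppcom_app_create ("stress-client") != VPPCOM_OK)
>         return 1;
>
>       for (int i = 0; i < N_SESSIONS; i++)
>         {
>           /* second argument = 1 requests a non-blocking session */
>           int sh = vppcom_session_create (VPPCOM_PROTO_TCP, 1);
>           if (sh < 0)
>             break;
>           int rv = vppcom_session_connect (sh, &ep);
>           /* a non-blocking connect typically returns VPPCOM_EINPROGRESS */
>           if (rv != VPPCOM_OK && rv != VPPCOM_EINPROGRESS)
>             {
>               fprintf (stderr, "connect failed at session %d (rv %d)\n",
>                        i, rv);
>               break;
>             }
>         }
>
>       vppcom_app_destroy ();
>       return 0;
>     }
>
> Each successful connect allocates an rx/tx FIFO pair in the session layer,
> so a loop like this exercises the allocation path named in the crash.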
>
> Regards,
>
> Akshaya N
>