Hi Akshaya, 

Glad you were able to solve the issue. We’re slowly moving away from shm fifo 
segments, i.e., /dev/shm segments, in favor of memfd segments. 
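
An aside on the /dev/shm question further down: a memfd segment is backed by an 
anonymous file created with memfd_create(2), which lives in the kernel's internal 
shared memory and never shows up under /dev/shm, so it is not constrained by the 
size of that mount the way a shm_open(3)/ftruncate segment is. Here is a minimal 
standalone sketch of the two mechanisms, independent of VPP's actual segment code 
(assumes Linux with glibc 2.27+ for memfd_create):

  /* Sketch only: contrast a named shm segment with a memfd segment.
   * Build: cc memfd_sketch.c -lrt */
  #define _GNU_SOURCE
  #include <stdio.h>
  #include <fcntl.h>
  #include <unistd.h>
  #include <sys/mman.h>

  #define SEG_SIZE (1 << 20)	/* 1 MB segment, illustrative */

  int
  main (void)
  {
    /* shm segment: a named file under /dev/shm, so its size counts
     * against that tmpfs mount (the 100 MB limit hit below). */
    int shm_fd = shm_open ("/demo-seg", O_CREAT | O_RDWR, 0600);
    if (shm_fd < 0 || ftruncate (shm_fd, SEG_SIZE) < 0)
      return 1;

    /* memfd segment: an anonymous kernel-backed file with no entry in
     * /dev/shm, hence not limited by that mount's size. */
    int mem_fd = memfd_create ("demo-seg", 0);
    if (mem_fd < 0 || ftruncate (mem_fd, SEG_SIZE) < 0)
      return 1;

    /* Either fd can be mmap'ed; the memfd fd can also be passed over a
     * unix socket so another process can map the same segment. */
    void *shm_base = mmap (0, SEG_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, shm_fd, 0);
    void *mem_base = mmap (0, SEG_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, mem_fd, 0);
    printf ("shm segment at %p, memfd segment at %p\n", shm_base, mem_base);

    shm_unlink ("/demo-seg");
    return 0;
  }

Note that memfd memory is still ordinary shared memory as far as overall memory 
accounting and cgroup limits go; it just is not capped by the size of the /dev/shm 
mount.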

Regards, 
Florin

> On Nov 4, 2019, at 3:11 AM, Akshaya Nadahalli <akshaya.nadaha...@gmail.com> 
> wrote:
> 
> Hi Florin,
> 
> This crash was due to a hard limit of 100 MB set on the /dev/shm partition. After 
> increasing it, I am able to scale to more connections.
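> 
> (As a rough sanity check on those numbers: if each session gets, say, a 4 KB rx 
> fifo and a 4 KB tx fifo, then ~12k sessions need about 12,000 x 8 KB, i.e., roughly 
> 96 MB of segment memory, which is right at the 100 MB cap. The fifo sizes here are 
> purely illustrative, not the actual configuration.)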
> 
> For fifo allocation, I see that we can use either shm or memfd. Is there any 
> recommendation/preference on which one to use? Does memfd also internally use 
> /dev/shm for memory allocation?
> 
> Regards,
> Akshaya N
> 
> On Fri, Oct 25, 2019 at 2:03 AM Akshaya Nadahalli 
> <akshaya.nadaha...@gmail.com> wrote:
> Sure, I will move to master and try it out. I will update once I am able to test 
> with it.
> 
> Regards,
> Akshaya N
> 
> On Thu, Oct 24, 2019 at 8:03 PM Florin Coras <fcoras.li...@gmail.com> wrote:
> Hi Akshaya, 
> 
> Can you also try with master?
> 
> Thanks,
> Florin
> 
> > On Oct 24, 2019, at 4:35 AM, Akshaya Nadahalli <akshaya.nadaha...@gmail.com> wrote:
> > 
> > Hi,
> > 
> > While testing the VPP hoststack with a large number of TCP connections, I see a 
> > VPP crash in fifo allocation. The crash is always seen between 11k and 12k TCP 
> > connections. Changing the vcl config 
> > (segment-size/add-segment-size/rx-fifo-size/tx-fifo-size) doesn't change the 
> > behaviour; the crash always occurs after establishing around 11k to 12k TCP 
> > sessions.
> > 
> > The test client uses the vppcom APIs. The call stack for the crash is attached. 
> > I am using the 19.08 code with the non-blocking patch cherry-picked, and all 
> > sessions are non-blocking. Is anyone aware of any related issues or defect fixes?
> > 
> > Regards,
> > Akshaya N
> > <fifo_alloc_crash.txt>
> 
