Aside from this p2p_ethernet load test, the only remaining extended-test 
failure is the whole TestVxlanGpe class.

The reason is that scapy 2.3.3 does not know the VXLAN layer, and there is no 
scapy patch within vpp to address this.
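For reference, the 8-byte VXLAN header (RFC 7348) that the missing scapy layer would need to model can be built with plain struct packing. This is just an illustrative sketch; `build_vxlan_header` and `parse_vxlan_header` are hypothetical helper names, not part of scapy or vpp:

```python
import struct

# VXLAN header (RFC 7348): 8 bits flags, 24 bits reserved,
# 24 bits VNI, 8 bits reserved.
VXLAN_FLAG_I = 0x08  # "VNI valid" flag

def build_vxlan_header(vni):
    # Pack as two big-endian 32-bit words: flags in the top byte of
    # the first word, VNI in the top three bytes of the second.
    return struct.pack("!II", VXLAN_FLAG_I << 24, (vni & 0xFFFFFF) << 8)

def parse_vxlan_header(data):
    w1, w2 = struct.unpack("!II", data[:8])
    return w1 >> 24, w2 >> 8  # (flags, vni)
```

Newer scapy releases ship a VXLAN layer of their own, so upgrading scapy would be the other obvious option.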

How do you want to handle this?


--

Gabriel Ganne

________________________________
From: vpp-dev-boun...@lists.fd.io <vpp-dev-boun...@lists.fd.io> on behalf of 
Brian Brooks <brian.bro...@arm.com>
Sent: Tuesday, November 14, 2017 1:27:47 PM
To: Klement Sekera -X (ksekera - PANTHEON TECHNOLOGIES at Cisco); Ole Troan
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] make test-all

It does not complete within 20 minutes on other machines.

> I don't think we should do scale tests as part of verification tests.

Agree


-----Original Message-----
From: Klement Sekera -X (ksekera - PANTHEON TECHNOLOGIES at Cisco) 
[mailto:ksek...@cisco.com]
Sent: Tuesday, November 14, 2017 2:47 AM
To: Ole Troan <otr...@employees.org>
Cc: Dave Barach (dbarach) <dbar...@cisco.com>; John Lo (loj) <l...@cisco.com>; 
Pavel Kotucek -X (pkotucek - PANTHEON TECHNOLOGIES at Cisco) 
<pkotu...@cisco.com>; vpp-dev@lists.fd.io; Brian Brooks <brian.bro...@arm.com>
Subject: Re: [vpp-dev] make test-all

It is ...

It takes ~10 seconds to create 1000 subifs on a beefy UCS, and the test case 
tries to create 100000, so that would indicate a runtime of ~16 minutes...
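The estimate can be checked with trivial arithmetic (assuming linear scaling, which subinterface creation may not actually follow in practice):

```python
# Back-of-envelope runtime estimate from the figures in the thread.
secs_per_1000 = 10          # ~10 s per 1000 subifs on a beefy UCS
total_subifs = 100_000      # what the test case tries to create

runtime_s = total_subifs / 1000 * secs_per_1000
print(runtime_s / 60)       # roughly 16-17 minutes
```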

Quoting Ole Troan (2017-11-14 03:53:28)
> Klement,
>
> > I don't know what to tell you ... I was never a fan of getting the
> > API trace post test run and putting it into test log (which is the
> > cause of 25MB allocation in this case - it's the CLI output) - from
> > my POV this is duplicate information as every API call is already
> > logged in-place when it's executed... BUT I didn't see any harm in
> > doing so (besides clutter) and since the "create bazillion
> > subinterfaces" test went in without me being part of review process, I 
> > wasn't aware of it...
>
> If this is in test_p2p_ethernet, let me take that out.
> I don't think we should do scale tests as part of verification tests.
> I do not like the path we're on with regards to test run time.
>
> Cheers,
> Ole
>
> > Quoting Dave Barach (dbarach) (2017-11-13 13:33:41)
> >> Try increasing the size of the shared-memory API segment. An allocation of 
> >> 25mb is failing. You might ask yourself how sane it is to generate that 
> >> much output.
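For anyone hitting the same allocation failure: the knob Dave refers to lives in the api-segment stanza of startup.conf. The sizes below are purely illustrative, and the exact parameter names should be checked against the VPP version in use:

```
api-segment {
  global-size 64M
  api-size 16M
}
```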
> >>
> >> Thanks… Dave
> >>
> >> -----Original Message-----
> >> From: vpp-dev-boun...@lists.fd.io
> >> [mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Klement Sekera -X
> >> (ksekera - PANTHEON TECHNOLOGIES at Cisco)
> >> Sent: Monday, November 13, 2017 5:27 AM
> >> To: John Lo (loj) <l...@cisco.com>; Pavel Kotucek -X (pkotucek -
> >> PANTHEON TECHNOLOGIES at Cisco) <pkotu...@cisco.com>;
> >> vpp-dev@lists.fd.io; Brian Brooks <brian.bro...@arm.com>
> >> Subject: Re: [vpp-dev] make test-all
> >>
> >> So it seems that vpp coredumps while dumping the API trace after
> >> creating all the interfaces...
> >>
> >> (gdb) bt
> >> #0  0x00007f14f4b1e428 in __GI_raise (sig=sig@entry=6) at
> >> ../sysdeps/unix/sysv/linux/raise.c:54
> >> #1  0x00007f14f4b2002a in __GI_abort () at abort.c:89
> >> #2  0x0000000000405d83 in os_panic () at
> >> /home/ksekera/vpp/build-data/../src/vpp/vnet/main.c:268
> >> #3  0x00007f14f5fe5f86 in clib_mem_alloc_aligned_at_offset 
> >> (os_out_of_memory_on_failure=1, align_offset=0, align=1, size=25282098)
> >>    at /home/ksekera/vpp/build-data/../src/vppinfra/mem.h:105
> >> #4  clib_mem_alloc (size=25282098) at
> >> /home/ksekera/vpp/build-data/../src/vppinfra/mem.h:114
> >> #5  vl_msg_api_alloc_internal (may_return_null=0, pool=<optimized out>, 
> >> nbytes=25282098)
> >>    at
> >> /home/ksekera/vpp/build-data/../src/vlibmemory/memory_shared.c:176
> >> #6  vl_msg_api_alloc (nbytes=nbytes@entry=25282082) at
> >> /home/ksekera/vpp/build-data/../src/vlibmemory/memory_shared.c:207
> >> #7  0x0000000000411392 in vl_api_cli_inband_t_handler
> >> (mp=0x300e2a0c) at
> >> /home/ksekera/vpp/build-data/../src/vpp/api/api.c:223
> >> #8  0x00007f14f5fdfa23 in vl_msg_api_handler_with_vm_node 
> >> (am=am@entry=0x7f14f620d460 <api_main>, the_msg=the_msg@entry=0x300e2a0c,
> >>    vm=vm@entry=0x7f14f5fd6260 <vlib_global_main>,
> >> node=node@entry=0x7f14b410e000) at
> >> /home/ksekera/vpp/build-data/../src/vlibapi/api_shared.c:508
> >> #9  0x00007f14f5fef35f in memclnt_process (vm=0x7f14f5fd6260 
> >> <vlib_global_main>, node=0x7f14b410e000, f=<optimized out>)
> >>    at
> >> /home/ksekera/vpp/build-data/../src/vlibmemory/memory_vlib.c:970
> >>
> >> (gdb) p input
> >> $5 = {buffer = 0x7f14b56f6558 "dump
> >> /tmp/vpp-unittest-P2PEthernetAPI-qRwMY6/vpp_api_trace.test_p2p_subi
> >> f_creation_10k.log\n",  index = 18446744073709551615, buffer_marks
> >> = 0x7f14b592a240, fill_buffer = 0x0, fill_buffer_arg = 0x0}
> >>
> >> I'm pretty sure that the history of this mess was:
> >>
> >> 1.) the test was added first as enhanced
> >> 2.) automatic dump of api trace was added, but only tested against 'make 
> >> test', not 'make test-all'
> >>
> >> Thanks,
> >> Klement
> >>
> >> Quoting Klement Sekera (2017-11-11 22:12:52)
> >>> Hi Brian,
> >>>
> >>> it should. Though I just tried running it on latest master and got
> >>> a timeout in test_p2p_ethernet, which shouldn't happen. I see the
> >>> test was trying to create tens of thousands of interfaces... maybe
> >>> something is slower than usual?
> >>>
> >>> Thanks,
> >>> Klement
> >>>
> >>> Quoting Brian Brooks (2017-11-11 01:11:47)
> >>>>   Should “make test-all” pass?
> >>>>
> >>>>
> >>>>
> >>>>   Thanks,
> >>>>
> >>>>   Brian
> >>>>
> >>>>
> >>>>
> >>>>   IMPORTANT NOTICE: The contents of this email and any attachments are
> >>>>   confidential and may also be privileged. If you are not the intended
> >>>>   recipient, please notify the sender immediately and do not disclose the
> >>>>   contents to any other person, use it for any purpose, or store or copy 
> >>>> the
> >>>>   information in any medium. Thank you.
> >>> _______________________________________________
> >>> vpp-dev mailing list
> >>> vpp-dev@lists.fd.io
> >>> https://lists.fd.io/mailman/listinfo/vpp-dev
>