Thanks Steven for the response.
As per VPP 18.01, only the bonded interface state is shown in the "show interface"
CLI.
Thanks,
Chetan
On Mon, Apr 20, 2020 at 8:49 PM Steven Luong (sluong)
wrote:
> First, your question has nothing to do with bonding. Whatever you are
> seeing is true regardless of
> 3n-hsw
Around the 2nd of April a regression happened
in the eth-l2bdscale1mmaclrn test, visible here [0].
Trending needed multiple runs to identify it was there,
and alerting is configured not to report "old" regressions
(so it did not report this one).
Anyway, a closer look shows previously the
First, your question has nothing to do with bonding. Whatever you are seeing is
true regardless of bonding configured or not.
"Show interfaces" displays the admin state of the interface. Whenever you set the
admin state to up, it is displayed as up regardless of whether the physical carrier
is up or down.
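The distinction Steven describes can be sketched in a few lines. This is an illustrative model only (the `Interface` class and its fields are hypothetical, not actual VPP code): the display path reads only the operator-set admin state, never the carrier state.

```python
# Minimal sketch (NOT actual VPP code) of admin state vs. carrier state.
# "show interface" style output reflects only what the operator configured.

class Interface:
    def __init__(self, name):
        self.name = name
        self.admin_up = False   # set by the operator, e.g. "set interface state <if> up"
        self.link_up = False    # driven by the physical carrier

    def show(self):
        # Mirrors the behavior described: only the admin state is displayed.
        return f"{self.name}  {'up' if self.admin_up else 'down'}"

eth0 = Interface("BondEthernet0")
eth0.admin_up = True   # operator brings the interface up
eth0.link_up = False   # but the physical carrier is down
print(eth0.show())     # prints "BondEthernet0  up" despite the carrier being down
```

This is why an interface can show "up" while passing no traffic: the admin state and the carrier state are independent.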
> -rnd-mrr
This is a consequence of [0],
fixing an old bug in CSIT code.
Previously, the traffic was not random enough.
Vratko.
[0] https://gerrit.fd.io/r/c/csit/+/26456
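To illustrate what "traffic not random enough" means in practice, here is a hedged sketch of randomized flow generation. The helper name `random_ipv4` and the address block are hypothetical; the actual CSIT/TRex profile code fixed in [0] is different, this only shows the general idea that a proper PRNG yields mostly distinct addresses.

```python
import random

def random_ipv4(rng, base="10.0.0.0", hosts=2**16):
    # Hypothetical helper: pick a uniformly random host address in a /16-sized
    # block. Illustrates the idea behind the randomness fix, not the real code.
    base_int = sum(int(o) << (8 * i) for i, o in enumerate(reversed(base.split("."))))
    addr = base_int + rng.randrange(hosts)
    return ".".join(str((addr >> (8 * i)) & 0xFF) for i in reversed(range(4)))

rng = random.Random(42)  # seeded for reproducibility
addrs = {random_ipv4(rng) for _ in range(1000)}
# With sufficient randomness, 1000 draws from 65536 hosts repeat very rarely,
# so the sampled set stays close to 1000 distinct addresses.
```

A traffic profile with a biased or repeating address stream exercises fewer FIB/MAC entries than intended, which is exactly the kind of bug that shifts benchmark results when fixed.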
-----Original Message-----
From: csit-rep...@lists.fd.io On Behalf Of
nore...@jenkins.fd.io
Sent: Saturday, 2020-April-18 1
Ack, thanks for the info.
From: Jan Gelety -X (jgelety - PANTHEON TECH SRO at Cisco)
Sent: Monday, April 20, 2020 4:40 AM
To: Dave Barach (dbarach) ; csit-...@lists.fd.io
Cc: vpp-dev@lists.fd.io
Subject: RE: perftest stuck: "do_not_use_dut2_ssd_failure"
Hello Dave,
3n-skx perf job has been aborted.
> 3n-hsw
Progressions show that the fixes described in [4]
do restore the previous performance.
Vratko.
[4] https://lists.fd.io/g/csit-report/message/2488
-----Original Message-----
From: csit-rep...@lists.fd.io On Behalf Of
nore...@jenkins.fd.io
Sent: Sunday, 2020-April-19 06:07
To: Fdio+Csit-
> 3n-hsw
This turned out to be a bug on the CSIT side,
breaking NUMA detection and affecting every NIC on NUMA node 1.
The bug was introduced in [0] and fixed in [1] and [2].
Vratko.
[0] https://gerrit.fd.io/r/c/csit/+/25363
[1] https://gerrit.fd.io/r/c/csit/+/26569
[2] https://gerrit.fd.io/r/c/csit/+/26572
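For context on how this class of bug can occur, here is a hedged sketch of PCI NUMA-node detection via sysfs. The function name and fallback policy are hypothetical (this is not the CSIT code fixed in [1]/[2]); the kernel does expose a `numa_node` attribute per PCI device, which reads -1 when the platform reports no affinity.

```python
from pathlib import Path

def pci_numa_node(pci_addr, sysfs_root="/sys/bus/pci/devices"):
    # Illustrative only -- not the actual CSIT detection code.
    # The kernel exposes the NUMA node of a PCI device in sysfs;
    # the file contains -1 when no NUMA affinity is reported.
    node_file = Path(sysfs_root) / pci_addr / "numa_node"
    node = int(node_file.read_text().strip())
    # A detection bug of the kind described would make a NIC on node 1
    # look like node 0, pinning worker threads to the wrong socket and
    # silently degrading performance.
    return 0 if node < 0 else node
```

Mis-pinned workers cross the inter-socket interconnect on every packet, which is why the regression hit every NIC on NUMA node 1 at once.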
Hello Dave,
3n-skx perf job has been aborted.
I guess you can use a 2n-skx testbed to test your changes, so please use the
trigger perftest-2n-skx.
ETA for availability of 3n-skx perf testbeds is unknown at the moment as we are
waiting for new/repaired SSDs.
Regards,
Jan
From: vpp-dev@lists.fd.io
Hi Chris,
Comments inline...
On 15/04/2020 15:14, "Christian Hopps" wrote:
Hi Neale,
I agree that something like 4 is probably the correct approach. I had a
side-meeting with some of the ARM folks (Govind and Honnappa), and we thought
using a generation number for the state rather
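The generation-number idea mentioned above can be sketched as follows. This is a hedged illustration under assumed semantics (names like `GenState` are hypothetical, not from VPP): every update bumps a counter, so a reader holding a snapshot can cheaply detect that its view is stale instead of acting on outdated state.

```python
# Hedged sketch of generation-number state tracking (hypothetical names).

class GenState:
    def __init__(self, value):
        self.value = value
        self.generation = 0

    def update(self, value):
        self.value = value
        self.generation += 1   # bumping invalidates all outstanding snapshots

    def snapshot(self):
        # A reader captures (value, generation) at one point in time.
        return (self.value, self.generation)

    def is_stale(self, snap):
        _, gen = snap
        return gen != self.generation

state = GenState("state-v1")
snap = state.snapshot()
state.update("state-v2")
# snap was taken at generation 0; the state is now generation 1,
# so is_stale(snap) detects the change without comparing values.
```

The appeal of this scheme is that staleness checks are a single integer comparison, regardless of how large or complex the underlying state is.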