Folks,
As you may have already noticed, Jenkins job operations have returned to
normal.
The root cause of this outage was a Vexxhost technician incorrectly
updating firewall rules, which isolated a secondary data center from the
primary one hosting the FD.io Nomad hosts. Many CSIT…
Thanks Andrew. I will fix the issue and get back to you.
> -----Original Message-----
> From: vpp-dev@lists.fd.io On Behalf Of Andrew Yourtchenko via lists.fd.io
> Sent: Wednesday, May 27, 2020 4:51 PM
> To: Govindarajan Mohandoss
> Cc: vpp-dev@lists.fd.io; Lijian Zhang; Jieqiang Wang; Honnappa Nagarahalli…
Thanks Neale. I will fix it and recheck.
> -----Original Message-----
> From: Neale Ranns (nranns)
> Sent: Thursday, May 28, 2020 1:56 AM
> To: Andrew Yourtchenko; Govindarajan Mohandoss
> Cc: vpp-dev@lists.fd.io; Lijian Zhang; Jieqiang Wang; Honnappa Nagarahalli; nd
> Subject: Re: […
The problem is that the aforementioned commit added a call to
vnet_hw_interface_set_flags() in the worker thread. That is a no-can-do. We
are in the process of reverting the commit.
Steven
On 5/29/20, 10:02 AM, "vpp-dev@lists.fd.io on behalf of Elias Rudberg" wrote:
Hello,
We now get this kind of error for the current master branch…
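For context, a sketch of the thread-safe pattern in play here. This is
illustrative only, not the actual bonding fix: set_flags_args_t and
link_state_change() below are made-up names, while
vnet_hw_interface_set_flags(), vlib_get_thread_index() and
vl_api_rpc_call_main_thread() are existing VPP calls.

  #include <vnet/vnet.h>
  #include <vlibmemory/api.h>   /* vl_api_rpc_call_main_thread */

  typedef struct
  {
    u32 hw_if_index;
    u32 flags;
  } set_flags_args_t;           /* made-up argument struct */

  static void
  set_flags_on_main (void *arg)
  {
    set_flags_args_t *a = arg;
    /* Safe here: we are on the main thread, which may take the barrier. */
    vnet_hw_interface_set_flags (vnet_get_main (), a->hw_if_index, a->flags);
  }

  static void
  link_state_change (u32 hw_if_index, u32 flags)  /* made-up caller */
  {
    set_flags_args_t args = { .hw_if_index = hw_if_index, .flags = flags };
    if (vlib_get_thread_index ())                 /* non-zero => worker */
      vl_api_rpc_call_main_thread (set_flags_on_main, (u8 *) &args,
                                   sizeof (args));
    else
      set_flags_on_main (&args);
  }

The RPC copies the arguments into a message, so the work is queued for the
main loop instead of taking the worker barrier from a worker thread.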
Adding csit-dev to the thread...
On 5/29/2020 1:03 PM, Dave Wallace via lists.fd.io wrote:
FYI, I have opened a case with Vexxhost:
https://secure.vexxhost.com/billing/viewticket.php?tid=QDU-864405&c=xgaBi2wP
On 5/29/2020 12:56 PM, Dave Wallace via lists.fd.io wrote:
Folks,
There has been an outage in the Nomad cluster (2 nodes offline) which…
Hello,
We now get this kind of error for the current master branch (5bb3e81e):
vlib_worker_thread_barrier_sync_int: worker thread deadlock
Testing previous commits indicates the problem started with the recent
commit 9121c415 "bonding: adjust link state based on active slaves"
(AuthorDate May 18…)
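For readers unfamiliar with VPP's threading model, a simplified sketch of
the invariant behind that deadlock message; this is not VPP's actual code,
just its shape.

  #include <assert.h>

  static unsigned current_thread_index;  /* stand-in for vlib_get_thread_index() */

  /* Simplified shape of the barrier: main (thread 0) raises it and spins
   * until every worker parks.  A worker taking this path would spin
   * waiting for itself forever; after a timeout VPP reports
   * "vlib_worker_thread_barrier_sync_int: worker thread deadlock". */
  static void
  barrier_sync_like (void)
  {
    assert (current_thread_index == 0);
    /* ... raise barrier, wait for workers, do the protected work ... */
  }

vnet_hw_interface_set_flags() ends up on this path, which is why calling it
from a worker thread hangs.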
Folks,
There has been an outage in the Nomad cluster (2 nodes offline) which is
currently causing VPP jenkins jobs to not execute. I'm working on
getting hold of Vexxhost to get the servers that are down back online.
Apparently Raft doesn't handle losing multiple servers very well :(
Will post updates…
Hello,
The following two fixes were recently merged to the master branch.
Could they please be included in the stable/2005 branch also?
https://gerrit.fd.io/r/c/vpp/+/27280 (misc: ipfix-export unformat u16
collector_port fix)
https://gerrit.fd.io/r/c/vpp/+/27281 (nat: fix regarding vm arg for
vl…
Hi Andreas,
Unfortunately, the proxy sample app has not been tested under heavy load. It’s
just an example test app.
If you have a patch, please do push it to avoid the issue in the future.
Otherwise, I’ll take a look at it.
Regards,
Florin
> On May 29, 2020, at 6:29 AM, Andreas Schultz … wrote:
Hi,
Tracked it down. It's not the session logic, but the connection handling in
the proxy sample hs_app.
The proxy calls vnet_disconnect_session on both sessions in
delete_proxy_session. That function is invoked from both the disconnect and
reset callbacks. Turns out that calling vnet_disconnect_session…
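A sketch of the kind of guard that avoids the double disconnect; the struct
and field names below are hypothetical, not the actual hs_app code.

  /* Hypothetical guard: remember that a proxy session pair has already
   * been torn down, so whichever of the disconnect/reset callbacks fires
   * second returns early instead of disconnecting again. */
  typedef struct
  {
    u64 client_session_handle;
    u64 server_session_handle;
    u8 deleted;                 /* hypothetical "already torn down" flag */
  } proxy_session_pair_t;

  static void
  delete_proxy_session_once (proxy_session_pair_t * ps)
  {
    if (ps->deleted)            /* second callback for the same pair */
      return;
    ps->deleted = 1;
    /* ... this is where the sample app calls vnet_disconnect_session()
     * for both sides, now exactly once per pair ... */
  }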
Hi Andreas,
The session hash table entry is cleaned up by session_delete, unless the
transport has somehow been altered and the lookup fails. Obviously the
transport should still be intact in closed state, hence the delete
notification. Do you see the warning message that the hash cannot be deleted…
Hi,
It seems that the {v4,v6}_session_hash is leaking entries. When creating
large numbers of TCP sessions (incoming and outgoing) we are observing
entries in the session_hash pointing to already deleted sessions.
I have verified with a somewhat hackish check in session_delete that
sometimes a session…
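For reference, the cleanup path under discussion looks roughly like this in
vnet/session of that era; a negative return from
session_lookup_del_session() is what produces the "hash delete error"
warning Florin mentions.

  #include <vnet/session/session.h>

  /* Roughly the shape of session_delete () in vnet/session: if the
   * transport keys were altered before cleanup, the lookup misses, the
   * {v4,v6} session table entry leaks, and the warning below fires. */
  void
  session_delete_like (session_t * s)
  {
    int rv;
    if ((rv = session_lookup_del_session (s)) < 0)
      clib_warning ("hash delete error, rv %d", rv);
    session_free_w_fifos (s);
  }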
I confirm the fix.
Thank you
Hi Klement,
Thanks for your reply and clarification. I found the issue on my end, my bad.
On the dataplane plugin side, the API message handler is:
> vl_api_session_add_t_handler(vl_api_session_add_t *mp)
I was making use of the generated endian functions in order to convert to host
byte order…
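For anyone hitting the same thing, a sketch of the usual handler pattern.
vl_api_session_add is the poster's custom plugin message and the fields
below are made up; the point is that the generated vl_api_*_t_endian
functions flip the whole message in place, so each field gets converted
exactly once, either that way or with the clib_net_to_host_* helpers,
not both.

  /* Made-up fields on the poster's custom message type; mp arrives in
   * network byte order.  Convert once: either call the generated
   * vl_api_session_add_t_endian (mp) and then read fields directly, or
   * convert per field as below -- not both. */
  static void
  vl_api_session_add_t_handler (vl_api_session_add_t * mp)
  {
    u16 port = clib_net_to_host_u16 (mp->port);          /* hypothetical */
    u32 table_id = clib_net_to_host_u32 (mp->table_id);  /* hypothetical */
    /* ... program the dataplane with the host-order values ... */
  }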
When I use VAT to connect to VPP via shm and the vpp process is killed, VAT
can't detect the disconnection and will not automatically quit.
Meanwhile, when I use some command in VAT, I get the result "exec error:
Misc"; then I restart the vpp process and VAT still can't work; when I quit
VAT, it…
But when I use VAT to connect to VPP via socket-name and the vpp process is
killed…