Hi Klement,
Thanks for the patch file.
The fix works.
Now I can see that once the BFD session goes DOWN due to the inactivity timer,
the remote discriminator is set to 0 in the BFD control packet.
On Fri, Jan 17, 2020 at 3:37 PM Klement Sekera -X (ksekera - PANTHEON TECH
SRO at Cisco) wrote:
> Hi,
>
> thank you
Hi,
I am using fd.io release 18.10.
I observed that when I try to configure a route with more than 2 paths, the
show ip fib output displays many duplicate entries.
This is what I am trying; I have 3 interfaces, as below:
vpp# show interface address
VirtualFunctionEthernet0/6/0 (up):
VirtualFunctionE
s.fd.io&q=subject:%22%5C%5Bvpp%5C-dev%5C%5D+multipath+dpo+buckets+is+wrong.%22&o=newest&f=1
>
> /neale
> *From: * on behalf of sontu mazumdar <
> sont...@gmail.com>
> *Date: *Friday 27 March 2020 at 15:47
> *To: *"vpp-dev@
Hi,
I am seeing a VPP crash during IPv6 address delete; below is the backtrace:
Thread 1 "vpp_main" received signal SIGSEGV, Segmentation fault.
fib_entry_delegate_get (fib_entry=fib_entry@entry=0x80214a9e9af4,
type=type@entry=FIB_ENTRY_DELEGATE_COVERED)
at
/usr/src/debug/vpp-18.10-35~g7002cae21
Hi,
I am using fd.io release 20.05.
Here we are trying to configure a loopback interface via VAPI.
But in our testing we see that VPP crashes; the crash is very hard to
reproduce and has been seen only 2-3 times so far.
Below is the backtrace:
#0 0x7f09134041a2 in hash_memory64 (state=&lt;optimized out&gt;,
n_bytes=&lt;optimized out&gt;, p=&lt;optimized out&gt;)
Hi,
We are very inconsistently seeing a VPP crash while trying to access the
node_by_name hash; it looks like node_by_name is getting corrupted.
The backtrace looks like below:
* Frame 01: /lib64/libvppinfra.so.20.05.1(hash_memory+0x34)
[0x7f1dae34c8a4]
* Frame 02: /lib64/libvppinfra.so.20.05.1
Hi,
I observe that the node_by_name hash stores a node with name
"unix-cli-local:0" and node index 720 (I am not sure of this node's purpose).
The node name is stored as the key in the node_by_name hash.
But later, when I print each entry of the node_by_name hash, I see
the key of node i.e th
> vec_free (old_name);
> vlib_node_set_state (vm, n->index, VLIB_NODE_STATE_POLLING);
> -
> _vec_len (cm->unused_cli_process_node_indices) = l - 1;
> }
> else
>
>
>
> *From:* vpp-dev@lists.fd.io *On Behalf Of *sontu
> mazumdar
> *Sen
Thanks for the patch Dave.
With this patch I no longer see the node hash key corruption issue in the
node_by_name hash.
Regards,
Sontu
On Sat, 12 Jun, 2021, 11:09 AM sontu mazumdar, wrote:
> Thanks Dave.
> I will try with your suggested code changes and will share the result.
>
> Rega
Hi Dave,
I observed this hash node key corruption while debugging a VPP crash caused
by access to the node_by_name hash, which appears to be corrupted.
So I thought the unix-cli-local node might be the root cause, but even after
the patch we saw the crash again. The backtrace looks like below.
* Frame 00: /li
Yes Dave, we are initiating a loopback delete from our application. But I don't
find anything suspicious in this code path; my guess is that maybe the hash was
already corrupted.
Regarding my question about the thread barrier lock, I was referring to the
current patch in unix_cli_file_add() where we are mod