Hi Pragya Nand,
The reachability from 1.1.1.1 to 2.2.2.3 on the host is failing because the
ARP request 1.1.1.1 -> 2.2.2.3 gets dropped in VPP. Normally both the target
and source IPs of an ARP request are expected to be in the same subnet as the
Rx interface. You can try enabling proxy ARP on the Rx interface for the d
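For reference, the proxy ARP knobs we are aware of look roughly like this
(CLI syntax from memory, addresses are examples; please verify against the
help on your build):

  set arp proxy table-ID 0 start 2.2.2.0 end 2.2.2.255
  set interface proxy-arp <rx-interface> enable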
Hi Sastry,
For a VPNv4 session, a labelled IPvx route (for ingress) and an MPLS route
(for egress) need to be set up. Can you check if they are getting
programmed correctly into VPP? The labelled route seems to be OK from the
output you have pasted.
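For reference, the two routes are typically programmed with CLI of this shape
(label values, table IDs and addresses are made-up examples; please verify the
exact syntax on your build):

  Labelled IPvx route (label imposition):
    ip route add 10.1.1.0/24 table 10 via 192.0.2.1 GigabitEthernet0/8/0 out-labels 100
  MPLS route (label disposition into the VRF):
    mpls local-label add eos 100 via ip4-lookup-in-table 10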
Thanks,
Rajith
On Fri, Mar 18, 2022 at 10:58 AM Sas
Hi Sastry,
In our case we resolved the loop issue with BFD by installing the MPLS PHP
route with EOS; earlier we had installed the MPLS PHP route without EOS.
Our case was MPLS PHP with IPv4 forwarding. However, from the config you
shared, your case seems to be MPLS ingress?
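For comparison, the PHP route we ended up installing was of this shape
(values are examples; please check the exact CLI syntax on your build):

  mpls local-label add eos 100 via 192.0.2.1 GigabitEthernet0/8/0

With EOS set and no out-labels, this pops the label and forwards the inner
IPv4 packet.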
Thanks,
Rajith
On Fri,
Hi All,
We are observing a random crash in code that we have added in VPP. The
stack trace indicates an invalid memory access in _*hash_get*(). From the
hash table code we see that the hash table can automatically resize (grow
and shrink) based on the utilization.
So the question is whether we need to take a barri
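To make the question concrete, the pattern we have in mind is the following
(my_table is a hypothetical hash owned by the main thread):

  /* main thread: a hash_set() may trigger an automatic resize of the
   * table, so park the workers around the mutation */
  vlib_worker_thread_barrier_sync (vlib_get_main ());
  hash_set (my_table, key, value);
  vlib_worker_thread_barrier_release (vlib_get_main ());

  /* worker thread: read-only lookup, safe only if every resize above
   * happens with the workers held at the barrier */
  uword *p = hash_get (my_table, key);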
Hi All,
We are exploring VPP's *NAT plugin* for a PE router in an MPLS VPN
deployment. A reference diagram is given below.
[image: NAT-PE.png]
Private IP addresses are assigned to the hosts by the PE routers (NAT-PE and
PE-2). All the hosts in a VPN (Shop or Bank) are assigned unique IP
addresses
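The NAT configuration we are experimenting with on NAT-PE is roughly of this
shape (addresses and interface names are placeholders; please verify the
syntax against the NAT plugin docs):

  nat44 add address 203.0.113.1
  set interface nat44 in memif0/0 out GigabitEthernet0/8/0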
Hi VPP Reviewers,
We have been rebasing our downstream VPP version onto the upstream version
since 2019 quite successfully.
But for various reasons we have not been able to upstream a few fixes
that we have made in our downstream version.
We are submitting the following patches for review. Do
is getting built.
Thanks,
Rajith
On Fri, Feb 4, 2022 at 10:10 PM Dave Wallace wrote:
> Rajith,
>
> What OS are you building on and what VPP branch are you trying to build?
>
> Ubuntu-20.04/master:HEAD works for me.
>
> Thanks,
> -daw-
>
> On 2/4/22 10:29 AM, R
, start_new_session)
File "/usr/lib/python3.6/subprocess.py", line 1364, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory:
'/home/supervisor/libvpp/build-root/install-vpp_debug-native/vp
Hi All,
We are trying to understand the VPP test framework. To get started we ran
an example suite (the ip4 test), but it seems that the dependent
executable (vpp) is missing.
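Our assumption was that the standard targets should produce it first, i.e.
something like:

  make build
  make test TEST=test_ip4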
Please find the logs below.
*sudo make test TEST=test_ip4 vpp-install*
make -C /home/supervisor/libvpp/build-root PLATFORM=vpp
Hi All,
We are facing a random crash while scaling MPLS tunnels (8000 MPLS
tunnels). The crash has been observed multiple times and the call stack is
the same.
During the worker thread crash the main thread has executed the following
lines of code (between the barrier sync and release); please
Hi all,
Just to add to the query: I have observed that the 'in' interface
configuration is optional for NAT to work. All traffic gets NATed if the
'out' interface is set with output-feature.
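That is, with something like the below (interface name is a placeholder;
please double-check the exact syntax on your build):

  set interface nat44 out GigabitEthernet0/8/0 output-feature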
Thanks,
Rajith
On Thu, 13 Jan 2022 at 7:06 AM, alekcejk via lists.fd.io wrote:
> Hi all,
>
> I am trying to get setu
also work, but it would mean a no-op
> restack for the tunnel. Not walking the new child is more efficient.
>
>
>
> /neale
>
>
>
> *From: *vpp-dev@lists.fd.io on behalf of Stanislav
> Zaikin via lists.fd.io
> *Date: *Thursday, 21 October 2021 at 17:58
> *T
Hi All,
We are seeing the below crash when creating MPLS tunnels. The issue is easily
reproducible; we just have to create around 100 MPLS tunnels. It seems
path_ext is NULL.
Thread 1 "vpp_main" received signal SIGSEGV, Segmentation fault.
0x7fe017bba268 in mpls_tunnel_collect_forwarding (p
Hi Stanislav,
My guess is you don't have the commit below.
commit 8341f76fd1cd4351961cd8161cfed2814fc55103
Author: Dave Barach
Date: Wed Jun 3 08:05:15 2020 -0400
fib: add barrier sync, pool/vector expand cases
load_balance_alloc_i(...) is not thread safe when the
load_balance_po
(gdb) source ~/vpp/extras/gdb/gdbinit
> Loading vpp functions...
> Load vl
> Load pe
> Load pifi
> Load node_name_from_index
> Load vnet_buffer_opaque
> Load vnet_buffer_opaque2
> Load bitmap_get
> Done loading vpp functions...
> (gdb) pifi load_balance_pool 16
> pool_is_free_i
,
Rajith
On Tue, Sep 14, 2021 at 5:27 PM Neale Ranns wrote:
>
>
> Hi Rajith,
>
>
>
> Maybe there’s something that still resolves through the tunnel when it’s
> deleted?
>
>
>
> /neale
>
>
>
> *From: *vpp-dev@lists.fd.io on behalf of Rajith PR
>
Hi All,
We recently started using VPP's MPLS tunnel constructs for our L2
cross-connect application. In certain test scenarios we are seeing a crash
in the delete path of the MPLS tunnel.
Any pointers to fix the issue would be really helpful.
Version: *20.09*
Call Stack:
Thread 1 (Thread 0x7
Hi Satya,
We migrated to 20.09 in March 2021. The crash has not been observed after
that. Not sure if some commit that went in between 20.05 and 20.09 fixed
or improved the situation.
Thanks,
Rajith
On Fri, Jul 9, 2021 at 10:19 AM Satya Murthy
wrote:
> Hi Rajith / Dave,
>
> We are on fdio.20
Hi Ben,
The problem seems to be due to external libraries that we have linked with
VPP. These external libraries have not been compiled with ASAN.
I could see that when those external libraries were suppressed through the
MyASAN.supp file, VPP started running with ASAN enabled.
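For anyone hitting the same, the suppression file is just the standard ASAN
suppression format, e.g. (library name is a placeholder):

  # MyASAN.supp
  interceptor_via_lib:libexternal_dep.so

and it is passed at startup via ASAN_OPTIONS=suppressions=MyASAN.supp.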
Thanks,
Rajith
On
i.py.in#1
>
> > -----Original Message-
> > From: vpp-dev@lists.fd.io On Behalf Of Rajith PR
> via
> > lists.fd.io
> > Sent: Tuesday, 25 May 2021 09:51
> > To: vpp-dev
> > Subject: [vpp-dev]: Unable to run VPP with ASAN enabled
> >
> > Hi All,
> >
Hi All,
I am not able to run VPP with ASAN. Though we have been using VPP for
some time, this is the first time we have enabled ASAN in the build.
I have followed the steps mentioned in the sanitizer doc; can someone
please let me know what is missing here.
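For reference, the way we enabled ASAN in the build (per the sanitizer doc,
as we understood it) was:

  make build VPP_EXTRA_CMAKE_ARGS=-DVPP_ENABLE_SANITIZE_ADDR=ON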
*Run Time Error (Missing symbol):*
/usr/loca
Hi All,
We did a VPP version upgrade from 19.08 to 20.09. I am seeing that the
socket state is -1 in 20.09 on one of the devices. When does this happen?
*20.09*
DBGvpp# show threads
ID  Name  Type  LWP  Sched Policy (Priority)  lcore  Core  Socket  State
0 vpp_main
#19 0x7ffaa0c7da0a in vl_msg_api_alloc_internal (nbytes=73, pool=0,
> may_return_null=0)
> at /development/libvpp/src/vlibmemory/memory_shared.c:177
> #20 0x7ffaa0c7db6f in vl_msg_api_alloc_as_if_client (nbytes=57) at
> /development/libvpp/src/vlibmemory/memory_shared.c:
Hi All,
We have hit a VPP worker thread deadlock issue. From the call stacks it
looks like the main thread is waiting for the workers to come back to their
main loop (i.e. it has taken the barrier lock), and one of the two workers
is spinning on a lock to make an RPC to the main thread.
I believe this lock is
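The RPC in question is issued from the worker roughly like this (handler
name and argument struct are ours, shown for illustration):

  /* worker thread: hand the work to the main thread; the enqueue into
   * the RPC queue takes a spinlock, which is where our worker spins */
  vl_api_rpc_call_main_thread (my_rpc_handler, (u8 *) &req, sizeof (req));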
];
>
>
>
>if(*vl_api_queue_cursizes[i])
>
>
>
>
>
> Capture a coredump. It should be obvious why the reference blows up. If
> you can, change your custom signal handler so that the faulting virtual
> address is as obvious as possible.
>
>
>
Hi All,
We are seeing a random crash in *VPP-19.08*. The crash is occurring in
memclnt_queue_callback, and it is in code that we are not using. Any
pointers to fix the crash would be helpful.
*Complete Call Stack:*
Thread 1 (Thread 0x7fe728f43d00 (LWP 189)):
#0 0x7fe728049492 in __GI___wait
Hi All,
We are integrating a *Linux pthread* with a *VPP thread* and are looking
for a *lockless queue/ring buffer implementation* that can be used.
In the VPP infra I could see fifo and ring, but I am not sure if they can
be used for enqueue/dequeue from a pthread that VPP is not aware of.
Do you have an
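To show the shape of what we need, here is a minimal
single-producer/single-consumer ring (a hand-rolled sketch with C11
atomics, not VPP infra):

  #include <stdatomic.h>
  #include <stdint.h>

  #define RING_SZ 1024 /* power of two */

  typedef struct
  {
    _Atomic uint32_t head; /* only written by the consumer */
    _Atomic uint32_t tail; /* only written by the producer */
    void *slots[RING_SZ];
  } spsc_ring_t;

  /* producer side (the Linux pthread) */
  static inline int
  spsc_enqueue (spsc_ring_t * r, void *e)
  {
    uint32_t tail = atomic_load_explicit (&r->tail, memory_order_relaxed);
    uint32_t head = atomic_load_explicit (&r->head, memory_order_acquire);
    if (tail - head == RING_SZ)
      return -1; /* full */
    r->slots[tail & (RING_SZ - 1)] = e;
    atomic_store_explicit (&r->tail, tail + 1, memory_order_release);
    return 0;
  }

  /* consumer side (the VPP thread, e.g. polled from an input node) */
  static inline void *
  spsc_dequeue (spsc_ring_t * r)
  {
    uint32_t head = atomic_load_explicit (&r->head, memory_order_relaxed);
    uint32_t tail = atomic_load_explicit (&r->tail, memory_order_acquire);
    if (head == tail)
      return 0; /* empty */
    void *e = r->slots[head & (RING_SZ - 1)];
    atomic_store_explicit (&r->head, head + 1, memory_order_release);
    return e;
  }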
Thanks,
Rajith
On Wed, Sep 2, 2020 at 8:15 PM Dave Barach (dbarach)
wrote:
> It looks like vpp is crashing while expiring timers from the main thread
> process timer wheel. That’s not been reported before.
>
>
>
> You might want to dust off .../extras/deprecated/vlib/unix/cj.[ch], and
>
the timer which
> has expired.
>
>
>
> If you have > 1 timer per object and you manipulate timer B when timer A
> expires, there’s no guarantee that timer B isn’t already on the expired
> timer list. That’s almost always good for trouble.
>
>
>
> HTH... Dave
>
>
Hi All,
We are facing a crash in VPP's timer wheel infra. Please find the details
below.
Version: *19.08*
Configuration: *2 workers and the main thread.*
Backtraces: thread apply all bt
Thread 1 (Thread 0x7ff41d586d00 (LWP 253)):
---Type to continue, or q to quit---
#0 0x7ff41c696722 in _
y them.
>
> Jerome
>
>
>
>
>
>
>
> *From: * on behalf of "Rajith PR via lists.fd.io"
>
> *Reply-To: *"raj...@rtbrick.com"
> *Date: *Thursday, 30 July 2020 at 08:44
> *To: *vpp-dev
> *Subject: *Re: [vpp-dev]: Trouble shooting low bandwidth of memif in
Looks like the image is not visible. Resending the topology diagram for
reference.
[image: iperf_memif.png]
On Thu, Jul 30, 2020 at 11:44 AM Rajith PR via lists.fd.io wrote:
> Hello Experts,
>
> I am trying to measure the performance of memif interface and getting a
> very l
Hello Experts,
I am trying to measure the performance of a memif interface and am getting
a very low bandwidth (652 Kbytes/sec). I am new to performance tuning and
any help with troubleshooting the issue would be very helpful.
The test topology I am using is as below:
Basically, I have two lxc contain
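The memif interfaces themselves were created with defaults; the
creation-time knobs we know of are along these lines (from the CLI help,
as we recall it; please verify):

  create interface memif id 0 master rx-queues 2 tx-queues 2 ring-size 1024 buffer-size 2048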
nks,
Rajith
On Tue, Jul 7, 2020 at 6:28 PM Rajith PR via lists.fd.io wrote:
> Hi Benoit,
>
> I have all those fixes. I had reported this issue (27407); the others I
> found during my tests, and I added barrier protection in all those places.
> This ASSERT seems to be not due to pool
al Message-
> > From: vpp-dev@lists.fd.io On Behalf Of Rajith PR
> via
> > lists.fd.io
> > Sent: mardi 7 juillet 2020 14:11
> > To: vpp-dev
> > Subject: [vpp-dev]: ASSERT in load_balance_get()
> >
> > Hi All,
> >
> > During our scale testing of
Hi All,
During our scale testing of routes we have hit an ASSERT in
*load_balance_get()*. From the code it looks like the lb_index (148)
referred to has already been returned to the pool by the main thread,
causing the ASSERT in the worker.
The version is *19.08*. We have two workers and a main thread.
Hi All,
We are seeing *ASSERT (vec_len (hw_if0->hw_address) == 6);* being hit in
*arp_mk_reply()*. This is happening on *19.08*.
We have worker threads and a main thread.
As such, hw_if0 appears to be valid (both the pointer and its contents),
*but the length of the vector is 15.*
I have atta
> Please refer to
> https://fd.io/docs/vpp/master/troubleshooting/reportingissues/reportingissues.html#
>
>
>
> *From:* vpp-dev@lists.fd.io *On Behalf Of *Rajith
> PR via lists.fd.io
> *Se
Hi All,
During scale tests with large numbers of routes, we occasionally hit
a strange issue in our container. The *vpp process became unresponsive*;
after attaching the process to gdb we could see the *vpp_main thread is
stuck in a specific function*. Any pointer on debugging such issues would
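For completeness, what we did after attaching was the standard sequence:

  gdb -p $(pidof vpp)
  (gdb) thread apply all bt
  (gdb) p vlib_global_main.main_loop_count

(the last print is our own idea: the assumption being that if
main_loop_count stops advancing between two prints, the main loop is truly
stuck rather than just slow).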
ed as mp_safe, so we could create a fixed-size load balance pool to
> prevent runtime reallocation, but it would waste memory and impose a
> maximum size.
>
> ben
>
> > -Original Message-
> > From: vpp-dev@lists.fd.io On Behalf Of Rajith PR
> > via l
ern –
> copying Neale for an opinion.
>
>
>
> D.
>
>
>
> *From:* vpp-dev@lists.fd.io *On Behalf Of *Rajith
> PR via lists.fd.io
> *Sent:* Tuesday, June 2, 2020 10:00 AM
> *To:* vpp-dev
> *Subject:* [vpp-dev] SEGMENTATION FAULT in load_balance_get()
>
>
Hello All,
In the *19.08 VPP version* we are seeing a crash while accessing the
*load_balance_pool* in the *load_balance_get()* function. This is happening
after *enabling worker threads*.
As such, the FIB programming is happening in the main thread, and in one of
the worker threads we see this crash.
Als
Another solution is to redirect the traffic from the punt node to your
feature node. There you can match on packets of interest and send them to
the interface-output node.
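A skeleton of the node registration I mean (names are illustrative; the
per-packet classification in the node function, not shown, decides between
the two next indices):

  #include <vlib/vlib.h>

  static uword my_punt_redirect_fn (vlib_main_t * vm,
                                    vlib_node_runtime_t * node,
                                    vlib_frame_t * frame);

  VLIB_REGISTER_NODE (my_punt_redirect_node) = {
    .function = my_punt_redirect_fn,
    .name = "my-punt-redirect",
    .vector_size = sizeof (u32),
    .n_next_nodes = 2,
    .next_nodes = {
      [0] = "interface-output", /* packets of interest go out an interface */
      [1] = "error-drop",       /* everything else */
    },
  };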
Thanks,
Rajith
On Sat 9 May, 2020, 3:43 PM Mrityunjay Kumar, wrote:
> which vpp version are you heading? If you are using 19.05 or les
Hello Team,
After moving from the 17.04 to the 19.01 VPP version we are observing a
huge increase in the memory requirement (VIRT, SHR) of the vpp_main
process. Is this expected?
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
*138 root 20 0 19.251g 1.461g 283400 R 14.6 4.7 0:
Hello Team,
During our integration with the VPP stack we have found a couple of
problems in the VNET infra and would like to seek your help in resolving
these:
1. Is there any way to disable a hardware interface (e.g. a memif interface
or a host interface)? vnet_hw_interface_t not vnet_hw_interface_flags
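For (1), the closest we have found so far is clearing the flags, e.g.
(sketch; vnm is the vnet_main_t from vnet_get_main()):

  /* admin-down the corresponding sw interface (0 clears ADMIN_UP) */
  vnet_sw_interface_set_flags (vnm, sw_if_index, 0 /* flags */);
  /* or force the hw link state down (0 clears LINK_UP) */
  vnet_hw_interface_set_flags (vnm, hw_if_index, 0 /* flags */);

but an explicit enable/disable knob is what we are after.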
Hello Team,
During integration of our software with VPP 19.08 we have found that an
IPv6 neighbor does not get discovered on the first sw_if_index on which
IPv6 is enabled.
On further analysis we found that it is due to radv_info->mcast_adj_index
being checked against "0" in the following code:
Func
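In essence the suspect check is the below; our (unverified) suspicion is
that adjacency index 0 is a perfectly valid index for the first interface,
so the sentinel should arguably be ADJ_INDEX_INVALID:

  /* current check: adj index 0 is treated as "no adjacency yet", but
   * 0 is a valid adjacency index for the first sw_if_index */
  if (radv_info->mcast_adj_index == 0)
    { /* ... (re)create the multicast adjacency ... */ }

  /* what we would have expected (ADJ_INDEX_INVALID is ~0): */
  if (radv_info->mcast_adj_index == ADJ_INDEX_INVALID)
    { /* ... (re)create the multicast adjacency ... */ }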