Hi guys,
Why does this assert appear on version 17.07? Is there anything wrong with my
configuration?
DBGvpp# mpls local-label add eos 33 ip4-lookup-in-table 1
DBGvpp# mpls local-label add non-eos 34 mpls-lookup-in-table 0
DBGvpp# mpls tunnel add via 2.1.1.1 host-eth1 out-label 1023
DBGvpp# set int s
Hi guys,
How can I configure label pop on the egress node?
This is my configuration in VPP, and the label was not popped:
set interface mpls host-eth1 enable
set interface mpls host-eth0 enable
mpls local-label 33 ip4-lookup-in-table 0
mpls local-label 33 via 2.1.1.2 host-eth0
mpls loc
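For pop-and-forward on the egress (ultimate-hop) node, the usual approach is to bind the end-of-stack (eos) label to an IPv4 lookup, which pops the label and forwards on the inner IP header. A minimal sketch, assuming host-eth1 receives the labelled traffic and the inner destinations are reachable in table 0 (the 10.0.0.0/24 route and next hop are illustrative):

```
set interface mpls host-eth1 enable
mpls local-label add eos 33 ip4-lookup-in-table 0
ip route add 10.0.0.0/24 via 2.1.1.2 host-eth0
```

Note that binding the same label twice (once to a lookup and once to a via path, as in the configuration above) gives the label two conflicting actions; pick one.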
Hi guys ,
Use case that I am trying to achieve is :
An IP packet from a given source IP should be forwarded to a certain node.
Looking at different mail threads on the vpp-dev list, I thought the
classifier could be used for this.
Hence I tried the configuration below:
My setup:
[ASCII topology diagram garbled in the archive]
If it's a private vrf label, you can try like this:
mpls local-label add eos 123 ip4-lookup-in-table 20
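Extending that one step, a hedged sketch of how the VRF-bound label might be used together with a route in that table (the 10.1.1.0/24 prefix and 2.1.1.2 next hop are illustrative, and table 20 is assumed to already exist):

```
mpls local-label add eos 123 ip4-lookup-in-table 20
ip route add 10.1.1.0/24 table 20 via 2.1.1.2 host-eth0
```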
-- Original --
From: "薛欣颖";
Date: Tue, May 23, 2017 11:43 AM
To: "vpp-dev";
Subject: [vpp-dev] MPLS LABEL
Hi guys,
I can configure mpls out-label with "ip route add 1.1.1.1/24 via 1.1.2.1
GigE0/0/0/0 out-label 123".
How can I configure mpls local-label?
Thanks,
xyxue
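For reference, the local-label CLI forms used elsewhere in this thread are sketched below; the label values and table ids are illustrative:

```
mpls local-label add eos 33 ip4-lookup-in-table 0
mpls local-label add non-eos 34 mpls-lookup-in-table 0
```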
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-d
Well, I get several failures on Fedora 25. I don't know if this is a valid
choice or not. I did "make wipe-release" followed by "make test-all".
Burt
==
ERROR: MPLS eBGP PIC edge convergence
I am trying to add some test code to VPP. It works well in single-thread
mode, but I am confused about how to use it in multi-thread mode with worker
threads. I will use pool_get in a worker thread (different workers will use
different pools) and pool_put in the main thread.
When using pool_get to get
Hi folks,
There are pool_get and pool_put macros based on the vec structure, and quite
a bit of code relies on them. However, I have a question about the thread
safety of these two APIs; I did not find any comments on it.
Since pool_get may resize the vec, is there any issue if another worker
thr
I tried running locally on Fedora 25 (my CentOS box was my T400 laptop, which
is too old to run the tests because it is missing SSE4.2).
I believe that typically "make test" should not depend upon installed
code. But I had to uninstall an old vpp-plugins package. It was obvious
because they were cal
On 05/22/2017 08:28 AM, Ed Warnicke wrote:
Do you have any insight on this?
Ed
I am aware of it but I don't have any insight at the moment.
I am setting up an environment where I can run csit locally to do some
local verification.
Also, I am searching the Centos and RHEL lists for known py
I just disabled “make test” on Centos. We cannot continue like this. We can
easily put it back after problem is fixed.
On 22 May 2017, at 14:28, Ed Warnicke <hagb...@gmail.com> wrote:
Do you have any insight on this?
Ed
On Mon, May 22, 2017 at 1:29 AM, Klement Sekera -X (ksekera - P
Need help with the DPDK crypto plugin,
After I moved to the DPDK crypto plugin and built it with:
*make vpp_uses_dpdk_cryptodev_sw=yes build*
on running VPP, I hit the following error:
*load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control Lists)*
*load_one_plugin:142:
/vpp/buil
Do you have any insight on this?
Ed
On Mon, May 22, 2017 at 1:29 AM, Klement Sekera -X (ksekera - PANTHEON
TECHNOLOGIES at Cisco) wrote:
>
> Hi,
>
> the centos python crash is known, but we're unsure about the root cause.
> Building newer python from source on centos vm makes the crashes go awa
Thanks Neale! Let me check the mfib code...
Thanks,
-nagp
On Mon, May 22, 2017 at 1:57 PM, Neale Ranns (nranns)
wrote:
> Hi Nagp,
>
> I’d recommend option 1, since the mechanisms to achieve it are already in
> place.
>
> To add a path to receive traffic for a multicast FIB entry is
Hi all,
Is there any plan to support ALG?
Regards,
Ewan
Hi,
the centos python crash is known, but we're unsure about the root cause.
Building newer python from source on centos vm makes the crashes go away
so we're assuming that the (older) python itself might be the culprit,
since we haven't seen these on ubuntu at all.
Regarding the second crash -
Hi Nagp,
I’d recommend option 1, since the mechanisms to achieve it are already in place.
To add a path to receive traffic for a multicast FIB entry is much the same as
for unicast (i.e. see ip6_create_mfib_with_table_id() where we add the special
IPv6 ND entries). The additional configuration
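As a hedged illustration of option 1 (receiving traffic for a multicast FIB entry), the mroute CLI allows adding an accepting path on the RPF interface and a local/for-us forwarding path; the group address and interface name below are assumptions, and the Accept/Forward flag names follow the VPP mroute CLI:

```
ip mroute add 232.1.1.1 via host-eth0 Accept
ip mroute add 232.1.1.1 via local Forward
```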
Hi folks,
Not sure if it's just me, but some CI tests have suddenly started failing
for me. Is it just me, or a wider problem?
Ray K
CENTOS
https://jenkins.fd.io/job/vpp-verify-master-centos7/5568/
19:52:10 IP Multicast Signa
bash: line 1: 21723 Segmentation fault (core dumped) python run_test
Hi all,
Why is the call stack so deep on version 17.04? Is that OK?
(gdb) bt
#0  0x7fac14a34a37 in __GI_epoll_pwait (epfd=3, events=0x7fabd71f27d8,
    maxevents=256, timeout=timeout@entry=0, set=set@entry=0x7fac16e8c180)
    at ../sysdeps/unix/sysv/linux/epoll_pwait.c:42
#1  0x7fac16c71af8
Hi,
Thanks.
Regards,
Ewan
yug...@telincn.com
From: Matus Fabian -X (matfabia - PANTHEON TECHNOLOGIES at Cisco)
Date: 2017-05-22 12:55
To: yug...@telincn.com; otroan
CC: vpp-dev
Subject: RE: [vpp-dev] five tuple nat
Hi,
You are probably using an older VPP version; ICMP support for deterministic NAT w
I was wondering if any of the following options would work:
* Have a dpo-receive assigned to a multicast address and use the ip
protocol type to punt packets to a custom node in graph
* Have a custom dpo type created and redirect packets directly from ip
lookup with a "fib_table_entry_special_dpo_add"