Hi Neale,
I did some deeper investigation of the VRRP issue. What I observed is as
follows:
On one node (node1) the VRRP config is:
set interface state Ext-0 up
set interface ip address Ext-0 192.168.61.52/25
vrrp vr add Ext-0 vr_id 61 priority 200 no_preempt accept_mode 192.168.61.50
On the other no
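A minimal sketch of what the matching backup configuration on the peer node could look like (the peer address and priority here are assumptions for illustration; only vr_id 61 and the virtual address 192.168.61.50 need to match node1):
set interface state Ext-0 up
set interface ip address Ext-0 192.168.61.53/25
vrrp vr add Ext-0 vr_id 61 priority 100 no_preempt accept_mode 192.168.61.50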
On Thu, Jul 1, 2021 at 10:07 AM Damjan Marion wrote:
>
>
> > On 01.07.2021., at 16:12, Matthew Smith wrote:
> >
> >
> >
> > On Thu, Jul 1, 2021 at 6:36 AM Damjan Marion wrote:
> >
> >
> > > On 01.07.2021., at 11:12, Benoit Ganne (bganne) via lists.fd.io
> wrote:
> > >
> > >> Yes, allowing dyna
[Edited Message Follows]
Subject:
#vpp #dpdk
Flow API questions/Hierarchical Queuing feature support
Hi,
I have a question about Flow API.
test flow [add|del|enable|disable] [index ]
    [src-ip ] [dst-ip ]
    [ip6-src-ip ] [ip6-dst-ip ]
    [src-port ] [dst-port ]
    [
> On 01.07.2021., at 16:12, Matthew Smith wrote:
>
>
>
> On Thu, Jul 1, 2021 at 6:36 AM Damjan Marion wrote:
>
>
> > On 01.07.2021., at 11:12, Benoit Ganne (bganne) via lists.fd.io
> > wrote:
> >
> >> Yes, allowing dynamic heap growth sounds like it could be better.
> >> Alternatively..
Hi,
Thanks for all the replies.
We were trying to run RA suppression on VPP version 21.01, but are still facing
the same issue. Earlier in this thread there were discussions about
rewriting the IPv6 RA code.
Do we have a fix for this issue now in the latest release (21.06) or in any
devel
On Thu, Jul 1, 2021 at 6:36 AM Damjan Marion wrote:
>
>
> > On 01.07.2021., at 11:12, Benoit Ganne (bganne) via lists.fd.io wrote:
> >
> >> Yes, allowing dynamic heap growth sounds like it could be better.
> >> Alternatively... if memory allocations could fail and somethin
Hi all,
I still haven’t had any success. This is the configuration I tried:
set interface state NCIC-1-v1 up
create host-interface name Vpp2Host
set interface state host-Vpp2Host up
ip table add 4093
create sub-interfaces host-Vpp2Host 4093
set interface state host-Vpp2Host.4093 up
set interface ip
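For comparison, the usual pattern for binding such a sub-interface into a non-default FIB table is roughly the following sketch (the table must be assigned before the address; the address is the one mentioned elsewhere in this thread for host-Vpp2Host.4093, and the /29 prefix length is assumed):
set interface ip table host-Vpp2Host.4093 4093
set interface ip address host-Vpp2Host.4093 198.19.255.249/29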
We are using CentOS 8.4.2105 and gcc version 8.4.1.
Also, we have downloaded and installed the Mellanox driver from the link below:
https://www.mellanox.com/products/infiniband-drivers/linux/mlnx_ofed
Driver version: 5.3-1.0.0.1
Thanks and Regards,
Chinmaya Agarwal.
> On 01.07.2021., at 07:35, Pierre Louis Aublin
> wrote:
>
> diff --git a/build/external/packages/ipsec-mb.mk
> b/build/external/packages/ipsec-mb.mk
> index d0bd2af19..119eb5219 100644
> --- a/build/external/packages/ipsec-mb.mk
> +++ b/build/external/packages/ipsec-mb.mk
> @@ -34,7 +34,7 @@
Might be worth trying our native driver (rdma) instead of using dpdk…..
—
Damjan
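With the rdma plugin the interface is created explicitly from the CLI instead of being bound through DPDK; a minimal sketch (the kernel interface name here is an assumption):
create interface rdma host-if enp3s0f0 name rdma-0
set interface state rdma-0 up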
> On 01.07.2021., at 11:07, Pierre Louis Aublin
> wrote:
>
> The"Unsupported PCI device 0x15b3:0xa2d6 found at PCI address :03:00.0"
> message disappears; however the network interface still doesn't show u
> On 01.07.2021., at 11:12, Benoit Ganne (bganne) via lists.fd.io
> wrote:
>
>> Yes, allowing dynamic heap growth sounds like it could be better.
>> Alternatively... if memory allocations could fail and something more
>> graceful than VPP exiting could occur, that may also be better. E.g. if
>
From: Benoit Ganne (bganne)
Date: Thursday, 1 July 2021 at 11:35
To: Neale Ranns , Mechthild Buescher
, vpp-dev@lists.fd.io
Subject: RE: [vpp-dev] next-hop-table between two FIB tables results in punt
and 'unknown ip protocol'
>> As 198.19.255.249 is the IP of host-Vpp2Host.4093, VPP interpr
What does your VPP dpdk config look like in startup.conf? In particular, did you whitelist
the device? See
https://fd.io/docs/vpp/master/gettingstarted/users/configuring/startup.html#the-dpdk-section
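For example, a minimal dpdk section that whitelists the device would look roughly like this (the PCI address is taken from the error message earlier in the thread; the 0000 domain prefix is assumed):
dpdk {
  # allow only this device to be bound by DPDK
  dev 0000:03:00.0
}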
Also, please share the output of 'show logs'.
Best
ben
> -----Original Message-----
> From: Pierre Louis A
>> As 198.19.255.249 is the IP of host-Vpp2Host.4093, VPP interprets it as
>> meaning you want to deliver the packet locally instead of forwarding it. Try
>> changing it to:
>> ip route add 198.19.255.248/29 table 1 via 0.0.0.0 next-hop-table 4093
> 0.0.0.0/32 in any table is a drop. One cannot specify a
> Yes, allowing dynamic heap growth sounds like it could be better.
> Alternatively... if memory allocations could fail and something more
> graceful than VPP exiting could occur, that may also be better. E.g. if
> I'm adding a route and try to allocate a counter for it and that fails, it
> would b
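For context, the main heap is currently sized once at startup, e.g. via startup.conf, and does not grow afterwards (2G is just an illustrative value):
memory {
  # main heap size, allocated at startup; not grown dynamically today
  main-heap-size 2G
}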
The"Unsupported PCI device 0x15b3:0xa2d6 found at PCI address
:03:00.0" message disappears; however the network interface still
doesn't show up. Interestingly, vpp on the host also prints this
message, yet the interface can be used.
By any chance, would you have any clue on what I could tr
From: vpp-dev@lists.fd.io on behalf of Benoit Ganne
(bganne) via lists.fd.io
Date: Thursday, 1 July 2021 at 10:38
To: Mechthild Buescher , vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] next-hop-table between two FIB tables results in punt
and 'unknown ip protocol'
I think the issue is the way
Please try https://gerrit.fd.io/r/c/vpp/+/32965 and report back if it works.
Best
ben
> -----Original Message-----
> From: vpp-dev@lists.fd.io On Behalf Of Pierre Louis
> Aublin
> Sent: jeudi 1 juillet 2021 07:36
> To: vpp-dev@lists.fd.io
> Subject: [vpp-dev] VPP on a Bluefield-2 smartNIC
>
> Dear
Yes, when enabling the MLX5 driver in DPDK with VPP, we use our own rdma-core build.
It looks like something broke when it was built with your toolchain.
Can you share your environment (distro, compiler)?
Best
ben
> -----Original Message-----
> From: vpp-dev@lists.fd.io On Behalf Of Chinmaya
> Aggarwal
> Sent: j
Hi All,
Vpp1 (docker)
---
vpp# show interface
Name                   Idx   State   MTU (L3/IP4/IP6/MPLS)   Counter   Count
GigabitEthernet0/4/0 1 down 9000/0/0/0
gtpu_tunnel0 4 up 0/0/0/0
host-vpp2out
I think the issue is the way you populate the route:
ip route add 198.19.255.248/29 table 1 via 198.19.255.249 next-hop-table 4093
As 198.19.255.249 is the IP of host-Vpp2Host.4093, VPP interprets it as meaning you
want to deliver the packet locally instead of forwarding it. Try changing it to:
ip route
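The full suggested replacement, as quoted elsewhere in this thread:
ip route add 198.19.255.248/29 table 1 via 0.0.0.0 next-hop-table 4093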
Hi,
We were able to find a workaround for the NASM installation issue. We deleted
nasm-2.14.02.tar.xz from /opt/vpp/build/external/downloads/ and executed "make
install-ext-deps" again. But this time we see another issue:
[722/1977] Generating rte_bus_vdev_def with a custom command
[723/1977]
Hi Ben,
Sorry, I sent the wrong output for the fib table - with the 'next-hop-table'
configuration it looks as follows:
ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto flowlabel ]
epoch:0 flags:none locks:[default-route:1, ]
0.0.0.0/0 fib:0 index:0 locks:2
default-route refs:1 en