Hi Chetan,
Your packet trace shows that the packet data is all zeros, which is why you are
running into the l3 mac mismatch error.
My guess is that something interfered with the IOMMU so that DMA translation is
not happening, although the packet length is correct.
You can try the AVF plugin to narrow down where the problem lies.
Hi,
One more finding related to Intel NICs and the number of buffers (537600):

vpp branch    driver           card          buffers   Traffic       Err
stable/1908   uio_pci_generic  X722 (10G)    537600    Working
stable/1908   vfio-pci         XL710 (40G)   537600    Not working   l3 mac mismatch
stable/2001   uio_pci_generic  X722 (10G)    537600    Working
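For reference, the buffer count used in the table is normally configured in
/etc/vpp/startup.conf; a minimal sketch using the standard buffers section (the
value is the one from the table above):

```
buffers {
  ## total packet buffers allocated per NUMA node
  buffers-per-numa 537600
}
```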
Hi,
We have two interfaces (ens8 and ens9). We added the PCI addresses of both
interfaces to /etc/vpp/startup.conf and brought the interfaces down. On
restarting VPP, we can see VPP interfaces corresponding to both
(GigabitEthernet0/8/0 and GigabitEthernet0/9/0). We now w
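Such bindings are usually made in the dpdk section of startup.conf; a sketch
(the PCI addresses below are illustrative examples inferred from the interface
names, not taken from the original message):

```
dpdk {
  ## bind both NICs to VPP by PCI address (example addresses)
  dev 0000:00:08.0
  dev 0000:00:09.0
}
```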
After picking up the patches related to ip4_mtrie.c, we are no longer seeing
the issue with /8 routes and the default gateway.
Thanks a lot!
Chetan
On Tue, Feb 4, 2020 at 10:55 AM chetan bhasin
wrote:
> Thanks Neale for response. I will take a look.
>
> On Mon, Feb 3, 2020 at 2:58 PM Neale Ranns (nra
Hi Ben,
Thanks for your answer.
Now I think I have found the problem: it looks like a bug in
plugins/rdma/input.c related to what happens when the list of input
packets wraps around to the beginning of the ring buffer.
To fix it, the following change is needed:
diff --git a/src/plugins/rdma/input.c b/src
Hi Satya,
Why are you commenting out the group (gid) configuration? The complaint is that
vcl cannot connect to vpp’s binary api, so that may be part of the problem, if
the user running vcl cannot read the binary api shared memory segment.
You could also try connecting using private connectio
Pcap trace support lives in two places these days: ethernet-input, and
interface_output. Unless something is wrong, it should work on any interface
type.
See
https://fd.io/docs/vpp/master/gettingstarted/developers/vnet.html#pcap-rx-tx-and-drop-tracing
N-tuple trace classification works as Be
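As an illustration, capturing rx/tx pcap traces from the VPP debug CLI
typically looks like the following sketch (the exact option syntax varies
between VPP versions; see the linked documentation for the version in use):

```
vpp# pcap trace rx tx max 1000 intfc any file vppcapture.pcap
... run traffic ...
vpp# pcap trace off
```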
Hi Elias,
As the problem only arises with the VPP rdma driver and not the DPDK driver,
it is fair to say it is a VPP rdma driver issue.
I'll try to reproduce the issue on my setup and keep you posted.
In the meantime, I do not see a big issue with increasing the rx-queue-size to
mitigate it.
ben
> -O
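For reference, the rdma queue size is set when the interface is created; a
sketch of the CLI (the host interface name is an example, and the option
spelling may differ between VPP versions):

```
vpp# create interface rdma host-if enp101s0f0 name rdma-0 rx-queue-size 4096
```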
Hi Chris,
> Does a more recent version of VPP rely on
> either DPDK 19.08 or DPDK 19.11?
VPP 20.01 has just been released and uses DPDK 19.08.
> Does anyone have ideas on how I could use VPP, but capture packets at the
> DPDK layer (on Azure)?
I do not think we support that today, however you c
Hi,
We are seeing the following error when we try to connect to VPP via the VCL test client.
Is this a known issue?
The startup file that we are using for VPP:
unix {
  nodaemon
  log /tmp/vpp.log
  full-coredump
  cli-listen /run/vpp/cli.sock
  # gid vpp
}
#api-segment {
# gid v
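Per the reply elsewhere in this thread, the commented-out group setting is a
likely culprit; re-enabled, the section would look like this sketch (the group
name is taken from the commented "# gid vpp" line above):

```
api-segment {
  ## let members of group "vpp" attach to the binary API segment
  gid vpp
}
```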
I want to create an IPsec tunnel where the tunnel src IP interface is associated
with the default router VR0, while the IPsec port and the LAN interface are
associated with VR4.
For this purpose, I am trying to use the "tx_table_id - the FIB id used after
packet encap" argument of the vl_api_ipsec_tunnel_if_add
Hi vpp-dev,
I'm working on getting strongSwan IKE working with VPP. I've found some code on
GitHub that was doing this as a PoC for vpp 18.10. The code uses vl_msg_*
calls and handles various duties to manage this API connection.
I also see there is this vapi code in the current VPP. Would it b
Coverity run failed today.
Current number of outstanding issues is 1
Newly detected: 0
Eliminated: 1
More details can be found at
https://scan.coverity.com/projects/fd-io-vpp/view_defects
Hi Nitin,
https://github.com/FDio/vpp/commits/stable/2001/src/vlib
As per the stable/2001 branch, the given change was checked in around Oct 28,
2019.
Is df0191ead2cf39611714b6603cdc5bdddc445b57 the commit immediately preceding
b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897?
Yes (branch vpp 20.01)
Thanks,
Chetan Bhasin
Hi,
Would it be appropriate to open a bug against 19.08 for this?
Thanks,
Chris.
> On Feb 16, 2020, at 5:57 PM, carlito nueno wrote:
>
> Hi Damjan,
>
> Sorry for the late reply. I tested it on v20.01 and this is now working.
>
> Thanks!
>
> On Fri, Sep 20, 2019 at 2:07 PM Damjan Marion wro
>> I am guessing DPDK-19.08 is being used here with VPP-19.08
Typo: dpdk-19.05, not dpdk-19.08.
> -Original Message-
> From: vpp-dev@lists.fd.io On Behalf Of Nitin Saxena
> Sent: Monday, February 17, 2020 5:34 PM
> To: Damjan Marion
> Cc: chetan bhasin ; vpp-dev@lists.fd.io
> Subject:
Hi Damjan,
>> if you read Chetan’s email below, you will see that this one is already
>> excluded…
Sorry, I missed that part. After looking at the diffs between stable/1908 and
stable/2001, commit b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897 is the only
visible git commit in the dpdk plugin which is playing wi
Thanks Neale,
In this scenario:
h1 -- n1 -- n2 -- h2
n1, n2: vpp routers
h1, h2: hosts
1- when we remove the gre tunnel [or other virtual interfaces]
2- then add a new sub-interface [for connecting n1 to n2]
we can't ping from h1 to h2.
On Monday, February 17, 2020, 02:05:47 PM GMT+3:30
Thanks Damjan and Nikhil for your time.
I also found the logs below via dmesg (Intel X710/XL710):
[root@bfs-dl360g10-25 vpp]# uname -a
Linux bfs-dl360g10-25 3.10.0-957.5.1.el7.x86_64 #1 SMP Wed Dec 19 10:46:58
EST 2018 x86_64 x86_64 x86_64 GNU/Linux
[root@bfs-dl360g10-25 vpp]# uname -r
3.10.0-957.5.1.
Please be more specific about what ‘doesn’t work’.
Your script on n1 does:
#add gre tunnel
create gre tunnel src 200.1.2.1 dst 200.1.2.2
set interface state gre0 up
set interface ip address gre0 10.10.10.11/32
ip route add 2.1.1.0/24 via gre0
#del gre tunnel
set interface state gre
Dear Nitin,
If you read Chetan’s email below, you will see that this one is already
excluded…
Also, it will not be easy to explain how this patch would break the tx function
in the dpdk mlx5 pmd…
—
Damjan
> On 17 Feb 2020, at 11:12, Nitin Saxena wrote:
>
> Hi Prashant/Chetan,
>
> I would try following
Hi Prashant/Chetan,
I would try the following change first to solve the problem in 1908:
commit b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897b
Author: Damjan Marion
Date: Tue Mar 12 18:14:15 2019 +0100
vlib: don't use vector for keeping buffer indices in
Type: refactor
Change-Id: I72221b97
> On 17 Feb 2020, at 07:37, chetan bhasin wrote:
>
> Bottom line is that stable/1908 does not work with a higher number of buffers
> but stable/2001 does. Could you please advise which area we can look at, as it
> would be difficult for us to move to 2001 at this time.
I really don’t have