Hi All,
I am trying out VXLAN tunnels in VPP.
Following is my configuration:
VM1 (vxlan-device1) --- vxlan-tun1 (vni 10) --- VPP --- vxlan-tun2 (vni 20) --- VM2
I followed one of the fd.io YouTube demos; it is very simple, but it does not
work for me.
- Restart vpp
- vppctl set int state GigabitEthernet13/0/0 up
- vppctl set int ip address GigabitEthernet13/0/0 192.168.50.166/24
- # vppctl show int addr
- GigabitEthernet
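For anyone reproducing this, a minimal sketch of how the two tunnels might be
created (the peer IPs below are illustrative assumptions, not from the
original post):

vppctl create vxlan tunnel src 192.168.50.166 dst 192.168.50.10 vni 10
vppctl create vxlan tunnel src 192.168.50.166 dst 192.168.50.20 vni 20
vppctl set int state vxlan_tunnel0 up   # tunnel interfaces are created down
vppctl set int state vxlan_tunnel1 up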
Feel free to ignore this. For some reason I only got Chris’ email after sending
this one...
Florin
> On Nov 6, 2017, at 10:09 AM, Florin Coras wrote:
>
> Hi Vanessa,
>
> You can find a full log here[1] and what I suspect to be the culprit lower.
> Any idea why this is happening?
>
> Thanks
Hi Vanessa,
You can find a full log here[1] and what I suspect to be the culprit lower. Any
idea why this is happening?
Thanks,
Florin
13:29:49 [ERROR] Failed to execute goal
org.apache.maven.plugins:maven-deploy-plugin:2.7:deploy-file (default-cli) on
project standalone-pom: Failed to deplo
Yes, it is a weird one. I am sure it has happened to me at least once, and
I gave up trying to analyze it. We can say that there is only one instance
of PKG_CHECK_MODULES in src/configure.ac. And often these macros like to
see plenty of '[' and ']' characters, and those are not present (you can
com
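For illustration, a properly quoted PKG_CHECK_MODULES call looks like this
(the module name and version here are hypothetical, not taken from
src/configure.ac):

PKG_CHECK_MODULES([FOO], [libfoo >= 1.0],
  [AC_MSG_NOTICE([libfoo found])],
  [AC_MSG_WARN([libfoo not found])])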
This looks like we are hitting the limit on storage at packagecloud.
I've manually trimmed some old packages, but we really, really need to get
the script going that autotrims for us.
Vanessa,
How are we doing on that?
Ed
On Mon, Nov 6, 2017 at 9:17 AM Luke, Chris wrote:
> The post-merge jobs
The post-merge jobs are failing with errors like this:
16:04:05 [ERROR] Failed to execute goal
org.apache.maven.plugins:maven-deploy-plugin:2.7:deploy-file (default-cli) on
project standalone-pom: Failed to deploy artifacts: Could not transfer artifact
io.fd.vpp:vpp-dpdk-dev:deb:deb:17.08-vpp2
On Mon, Nov 6, 2017 at 4:09 PM, Florin Coras wrote:
> Hi Lori,
>
> backup
> git clean -fdx
> make bootstrap
> make build
>
> Any luck?
>
Yes, and that's weird... the fresh git clone should be the same as the
result of `git clean -dfx`. The only reasonable explanation I have is that
I tried the b
Hi Lori,
backup
git clean -fdx
make bootstrap
make build
Any luck?
Florin
> On Nov 6, 2017, at 4:17 AM, Lori Jakab wrote:
>
> Hi,
>
> I see the exact same issue that is reported in a previous thread [0], which
> didn't have a solution. I wonder if anyone has more insight now.
>
> I did n
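For reference, the suggested clean rebuild as one annotated sequence
(assuming a vpp checkout; `make bootstrap` applies to VPP trees of this
vintage):

cd vpp
git clean -fdx   # force-remove untracked files (-f), directories (-d), and ignored files (-x)
make bootstrap   # set up the build environment
make build       # debug build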
OK, now I provisioned 4 RX queues for 4 worker threads, and yes, all workers
are processing traffic, but the lookup rate has dropped; I am getting fewer
packets than with 2 workers.
I tried configuring 4 TX queues as well, but the problem remains (fewer
packets received than with 2 workers).
Thanks,
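For context, a hedged sketch of a startup.conf fragment matching this setup,
with one RX queue per worker (the PCI address and core numbers are
illustrative assumptions):

cpu {
  main-core 0
  corelist-workers 1-4
}
dpdk {
  dev 0000:13:00.0 {
    num-rx-queues 4
    num-tx-queues 4
  }
}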
Just 1; let me change it to 2, maybe 3, and get back to you.
Thanks,
Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email : pxv3...@rit.edu
ph : 585 764 4662
On Mon, Nov 6, 2017 at 7:48 AM, Dave Barach (dbarach)
wrote:
> How many RX queues did you provision? One per wo
How many RX queues did you provision? One per worker, or no supper...
Thanks… Dave
From: Pragash Vijayaragavan [mailto:pxv3...@rit.edu]
Sent: Monday, November 6, 2017 7:36 AM
To: Dave Barach (dbarach)
Cc: vpp-dev@lists.fd.io; John Marshall (jwm) ; Neale Ranns
(nranns) ; Minseok Kwon
Subject: R
Hi Dave,
As per your suggestion, I tried sending different traffic, and I could see
that 1 worker acts per port (hardware NIC).
Is it true that multiple workers cannot work on the same port at the same time?
Thanks,
Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email
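One way to inspect and adjust the queue-to-worker mapping, in recent VPP
releases (command availability varies by version; the interface name is an
assumption):

vppctl show interface rx-placement
vppctl set interface rx-placement GigabitEthernet13/0/0 queue 1 worker 2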
Hi,
I see the exact same issue that is reported in a previous thread [0], which
didn't have a solution. I wonder if anyone has more insight now.
I did not use Vagrant to set up a VM with the build environment. On an
Ubuntu 16.04 VM I clone the latest VPP master, do a `make install-dep` and then
a `ma
Thanks Dave,
let me try it out real quick and get back to you.
Thanks,
Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email : pxv3...@rit.edu
ph : 585 764 4662
On Mon, Nov 6, 2017 at 7:11 AM, Dave Barach (dbarach)
wrote:
> Incrementing / random src/dst addr/port
Incrementing / random src/dst addr/port
Thanks… Dave
From: Pragash Vijayaragavan [mailto:pxv3...@rit.edu]
Sent: Monday, November 6, 2017 7:06 AM
To: Dave Barach (dbarach)
Cc: vpp-dev@lists.fd.io; John Marshall (jwm) ; Neale Ranns
(nranns) ; Minseok Kwon
Subject: Re: multi-core multi-thread
Hi Dave,
Thanks for the mail.
A "show run" command shows the dpdk-input node on 2 of the workers, but the
ip6-lookup node is running on only 1 worker.
What config is needed to make all threads process traffic?
This is for 4 workers and 1 main core.
Pasted output:
vpp# sh run
Thread 0 v
Have you verified that all of the worker threads are processing traffic?
Sufficiently poor RSS statistics could mean - in the limit - that only one
worker thread is processing traffic.
Thanks… Dave
From: Pragash Vijayaragavan [mailto:pxv3...@rit.edu]
Sent: Sunday, November 5, 2017 10:03 PM
To:
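A quick way to verify worker activity (a sketch; thread names and counts will
differ):

vppctl show runtime
# Non-zero vectors/calls for dpdk-input and ip6-lookup on each worker
# thread indicate that the thread is actually processing packets.
vppctl clear runtime   # reset the counters before a fresh measurement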
Hi Pedro,
Can you check dmesg to see whether it is due to "out of memory"?
Regards,
Mohsin
From: vpp-dev-boun...@lists.fd.io on behalf of
PEDRO ANDRES ARANDA GUTIERREZ
Sent: Monday, November 6, 2017 11:27 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] Crash when
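A quick filter for that check (a sketch; assumes the kernel OOM killer is the
suspect):

dmesg | grep -i -E 'out of memory|oom'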
Hi folks,
I'm completely new to fd.io. I wanted to get some hands-on experience, so I
just cloned the vpp repo and tried to create a VM with the following steps:
cd Devel/
git clone https://gerrit.fd.io/r/vpp
cd vpp/
cd build-root/vagrant
(export VPP_VAGRANT_NICS=1;vagrant up)
and I got the follow
Hi,
there might be one more API that needs updating:
define sw_interface_details
{
[…]
/* Layer 2 address, if applicable */
u32 l2_address_length;
u8 l2_address[8];
[…]
}
VPP allows specifying a MAC address (e.g. for TAP interfaces) as u8 mac[6],
but the interface dump allows extended addresses.
Hi Ryota-san,
I'm glad it works now. A couple of comments on your setup, marked [nr], are
inline below.
Regards,
neale
-Original Message-
From: Ryota Yushina
Date: Monday, 6 November 2017 at 10:12
To: "Ni, Hongjun" , "Neale Ranns (nranns)"
, "vpp-dev@lists.fd.io"
Subject: RE: [vpp-dev
Hi Ewan,
It should never be invalid. These sorts of checks in the data-plane only serve
to reduce performance.
You should examine the VLIB nodes prior to the ipv6 lookup (multicast or
unicast) to determine which one is setting the FIB index incorrectly.
Regards,
neale
From: "yug...@telincn.com
To Hongjon-san, Neale-san
Thank you so much for your great help. This patch solved the problem!
ICMP packets were encapped and routed to target.
As per your discussion, VPP crashed when accessing the gtpu_tunnel source MAC
address pointer (= NULL).
Below is just my memo.
I faced a new issue.
A decapped