Hi,
when using config
unix { interactive cli-listen /run/vpp/cli.sock gid 0 } cpu { main-core 1 }
dpdk { dev :05:00.1 }
vpp# set int state TenGigabitEthernet5/0/1 up
vpp# set int ip addr TenGigabitEthernet5/0/1 192.168.1.1/24
vpp# ping 192.168.1.254 repeat 6 verbose
Source address: 192.
Thank you Dave,
The drop is working perfectly.
But the other path - letting the packet continue on the 'normal' path - is
broken.
How do I set next0 for the 'normal' path?
The sample plugin (which my plugin is based on) sets it to INTERFACE_OUTPUT,
which is not suitable for this case.
Btw - h
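(Not part of the original message; a minimal hedged sketch of one common way to
get the 'normal' next node when the plugin node sits on a feature arc, as the
sample plugin does. All MY_NODE_* names and the should_drop() predicate are
hypothetical, and the vnet_feature_next() signature shown is the 18.01-era one
that takes the rx sw_if_index; newer trees drop that argument.)

    u32 next0 = MY_NODE_NEXT_DROP;
    u32 sw_if_index0 = vnet_buffer (b0)->sw_if_index[VLIB_RX];

    if (should_drop (b0))
      {
        b0->error = node->errors[MY_NODE_ERROR_DROPPED];
        next0 = MY_NODE_NEXT_DROP;
      }
    else
      /* Continue on the 'normal' path: ask the feature arc what comes next
         instead of hard-coding INTERFACE_OUTPUT. */
      vnet_feature_next (sw_if_index0, &next0, b0);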
Hi Shashi,
This can’t possibly be part of 17.10 and 18.01 since it was only merged this
week on master. Probably you need to clean your repo, rebuild and reinstall
debs.
Florin
> On Mar 7, 2018, at 7:44 PM, Shashi Kant Singh wrote:
>
> Hi
>
> When I try to start vpp with releases 17.10 and
For DMM project:
- Commit initial DMM framework
- Commit documentation for APIs, developer guides, etc.
- Plug into CSIT
- Integrate VPP host stack/TLDK
- DMM data-plane EAL on VPP L3
- Performance optimization
Thanks
George
From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of
Ray,
Thanks for calling attention to this. I've taken the feedback I've
received thus far and added it to:
https://docs.google.com/presentation/d/1jUoqWt9tbMaiUsLE2IaUSSbTcjKN6c3cQ-PXID5nz0k/edit?usp=sharing
It's a rough cut; let's work on it together as a community overnight and at
the TSC meeting.
Going back to the original discussion:
I have VPP working now on aarch64 with a Mellanox card.
Disclaimer:
$ uname -r
4.10.0-28-generic
Ubuntu 16.04.4 LTS (GNU/Linux 4.10.0-28-generic aarch64)
I am aware that the published supported kernel version for Mellanox NICs with
DPDK is 4.14+.
I am listing my steps
Hi
When I try to start vpp with releases 17.10 and 18.01, I am getting the
following TLS error.
What could be the issue?
Regards
Shashi
PS: This worked fine for previous releases.
# make run-release STARTUP_CONF=../startup.conf
vlib_plugin_early_init:356: plugin path
/bng5/shashi-2/vpp2/vpp/build
Hi Shaun,
Glad to see you’re experimenting with the proxy code. Note however that the
implementation is literally just a proof of concept. We haven’t spent any time
optimizing it. Nonetheless, it would be interesting to understand why this
happens.
Does the difference between apache and vpp
Hi,
We are doing some basic testing using the TCP proxy app in the stable-18.01 build.
When I proxy a single HTTP request for a 20MB file through to an Apache web
server, I get less than one-tenth of the throughput compared to using a Linux
TCP proxy (Apache Traffic Server) on exactly the same setup
“show run” will probably show a very small vector size.
If so, look at src/vlib/unix/input.c:linux_epoll_input(…). 10ms is exactly the
epoll_pwait timeout value.
D.
From: vpp-dev@lists.fd.io On Behalf Of Sara Gittlin
Sent: Wednesday, March 7, 2018 2:02 PM
To: vpp-dev@lists.fd.io
Cc: vpp-dev@li
Could you try again with taps and large rings?
create tap rx-ring-size 4096 tx-ring-size 4096
create tap rx-ring-size 4096 tx-ring-size 4096
Then configure the two taps in your namespaces and run iperf again.
Hope this helps,
Florin
> On Mar 7, 2018, at 11:01 AM, Sara Gittlin wrote:
>
> Th
Thank you Hau
I tested with iperf and got similar results. I cannot find iperf2. Anyway, ns-to-ns
directly, without VPP, is perfect: 50 Gbps throughput and 10 us latency. Tested
with iperf3. This is very bothering since we decided to go with VPP instead of
OVS.
Thanks in advance
-Sara
On 6 March 2018 at 20:00, "H
Looking to confirm worker_thread creation/deletion capabilities.
./src/vpp/conf/startup.conf descriptions suggest we must define the number of
worker_threads up front. I am looking to understand if that is correct, or if
we can add/remove worker_threads from a live system over time as workloads
File vnet/interface.h:
*vnet_interface_counter_type_t* has:
...
VNET_INTERFACE_COUNTER_MPLS = 8,
VNET_N_SIMPLE_INTERFACE_COUNTER = 9,
...
In vnet/interface.c, function vnet_interface_init,
im->sw_if_counters[...].name is initialized for all counters except
VNET_INTERFACE_COUNTER_MPLS
In vnet/i
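(Not from the report above, which is truncated; just a hedged sketch of what
the apparently missing assignment in vnet_interface_init could look like,
assuming it follows the same pattern as the other simple-counter names. The
counter name string is an assumption.)

    /* Hedged sketch - the exact name string is an assumption. */
    im->sw_if_counters[VNET_INTERFACE_COUNTER_MPLS].name = "mpls";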
Hi Lollita
adj_nbr_tables is the database that stores the adjacencies representing the
peers attached on a given link. It is sized (perhaps overly) to accommodate
a large segment on a multi-access link. For your p2p GTPU interfaces, you could
scale it down, since there is only ever one peer
Dear Avi,
Yes, if you decide to drop b1, set next1 / error1 in the obvious way.
The macros vlib_validate_buffer_enqueue_x[2|4] sort out the various incorrect
speculative enqueue / 2 or 4 pkts going to different successor node cases.
Simply set (nextN, errorN) as desired and let the boilerplate
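(Not part of Dave's message; a minimal hedged sketch of the dual-loop fragment
he describes, sitting inside the usual vlib_get_next_frame()/vlib_put_next_frame()
loop. The MY_NODE_* names and should_drop() predicate are hypothetical.)

    while (n_left_from >= 4 && n_left_to_next >= 2)
      {
        u32 bi0 = from[0], bi1 = from[1];
        vlib_buffer_t *b0, *b1;
        u32 next0 = MY_NODE_NEXT_NORMAL, next1 = MY_NODE_NEXT_NORMAL;

        /* Speculatively enqueue b0 and b1 to the current next frame */
        to_next[0] = bi0;
        to_next[1] = bi1;
        from += 2;
        to_next += 2;
        n_left_from -= 2;
        n_left_to_next -= 2;

        b0 = vlib_get_buffer (vm, bi0);
        b1 = vlib_get_buffer (vm, bi1);

        if (should_drop (b1))
          {
            b1->error = node->errors[MY_NODE_ERROR_DROPPED];
            next1 = MY_NODE_NEXT_DROP; /* arc to "error-drop" */
          }

        /* The macro fixes up the speculation if next0/next1 differ from
           the frame's next index. */
        vlib_validate_buffer_enqueue_x2 (vm, node, next_index,
                                         to_next, n_left_to_next,
                                         bi0, bi1, next0, next1);
      }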
Thank you Dave - this is very helpful
Please see comments inline
> -Original Message-
> From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of Dave
> Barach
> Sent: Wednesday, 07 March, 2018 3:20 PM
> To: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] VPP - mechanism to drop pa
Add an arc from your node to the "error-drop" node, set next0 =
MYNODE_NEXT_ERROR and b0->error = node->errors[SOME_ERROR].
Please use the standard dual/single - or quad/single - loop code pattern to
walk the incoming vector of buffer indices. You will hate your life if you try
to code the vect
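(Not part of Dave's message; a hedged sketch of how the "error-drop" arc and
the error counter are typically declared, following the sample-plugin
conventions. All MYNODE_* names and strings are hypothetical.)

    #define foreach_mynode_error _(DROPPED, "packets dropped by mynode")

    typedef enum
    {
    #define _(sym, str) MYNODE_ERROR_##sym,
      foreach_mynode_error
    #undef _
      MYNODE_N_ERROR,
    } mynode_error_t;

    static char *mynode_error_strings[] = {
    #define _(sym, str) str,
      foreach_mynode_error
    #undef _
    };

    typedef enum
    {
      MYNODE_NEXT_INTERFACE_OUTPUT,
      MYNODE_NEXT_ERROR, /* arc to error-drop */
      MYNODE_N_NEXT,
    } mynode_next_t;

    VLIB_REGISTER_NODE (mynode_node) = {
      .function = mynode_node_fn,
      .name = "mynode",
      .vector_size = sizeof (u32),
      .n_errors = MYNODE_N_ERROR,
      .error_strings = mynode_error_strings,
      .n_next_nodes = MYNODE_N_NEXT,
      .next_nodes = {
        [MYNODE_NEXT_INTERFACE_OUTPUT] = "interface-output",
        [MYNODE_NEXT_ERROR] = "error-drop",
      },
    };

Per packet, this is then just next0 = MYNODE_NEXT_ERROR and
b0->error = node->errors[MYNODE_ERROR_DROPPED], as described above.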
Hi all,
for those of you using in some fashion the acl-plugin code, wanted to
get your eyes on this in-the-works patch:
https://gerrit.fd.io/r/#/c/9689/
as well as get your opinion on the following:
(1) should I KEEP the default as it is now (which is to retain the
sessions which are already cr
Hi,
I'm implementing a simple policy plugin; below is the pseudo-code:
Go over the packets vector:
While (there are packets to process)
{
Check if the packet matches a specific rule
If yes - set the out-interface on which the packet will be transmitted
Else - drop the packet
}
2 questions -
1. is there
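(Not part of the original message; a minimal hedged sketch of how the
per-packet body of the pseudo-code above might look inside a VPP node loop.
policy_classify() and the POLICY_* names are hypothetical.)

    if (policy_classify (b0, &out_sw_if_index0)) /* packet matches a rule */
      {
        /* Set the out-interface: interface-output transmits on the
           buffer's TX sw_if_index. */
        vnet_buffer (b0)->sw_if_index[VLIB_TX] = out_sw_if_index0;
        next0 = POLICY_NEXT_INTERFACE_OUTPUT; /* arc to "interface-output" */
      }
    else
      {
        b0->error = node->errors[POLICY_ERROR_NO_MATCH];
        next0 = POLICY_NEXT_DROP; /* arc to "error-drop" */
      }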
Hi Team,
Have a couple of questions on worker threads and handoff between threads.
Each lcore has one worker thread associated with it, and assuming that
RSS is not used, interfaces are associated with lcores, and thus with
threads.
--
ID  Name  Type  LWP  Sched Policy (Pr
Hi,
We have encountered a performance issue when batch adding 1 GTPU
tunnels and 1 routes, each taking one GTPU tunnel interface as nexthop, via the
API.
The effect is like executing the following commands:
create gtpu tunnel src 18.1.0.41 dst 18.1.0.31 teid 1
encap-vrf