I was just chasing a strange error that turned out to be related to code in
src/vpp/stats/stat_segment.c, where something went wrong with the per-thread
statistics vectors (some kind of memory corruption that ended up causing an
infinite loop inside dlmalloc.c).
Then I saw the foll
Hi,
Thanks for the information.
I am trying to configure L2 GRE over IPsec in transport mode.
Here are my startup.cfg files. Could you help check whether my configuration is
correct?
r230 and r740 are two servers that are directly connected.
eth0 is the physical NIC. host-veth1 is one endpoint of a veth pa
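
A rough sketch of the kind of setup being asked about, in CLI form ("eth0"
stands for whatever name VPP gives the physical NIC; addresses, keys,
bridge-domain id and SA/SPD ids are made up, and the IPsec lines are only
indicative of transport-mode protection for the GRE traffic, not a verified
config; one side shown):

~# vppctl set interface ip address eth0 192.168.1.1/24
~# vppctl create gre tunnel src 192.168.1.1 dst 192.168.1.2 teb
~# vppctl set interface state gre0 up
~# vppctl create host-interface name veth1
~# vppctl set interface state host-veth1 up
~# vppctl set interface l2 bridge gre0 1
~# vppctl set interface l2 bridge host-veth1 1
~# vppctl ipsec sa add 10 spi 1000 esp crypto-alg aes-cbc-128 crypto-key <hex> integ-alg sha1-96 integ-key <hex>
~# vppctl ipsec sa add 20 spi 2000 esp crypto-alg aes-cbc-128 crypto-key <hex> integ-alg sha1-96 integ-key <hex>
~# vppctl ipsec spd add 1
~# vppctl set interface ipsec spd eth0 1
~# vppctl ipsec policy add spd 1 priority 10 outbound action protect sa 10 protocol 47
~# vppctl ipsec policy add spd 1 priority 10 inbound action protect sa 20 protocol 47

Since the SAs carry no tunnel-src/tunnel-dst they are transport-mode SAs, and
protocol 47 restricts the protect policies to the GRE traffic between the two
hosts.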
Folks (and maybe Damjan in particular),
I am trying to run VPP from within a Docker container using
the Ligato framework for head-banging. The head-banging part
is working. The VPP interface binding part, not so much.
From what I can tell, VPP sees the PCI devices, but then grouses
that the /d
Thanks Christian and Florin. I am trying without a foreach loop. Things are
looking good so far.
Hi Ranadip,
Yes, the session layer updates all vlib_mains (one per thread) when it adds a
new arc from session_queue_node to whatever the transport wants to use for
output. I don’t
remember now why we did things that way, but it may be that it’s not needed
anymore.
Florin
> On Oct 2, 2019, at 9:23 PM, Ranadip Das
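
For context, the per-thread update mentioned above is a vlib_node_add_next()
call made on each thread's vlib_main. A rough sketch of the pattern
(illustrative names, not the verbatim code from src/vnet/session/session.c):

#include <vnet/session/session.h>

/* Sketch: wire a transport's output node in as a next arc of
 * session_queue_node on every thread. Intended to run on the main thread
 * with workers parked at the barrier, as at transport-registration time. */
static void
wire_session_output_arc (u32 output_node_index)
{
  if (output_node_index == ~0)
    return;

  /* foreach_vlib_main declares 'this_vlib_main' and walks every
   * per-thread vlib_main_t instance. */
  foreach_vlib_main (({
    vlib_node_add_next (this_vlib_main, session_queue_node.index,
                        output_node_index);
  }));
}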
Dear Chris and Ben,
This solved the issue for us. Many thanks for your help!
Best regards,
Elias
On Thu, 2019-10-03 at 11:55, Benoit Ganne (bganne) via Lists.Fd.Io wrote:
> Chris is correct, the rdma driver is independent of the DPDK driver and as
> such is not aware of any DPDK config option.
Chris is correct, the rdma driver is independent of the DPDK driver and as such
is not aware of any DPDK config option.
Here is an example to create 8 rx queues:
~# vppctl create int rdma host-if enp94s0f0 name rdma-0 num-rx-queues 8
Best
Ben
> -----Original Message-----
> From: vpp-dev@lists.fd.io O
"create interface rdma" CLI has an num-rx-queues config
VLIB_CLI_COMMAND (rdma_create_command, static) = {
  .path = "create interface rdma",
  .short_help = "create interface rdma <host-if ifname> [name <name>]"
    " [rx-queue-size <size>] [tx-queue-size <size>]"
    " [num-rx-queues <size>]",
  .function = rdma_create_command_fn,
};
More info after investigating further: the issue seems related to the fact
that the RDMA plugin, which did not exist in 19.01, is available in 19.08. As
a result, we no longer need the "make dpdk-install-dev DPDK_MLX5_PMD=y
DPDK_MLX5_PMD_DLOPEN_DEPS=n" complication when building. The release notes f
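
For anyone making the same switch, a minimal sketch of creating the rdma
interface at startup rather than by hand (the file name is hypothetical; the
unix startup-config option simply replays CLI commands at boot):

unix {
  startup-config /etc/vpp/rdma-if.cfg
}

with /etc/vpp/rdma-if.cfg containing:

create interface rdma host-if enp94s0f0 name rdma-0 num-rx-queues 8
set interface state rdma-0 up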
As we are about to switch from VPP 19.01 to 19.08, we have encountered a
problem with NAT performance. We are trying to use the same settings (as far
as possible) for 19.08 as we did for 19.01, on the same computer.
In 19.01 we used 11 worker threads in total, combined with "set nat
workers 0-6" so that 7 of t