It is likely that you are missing memAccess=’shared’
https://fdio-vpp.readthedocs.io/en/latest/usecases/vhost/xmlexample.html#:~:text=%3Ccell%20id%3D%270%27%20cpus%3D%270%27%20memory%3D%27262144%27%20unit%3D%27KiB%27%20memAccess%3D%27shared%27/%3E
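For reference, the numa cell definition from that page looks like this (the memory size and cpu list are illustrative; the surrounding <cpu>/<numa> wrapper is the usual libvirt placement):
<cpu>
  <numa>
    <cell id='0' cpus='0' memory='262144' unit='KiB' memAccess='shared'/>
  </numa>
</cpu>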
From: on behalf of Benjamin Vandendriessche
Re
I bet you didn’t limit the number of API trace entries. Try limiting the number of
API trace entries that VPP keeps with nitems and give it a reasonable value.
api-trace {
on
nitems 65535
}
Steven
From: on behalf of "efimochki...@
Sunil is using the dpdk vmxnet3 driver, so he doesn’t need to load the VPP native
vmxnet3 plugin. Use gdb on the dpdk code to see why it returns -22 when VPP adds
the NIC to dpdk (a gdb sketch follows the quoted error).
rte_eth_dev_start[port:1, errno:-22]: Unknown error -22
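A minimal sketch, assuming VPP runs under gdb with dpdk debug symbols available (paths are illustrative):
$ gdb --args /usr/bin/vpp -c /etc/vpp/startup.conf
(gdb) break rte_eth_dev_start
(gdb) run
When the breakpoint hits, "bt" confirms VPP is adding the NIC, and stepping through
(or "finish" to see the -22/-EINVAL return) shows which dpdk check rejected the device.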
Steven
From: on behalf of Guangming
Reply-To: "vpp-dev@lists.fd.io"
Da
Did you try
vppctl show log
Steven
From: on behalf of "Tripathi, VinayX"
Reply-To: "vpp-dev@lists.fd.io"
Date: Saturday, February 4, 2023 at 4:19 AM
To: "vpp-dev@lists.fd.io"
Cc: "Ji, Kai"
Subject: Re: [vpp-dev] VPP logging does not logs API calls debug message
Hi Team,
Any suggestion wou
Type
show lacp details
to see if the member interface that is not forming the bundle receives and
sends LACP PDUs.
Type
show hardware
to see if both member interfaces have the same mac address.
From: on behalf of Eyle Brinkhuis
Reply-To: "vpp-dev@lists.fd.io"
Date: Monday, Dece
In addition, do
1. show hardware
The bond, eth1/0, and eth2/0 should have the same mac address.
2. show lacp details
Check these statistics for the interface that is not forming the bond
Good LACP PDUs received: 13
Bad LACP PDUs received: 0
LACP PDUs sent: 14
last LACP PDU receive
Use “virsh dumpxml” and check the output to see if you have memAccess='shared' as
below.
Steven
From: on behalf of suresh vuppala
Reply-To: "vpp-dev@lists.fd.io"
Date: Friday, October 21, 2022 at 5:23 PM
To: "vpp-dev@lists.fd.io"
Subject: Re: [vpp-dev] #vpp-dev No packets generated from Vh
Your qemu command to launch the VM is likely missing the hugepage backing or the
share option; a sketch of the relevant arguments follows.
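Something like the following, where the memory size and hugepage path are illustrative:
-m 4096 \
-object memory-backend-file,id=mem0,size=4096M,mem-path=/dev/hugepages,share=on \
-numa node,memdev=mem0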
Can you provide the topology, configurations, and steps to recreate this crash?
Steven
From: on behalf of Chinmaya Aggarwal
Reply-To: "vpp-dev@lists.fd.io"
Date: Wednesday, July 13, 2022 at 4:07 AM
To: "vpp-dev@lists.fd.io"
Subject: Re: [vpp-dev] VPP crashing if we configure srv6 policy with
Please check to make sure the interfaces can ping each other prior to adding
them to the bond. Type “show lacp details” to verify that VPP receives LACP PDUs
from each side and to check the state machine.
Steven
From: on behalf of Chinmaya Aggarwal
Reply-To: "vpp-dev@lists.fd.io"
Date: Tuesday, June 2
It is related to memoryBacking: either hugepages or the shared access option is
missing. What does your qemu launch command look like?
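For a libvirt-managed VM, a minimal memoryBacking sketch (assuming a reasonably recent libvirt) is:
<memoryBacking>
  <hugepages/>
  <access mode='shared'/>
</memoryBacking>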
Steven
From: on behalf of Chinmaya Aggarwal
Reply-To: "vpp-dev@lists.fd.io"
Date: Thursday, July 14, 2022 at 3:31 AM
To: "vpp-dev@lists.fd.io"
Subject: [vpp-dev] Memory
Please try a debug image and provide a sane backtrace.
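A minimal sketch of capturing one with gdb attached to the running debug image (process name assumes the default binary name):
$ gdb -p $(pidof vpp)
(gdb) continue      # let VPP run until it hits the crash
(gdb) bt full       # paste this full backtrace to the list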
Steven
From: on behalf of Chinmaya Aggarwal
Reply-To: "vpp-dev@lists.fd.io"
Date: Thursday, July 21, 2022 at 4:42 AM
To: "vpp-dev@lists.fd.io"
Subject: [vpp-dev] VPP crashes when lcp host interface is added in network
bridge
Hi,
As per
Pragya,
UU-Flood stands for Unknown Unicast Flooding. It does not flood multicast or
broadcast packets. You need “Flooding” on to flood multicast/broadcast packets.
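A hedged CLI sketch, assuming bridge domain 1:
set bridge-domain flood 1
set bridge-domain uu-flood 1
Use show bridge-domain 1 detail to confirm both flags are on.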
Steven
From: on behalf of Pragya Nand Bhagat
Reply-To: "vpp-dev@lists.fd.io"
Date: Monday, August 1, 2022 at 2:59 AM
To: "vpp-
Folks,
In case you don’t already know, there is a tag called Fixes in the commit
message which lets you indicate that the current patch fixes a regression.
See an example usage in https://gerrit.fd.io/r/c/vpp/+/34212
When you commit a patch which fixes a known regression, please make use of the tag.
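A hypothetical example of the shape (component, description, and sha are made up):
bonding: fix member link flap on admin up
Type: fix
Fixes: 1234567890ab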
Srikanth,
You are correct that dpdk bonding has been deprecated for a while; I don’t
remember since when. The performance of VPP native bonding is about the same as
dpdk bonding. With VPP native bonding, you have the additional option to
configure LACP, which was not supported when
Chetan,
I pushed a patch to gerrit a long time ago and just rebased it to the latest
master:
https://gerrit.fd.io/r/c/vpp/+/30866
Please feel free to test it thoroughly and let me know whether you encounter any
problems.
Steven
From: on behalf of chetan bhasin
Date: Tuesday, September 14, 2
I set up the same bonding with dot1q and subinterface configuration as given,
but using a tap interface to connect to Linux instead, and it works just fine. I
believe the crash was due to using a custom plugin cloned from the VPP DPDK
plugin to handle the Octeon-tx2 SoC. When bonding gets the buf
Sudhir,
It is an erroneous topology/configuration that we don’t currently handle. Please
try this and report back:
https://gerrit.fd.io/r/c/vpp/+/32292
The behavior is that container-1 will form one bonding group with container-2,
using either BondEthernet0 or BondEthernet1.
Steven
From: on behalf of
Your commit subject line is missing a component name. The commit comment is
missing “Type:”.
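The expected shape is roughly (the component and description are placeholders):
<component>: <one-line description>

Type: fix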
Steven
From: on behalf of "hemant via lists.fd.io"
Reply-To: "hem...@mnkcg.com"
Date: Tuesday, April 27, 2021 at 12:56 PM
To: "vpp-dev@lists.fd.io"
Subject: Re: [vpp-dev] lawful intercept
Newer rev
VPP implements both active and passive modes. The default operation mode is
active. The current setting for the port, active/passive, can be inferred from
the output of show lacp. In the active state column, I see act=1 for all 4
ports.
The output of the show bond command looks like VPP is alre
“make build” from the top of the workspace will generate the debug image that
you can run under gdb.
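A sketch, assuming the default debug build layout (the install path may vary by release):
$ make build
$ gdb ./build-root/install-vpp_debug-native/vpp/bin/vpp
(gdb) run -c /etc/vpp/startup.conf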
Steven
From: on behalf of Yaser Azfar
Date: Wednesday, January 6, 2021 at 1:21 PM
To: "Benoit Ganne (bganne)"
Cc: "fdio+vpp-...@groups.io"
Subject: Re: [vpp-dev] VPP Packet Generator and Packet Trace
Additionally, please figure out why carrier is down. It needs to be up.
Intel 82599
carrier down
Steven
From: on behalf of Dave Barach
Date: Sunday, December 20, 2020 at 4:58 AM
To: 'Merve' , "vpp-dev@lists.fd.io"
Subject: Re: [vpp-dev] Blackholed packets after forwarding interface out
show interface displays the interface’s admin state.
show hardware displays the interface’s operational link state.
The link down is likely caused by a memif configuration error. Please check your
configuration on both sides to make sure they match. Some tips to debug:
show memif
set logging class m
Can you check the output of show hardware? I suspect the link is down for the
corresponding memif interface.
Steven
From: on behalf of "tahir.a.sangli...@gmail.com"
Date: Friday, December 11, 2020 at 1:14 PM
To: "vpp-dev@lists.fd.io"
Subject: [vpp-dev] #vpp #vpp-memif #vppcom
in our applica
Right, it should not crash. With the patch, the VM just refuses to come up
unless we raise the queue support.
Steven
On 12/9/20, 10:24 AM, "Benoit Ganne (bganne)" wrote:
> This argument in your qemu command line,
> queues=16,
> is over our current limit. We support up to 8. I can
Eyle,
This argument in your qemu command line,
queues=16,
is over our current limit. We support up to 8. I can submit an improvement
patch. But I think it will be master only.
Steven
From: Eyle Brinkhuis
Date: Wednesday, December 9, 2020 at 9:24 AM
To: "Steven Luong (sluong)"
Cc: "Benoit Ga
Eyle,
Can you also show me the qemu command line to bring up the VM? I think it is
asking for more than 16 queues. VPP supports up to 16.
Steven
On 12/9/20, 8:22 AM, "vpp-dev@lists.fd.io on behalf of Benoit Ganne (bganne)
via lists.fd.io" wrote:
Hi Eyle, could you share the associated .
When you create the bond interface using either lacp or xor mode, there is an
option to specify load-balance l2, l23, or l34, which is equivalent to the Linux
xmit_hash_policy.
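For example (the interface names are illustrative):
create bond mode lacp load-balance l34
bond add BondEthernet0 GigabitEthernet1/0/0
bond add BondEthernet0 GigabitEthernet1/0/1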
Steven
From: on behalf of "ashish.sax...@hsc.com"
Date: Sunday, December 6, 2020 at 3:24 AM
To: "vpp-dev@lists.fd.io"
S
Bonding does not care whether the traffic is unicast or multicast. It just hashes
the packet header and selects one of the members as the outgoing interface. The
only bonding mode in which it replicates packets across all members is when you
create the bonding interface to do broadcast, which you didn’
Please use gdb to provide a meaningful backtrace.
Steven
From: on behalf of Eyle Brinkhuis
Date: Wednesday, December 2, 2020 at 5:59 AM
To: "vpp-dev@lists.fd.io"
Subject: [vpp-dev] Vpp crashes with core dump vhost-user interface
Hi all,
In our environment (vpp 20.05.1, ubuntu 18.04.5, networ
You have 17 format tags, but you pass 18 arguments to the unformat function. Is
that intentional?
Steven
From: on behalf of "hemant via lists.fd.io"
Reply-To: "hem...@mnkcg.com"
Date: Friday, November 27, 2020 at 3:52 PM
To: "vpp-dev@lists.fd.io"
Subject: [vpp-dev] unformat fails processing
It is not supported.
From: on behalf of Venkatarao M
Date: Monday, July 20, 2020 at 8:35 AM
To: "vpp-dev@lists.fd.io"
Cc: praveenkumar A S , Lokesh Chimbili
, Mahesh Sivapuram
Subject: [vpp-dev] How to do Bond interface configuration as
fail_over_mac=active in VPP
Hi all,
We are trying bon
ists.fd.io>> wrote:
Hi Steven,
Please note that per this proposition, https://lkml.org/lkml/2020/7/4/229,
slave must be avoided but master can be kept.
Maybe master/member or master/secondary could be options too.
Jerome
Le 14/07/2020 18:32, « vpp-dev@lists.fd.io<mailto:vpp-dev@lists
I am in the process of pushing a patch to replace master/slave with
aggregator/member for the bonding.
Steven
On 7/13/20, 4:44 AM, "vpp-dev@lists.fd.io on behalf of Dave Barach via
lists.fd.io"
wrote:
+1, especially since our next release will be supported for a year, and API
name chan
Inline.
From: on behalf of "sadhanakesa...@gmail.com"
Date: Thursday, July 2, 2020 at 9:55 AM
To: "vpp-dev@lists.fd.io"
Subject: [vpp-dev] Userspace tcp between two vms using vhost user interface?
Hi,
there seem to be a lot of ways to set up userspace tcp with vpp, hoststack, with
and without
E: [vpp-dev] Need help with setup.. cannot ping a VPP interface.
+check hardware addresses with “show hardware”, to make sure you’ve configured
the interface which is actually connected to the peer system / switch...
HTH... Dave
From: vpp-dev@lists.fd.io On Behalf Of steven luong via
lists.fd
Please correct the subnet mask first; a sketch of the fix follows the quoted addresses.
L3 10.1.1.10/24. <-- system A
inet 10.1.1.11 netmask 255.0.0.0 broadcast 10.255.255.255 <--- system B
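A sketch of the correction on system B with iproute2 (the interface name eth0 is an assumption):
ip addr del 10.1.1.11/8 dev eth0
ip addr add 10.1.1.11/24 dev eth0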
Steven
From: on behalf of Manoj Iyer
Date: Friday, June 12, 2020 at 12:28 PM
To: "vpp-dev@lists.fd.io"
Subject: [vpp-dev] Need help with setup.. c
Vmxnet3 is a paravirtualized device. I could be wrong, but it does not appear to
support adding a virtual MAC address. This error returned from dpdk indicates
just that.
Jun 8 12:32:43 bfs-dl360g10-14-vm17 vnet[29645]: vrrp_vr_transition_vmac:120:
Adding virtual MAC address 00:00:5e:00:01:01 on hard
The problem is that the aforementioned commit added a call to
vnet_hw_interface_set_flags() in the worker thread, which is not allowed. We are
in the process of reverting the commit.
Steven
On 5/29/20, 10:02 AM, "vpp-dev@lists.fd.io on behalf of Elias Rudberg"
wrote:
Hello,
We now g
First, your question has nothing to do with bonding. Whatever you are seeing is
true regardless of whether bonding is configured or not.
Show interface displays the admin state of the interface. Whenever you set the
admin state to up, it is displayed as up regardless of whether the physical
carrier is up or down.
tapcli was deprecated a few releases ago. It has been replaced by virtio
over tap. The new cli is
create tap …
Steven
From: on behalf of "mauricio.solisjr via lists.fd.io"
Reply-To: "mauricio.soli...@tno.nl"
Date: Tuesday, April 14, 2020 at 3:55 AM
To: "vpp-dev@lists.fd.io"
Subject: [vp
Will do the same on the other two branches later today if we are all happy
about the fix on master...
--a
On 6 Apr 2020, at 17:03, steven luong via lists.fd.io
wrote:
Folks,
It looks like jobs for all branches, 19.08, 20.01, and master, are failing due
to this inspect.py error. Could somebody
ing@lists.fd.io>>
wrote:
Andrew submitted a changeset that backs out the updated Sphinx package. I am
building the target 'test-doc' to try to learn the root cause.
On Mon, Apr 6, 2020 at 11:03 AM steven luong via
lists.fd.io<http://lists.fd.io>
mailto:cisco@lists.
Folks,
It looks like jobs for all branches, 19.08, 20.01, and master, are failing due
to this inspect.py error. Could somebody who is familiar with the issue please
take a look at it?
18:59:12 Exception occurred:
18:59:12 File "/usr/lib/python3.6/inspect.py", line 516, in unwrap
18:59:12
The ping command has been moved to a separate plugin. You probably didn’t have
the ping plugin enabled in your startup.conf. Please add the ping plugin to your
startup.conf; something like this will do the trick.
plugins {
…
plugin ping_plugin.so { enable }
}
From: on behalf of "mauricio.solisj
From: on behalf of "ravinder.ya...@hughes.com"
Date: Tuesday, February 25, 2020 at 7:27 AM
To: "vpp-dev@lists.fd.io"
Subject: [vpp-dev] vmxnet3 rx-queue error "vmxnet3 failed to activate dev error
1" #vmxnet3 #ipsec
[Edited Message Follows]
VPP IPsec responder on ESXI VM RHEL 7.6
Is there
It works for me, although I am on an Ubuntu 18.04 VM. It is unclear to me whether
your problem is strictly related to more than 4 rx-queues when you say
“but when i try to associate more than 4 num-rx-queues i get error”
Does it work fine when you reduce the number of rx-queues to less tha
So you now know which command in the dpdk section dpdk doesn’t like.
Try adding “log-level debug” to the dpdk section of startup.conf to see if you
can find more helpful messages in “vppctl show log” from dpdk about why it fails
to probe the NIC.
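A sketch of the startup.conf fragment (other dpdk settings stay as they are):
dpdk {
  log-level debug
  ...
}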
Steven
From: on behalf of Gencli Liu <18600640...
It is likely a resource problem – when VPP requests more descriptors and/or
TX/RX queues for the NIC than the firmware has, DPDK fails to initialize the
interface. There are a few ways to figure out what the problem is.
1. Bypass VPP and run testpmd with debug options turned on, something like
DPDK bonding is no longer supported in 19.08. However, you can use VPP native
bonding to accomplish the same thing.
create bond mode active-backup load-balance l34
set interface state BondEthernet0 up
bond add BondEthernet0 GigabitEthernet1/0/0
bond add BondEthernet0 GigabitEthernet1/0/1
Steven
Using VPP + DPDK virtio to connect with VPP + vhost-user is not actively
maintained. I got it working a couple of years ago by committing some changes to
the DPDK virtio code. Since then, I have not been playing with it anymore;
breakage is possible. I could spend a whole week on it to get it working aga
create interface virtio
Or just use memif interface. That is what it is built for.
Steven
From: on behalf of "mojtaba.eshghi"
Date: Monday, July 29, 2019 at 5:50 AM
To: "vpp-dev@lists.fd.io"
Subject: Re: [vpp-dev] #vpp Connecting a VPP inside a container to a VPP inside
host using vhost-vi
The debug CLI was replaced by
set logging class vhost-user level debug
Use show log to view the messages.
Did you configure 1GB hugepages on the container? It used to be that dpdk virtio
required 1GB huge pages. I am not sure if that is still the case nowadays. If
you use VPP 19.04 or later, you could try VP
Packet drops due to “no available descriptors” on a vhost-user interface are
extremely likely when doing a performance test with qemu’s default vring queue
size. You need to specify a vring queue size of 1024 (the default is 256) when
you bring up the VM. The queue size can be specified either via XML
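A hedged libvirt sketch for the vhost-user interface (attribute support assumes a reasonably recent libvirt/qemu):
<interface type='vhostuser'>
  ...
  <driver rx_queue_size='1024' tx_queue_size='1024'/>
</interface>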
Yes on both counts.
From: on behalf of Zhiyong Yang
Date: Wednesday, June 12, 2019 at 10:33 PM
To: "Yang, Zhiyong" , "Steven Luong (sluong)"
, "vpp-dev@lists.fd.io" , "Carter,
Thomas N"
Cc: "Kinsella, Ray"
Subject: Re: [vpp-dev] some questions about LACP(link bonding mode 4)
I mean, Is the
There is no limit on the number of slaves in a bonding group in VPP’s
implementation. I don’t know/remember how to select one port over another from
the spec without reading it carefully again.
Steven
From: "Yang, Zhiyong"
Date: Tuesday, June 11, 2019 at 11:09 PM
To: "vpp-dev@lists.fd.io" , "S
I am clueless with useless tracebacks. Please hook up gdb and get a complete,
human-readable backtrace.
Steven
From: on behalf of Mostafa Salari
Date: Wednesday, May 29, 2019 at 10:24 AM
To: "vpp-dev@lists.fd.io"
Subject: [vpp-dev] vpp received signal SIGSEGV, PC 0x7fe54936bc40, faulting
address
Mostafa,
Vmxnet3 NICs are in the blacklist by default. Please specify the vmxnet3 PCI
addresses in the dpdk section of the startup.conf.
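A sketch of the startup.conf fragment (the PCI addresses are made up; use the ones from lspci):
dpdk {
  dev 0000:0b:00.0
  dev 0000:13:00.0
}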
Steven
From: on behalf of Mostafa Salari
Date: Sunday, May 12, 2019 at 4:52 AM
To: "vpp-dev@lists.fd.io"
Subject: [vpp-dev] Vpp 1904 does not recognize vmxnet3 int
Dear Anthony,
Please check the bond interface with show bond to see if the active slaves count
is a positive number. Since you didn’t configure LACP on VM2, I believe you have
not gotten any active slaves in VPP. Your solution is to configure a bond
interface in VM2 using mode 4 (I believe) if
Anthony,
The L3 address should be configured on the bond interface, not the slave
interface. If there is a switch in between VPP’s physical NICs and the VM, the
switch should be configured to do the bonding, not the remote VM. Use show bond
to check that the bundle is created successfully between VPP an
to ethernet-input */
ethernet_set_rx_redirect (vnm, sif_hw, 1);
}
}
return 0;
}
When I switch the mode of the bonding interface to l2, the function (the code
above) redirects all the members to ethernet-input, but when I switch it back to
l3, the members do not redirect.
Are you using VPP native bonding driver or DPDK bonding driver? How do you
configure the bonding interface? Please include the configuration and process
to recreate the problem.
Steven
From: on behalf of "saint_sun 孙 via Lists.Fd.Io"
Reply-To: "saint_...@aliyun.com"
Date: Wednesday, October
DPDK expects UPT version > 0, and ESXi 6.5/6.7 seems to return UPT version 0
when queried, which is not a supported version. I am using ESXi 6.0 and it is
working fine. You could try ESXi 6.0 to see if it helps.
Steven
From: on behalf of truring truring
Date: Monday, October
Stephen,
Fix for vhost
https://gerrit.fd.io/r/14920
I'll take care of vmxnet3 later.
Steven
On 9/20/18, 10:57 AM, "vpp-dev@lists.fd.io on behalf of Stephen Hemminger"
wrote:
Why is there not a simple link on FD.io developer web page to report bugs.
Reporting bugs page talks abo
, "vpp-dev@lists.fd.io"
Subject: [**EXTERNAL**] Re: [vpp-dev] tx-drops with vhost-user interface
Hi, Vijay,
Sorry to ask a dumb question: can you make sure the interface in your VM (either
Linux kernel or DPDK) is “UP”?
Regards,
Yichen
From: on behalf of "steven luong via Lists
Aleksander,
I found the CLI bug. You can easily work around it: set the physical interface
state up first in your CLI sequence and it will work.
create bond mode lacp load-balance l23
bond add BondEthernet0 GigabitEtherneta/0/0
bond add BondEthernet0 GigabitEtherneta/0/1
set interface
Aleksander,
This problem should be easy to figure out if you can gdb the code. When the
very first slave interface is added to the bonding group via the command “bond
add BondEthernet0 GigabitEtherneta/0/0/1”,
- The PTX machine schedules the interface with the periodic timer via
lacp_schedule
This configuration is not supported in VPP.
Steven
From: on behalf of Aleksander Djuric
Date: Wednesday, August 15, 2018 at 12:33 AM
To: "vpp-dev@lists.fd.io"
Subject: Re: [vpp-dev] LACP link bonding issue
In addition.. I have tried to configure LACP in dpdk section of vpp
startup.conf.. an
Aleksander,
The problem is that the LACP periodic timer is not running, as shown in your
output. I wonder if lacp-process was launched properly or got stuck. Could you
please do show run and check on the health of lacp-process?
periodic timer: not running
Steven
From: on behalf of Aleksander Djuri
I forgot to ask if these 2 boxes’ interfaces are connected back to back or
through a switch.
Steven
From: on behalf of "steven luong via Lists.Fd.Io"
Reply-To: "Steven Luong (sluong)"
Date: Tuesday, August 14, 2018 at 8:24 AM
To: Aleksander Djuric , "vpp-dev@li
Aleksander
It looks like the LACP packets are not going out on the interfaces as expected,
or they are being dropped. Additional output and traces are needed to determine
why. Please collect the following from both sides.
clear hardware
clear error
wait a few seconds
show hardware
show error
show lacp d
Vijay,
From the show output, I can’t really tell what your problem is. If you could
provide additional information about your environment, I could try setting it
up and see what’s wrong. The things I need from you are the exact VPP version,
the VPP configuration, and the qemu startup command line or the XML startu
Ravi,
I suppose you already checked the obvious: that the vhost connection is
established and that the shared memory has at least 1 region in show vhost. For
the traffic issue, use show error to see why packets are dropping. Use trace add
vhost-user-input and show trace to see if vhost is getting the packets.
S
Ravi,
I only have an SSE machine (Ivy Bridge) and DPDK is using the ring mempool as far
as I can tell from gdb. You are using AVX2, which I don't have, so I can't try it
to see whether the Octeontx mempool is the default mempool for AVX2. What do you
put in the dpdk section of the host startup.conf? What is the output f
Ravi,
In order to use dpdk virtio_user, you need 1GB huge pages.
Steven
On 6/5/18, 11:17 AM, "Ravi Kerur" wrote:
Hi Steven,
Connection is the problem. I don't see memory regions setup correctly.
Below are some details. Currently I am using 2MB hugepages.
(1) Create vh
Ravi,
Do this
1. Run VPP native vhost-user in the host. Turn on debug "debug vhost-user on".
2. Bring up the container with the vdev virtio_user commands that you have as
before
3. show vhost-user in the host and verify that it has a shared memory region.
If not, the connection has a problem.
Ravi,
VPP only supports vhost-user in the device mode. In your example, the host in
device mode and the container also in device mode do not make a happy couple.
You need one of them, either the host or the container, running in driver mode
using the dpdk vdev virtio_user command in startup.conf.
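A hedged sketch of the driver-mode side in startup.conf (the socket path is an assumption; it must match the vhost-user socket created by the device-mode side):
dpdk {
  vdev virtio_user0,path=/var/run/vpp/sock1.sock,queues=1
}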