It is likely that you are missing memAccess=’shared’
https://fdio-vpp.readthedocs.io/en/latest/usecases/vhost/xmlexample.html#:~:text=%3Ccell%20id%3D%270%27%20cpus%3D%270%27%20memory%3D%27262144%27%20unit%3D%27KiB%27%20memAccess%3D%27shared%27/%3E
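The relevant fragment of the domain XML from that page looks like this (the cell id and memory size are the example’s own values; adjust them to your topology):
<cpu>
  <numa>
    <cell id='0' cpus='0' memory='262144' unit='KiB' memAccess='shared'/>
  </numa>
</cpu>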
From: on behalf of Benjamin Vandendriessche
Re
I bet you didn’t limit the number of API trace entries. Try limiting the number of API trace entries that VPP keeps by setting nitems to a reasonable value.
api-trace {
on
nitems 65535
}
Steven
From: on behalf of "efimochki...@
Sunil is using the dpdk vmxnet3 driver, so he doesn’t need to load the VPP native vmxnet3 plugin. Use gdb on the dpdk code to see why it returns -22 when VPP adds the NIC to dpdk.
rte_eth_dev_start[port:1, errno:-22]: Unknown error -22
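A minimal way to chase that down, assuming a debug image with gdb attached to the running VPP process (errno -22 is -EINVAL):
gdb -p $(pidof vpp)
(gdb) break rte_eth_dev_start
(gdb) continue
Then re-add the interface and step into the vmxnet3 PMD to see which check fails.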
Steven
From: on behalf of Guangming
Reply-To: "vpp-dev@lists.fd.io"
Da
Did you try
vppctl show log
Steven
From: on behalf of "Tripathi, VinayX"
Reply-To: "vpp-dev@lists.fd.io"
Date: Saturday, February 4, 2023 at 4:19 AM
To: "vpp-dev@lists.fd.io"
Cc: "Ji, Kai"
Subject: Re: [vpp-dev] VPP logging does not logs API calls debug message
Hi Team ,
Any suggestion wou
Type
show lacp details
to see if the member interface that is not forming the bundle receives and
sends LACP PDUs.
Type
show hardware
to see if both member interfaces have the same mac address.
From: on behalf of Eyle Brinkhuis
Reply-To: "vpp-dev@lists.fd.io"
Date: Monday, Dece
In addition, do
1. show hardware
The bond, eth1/0, and eth2/0 should have the same mac address.
2. show lacp details
Check these statistics for the interface that is not forming the bond
Good LACP PDUs received: 13
Bad LACP PDUs received: 0
LACP PDUs sent: 14
last LACP PDU receive
Use “virsh dumpxml” to check the output to see whether you have memAccess=’shared’ as below.
Steven
From: on behalf of suresh vuppala
Reply-To: "vpp-dev@lists.fd.io"
Date: Friday, October 21, 2022 at 5:23 PM
To: "vpp-dev@lists.fd.io"
Subject: Re: [vpp-dev] #vpp-dev No packets generated from Vh
Your Qemu command to launch the VM is likely missing the hugepage or share
option.
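A hedged sketch of the memory arguments vhost-user needs on the qemu command line (the object id, size, and hugepage path are illustrative, and the size must match your -m value):
-object memory-backend-file,id=mem0,size=4096M,mem-path=/dev/hugepages,share=on \
-numa node,memdev=mem0 \
-mem-prealloc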
Can you provide the topology, configurations, and steps to recreate this crash?
Steven
From: on behalf of Chinmaya Aggarwal
Reply-To: "vpp-dev@lists.fd.io"
Date: Wednesday, July 13, 2022 at 4:07 AM
To: "vpp-dev@lists.fd.io"
Subject: Re: [vpp-dev] VPP crashing if we configure srv6 policy with
Please check to make sure the interfaces can ping each other prior to adding them to the bond. Type “show lacp details” to verify that VPP receives LACP PDUs from each side and to check the state machine.
Steven
From: on behalf of Chinmaya Aggarwal
Reply-To: "vpp-dev@lists.fd.io"
Date: Tuesday, June 2
It is related to memoryBacking: either hugepages or the shared access option is missing.
What does your qemu launch command look like?
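If the VM is defined through libvirt rather than launched with raw qemu, the equivalent knobs live under memoryBacking; a sketch, with an illustrative hugepage size:
<memoryBacking>
  <hugepages>
    <page size='2048' unit='KiB'/>
  </hugepages>
  <access mode='shared'/>
</memoryBacking>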
Steven
From: on behalf of Chinmaya Aggarwal
Reply-To: "vpp-dev@lists.fd.io"
Date: Thursday, July 14, 2022 at 3:31 AM
To: "vpp-dev@lists.fd.io"
Subject: [vpp-dev] Memory
Please try a debug image and provide a sane backtrace.
Steven
From: on behalf of Chinmaya Aggarwal
Reply-To: "vpp-dev@lists.fd.io"
Date: Thursday, July 21, 2022 at 4:42 AM
To: "vpp-dev@lists.fd.io"
Subject: [vpp-dev] VPP crashes when lcp host interface is added in network
bridge
Hi,
As per
Pragya,
UU-Flood stands for Unknown Unicast Flooding. It does not flood multicast or
broadcast packets. You need “Flooding” on to flood multicast/broadcast packets.
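For reference, a hedged example that turns on both unknown-unicast and general flooding for a bridge domain (the bd id and interface name are illustrative):
create bridge-domain 10 flood 1 uu-flood 1 forward 1 learn 1
set interface l2 bridge GigabitEthernet1/0/0 10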
Steven
From: on behalf of Pragya Nand Bhagat
Reply-To: "vpp-dev@lists.fd.io"
Date: Monday, August 1, 2022 at 2:59 AM
To: "vpp-
Folks,
In case you don’t already know, there is a tag called Fixes in the commit message which allows one to indicate that the current patch fixes a regression.
See an example usage in https://gerrit.fd.io/r/c/vpp/+/34212
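The footer then looks roughly like this (the subject and sha here are made up):
bonding: fix member link flap handling

Type: fix
Fixes: 1a2b3c4d5e6f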
When you commit a patch which fixes a known regression, please make use of t
when using dpdk bonding.
Steven
From: on behalf of Srikanth Akula
Date: Thursday, September 16, 2021 at 9:48 AM
To: "Steven Luong (sluong)"
Cc: "vpp-dev@lists.fd.io"
Subject: [vpp-dev] DPDK PMD vs native VPP bonding driver
Hi Steven,
We are trying to evaluate bonding driver
Chetan,
I have a patch in gerrit a long time ago and I just rebased it to the latest
master
https://gerrit.fd.io/r/c/vpp/+/30866
Please feel free to test it thoroughly and let me know if you encounter any
problem or not.
Steven
From: on behalf of chetan bhasin
Date: Tuesday, September 14, 2
I set up the same bonding with dot1q and subinterface configuration as given,
but using a tap interface to connect to Linux instead. It works just fine. I believe the crash was due to using a custom plugin which was cloned from the VPP
DPDK plugin to handle the Octeon-tx2 SoC. When bonding gets the buf
Sudhir,
It is a topology/configuration error that we don’t currently handle. Please try
this and report back
https://gerrit.fd.io/r/c/vpp/+/32292
The behavior is that container-1 will form one bonding group with container-2, using either BondEthernet0 or BondEthernet1.
Steven
From: on behalf of
Your commit subject line is missing a component name. The commit comment is
missing “Type:”.
Steven
From: on behalf of "hemant via lists.fd.io"
Reply-To: "hem...@mnkcg.com"
Date: Tuesday, April 27, 2021 at 12:56 PM
To: "vpp-dev@lists.fd.io"
Subject: Re: [vpp-dev] lawful intercept
Newer rev
VPP implements both active and passive modes. The default operation mode is
active. The current setting for the port, active/passive, can be inferred from
the output of show lacp. In the active state column, I see act=1 for all 4
ports.
The output of the show bond command looks like VPP is alre
“make build” from the top of the workspace will generate the debug image for you to run under gdb.
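A typical sequence, assuming the debug binary lands in the usual build-root location (the path can differ between releases):
make build
gdb ./build-root/install-vpp_debug-native/vpp/bin/vpp
(gdb) run -c /etc/vpp/startup.conf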
Steven
From: on behalf of Yaser Azfar
Date: Wednesday, January 6, 2021 at 1:21 PM
To: "Benoit Ganne (bganne)"
Cc: "fdio+vpp-...@groups.io"
Subject: Re: [vpp-dev] VPP Packet Generator and Packet Trace
Additionally, please figure out why carrier is down. It needs to be up.
Intel 82599
carrier down
Steven
From: on behalf of Dave Barach
Date: Sunday, December 20, 2020 at 4:58 AM
To: 'Merve' , "vpp-dev@lists.fd.io"
Subject: Re: [vpp-dev] Blackholed packets after forwarding interface out
show interface displays the interface’s admin state.
show hardware displays the interface’s operational link state.
The link down is likely caused by a memif configuration error. Please check your configuration on both sides to make sure they match. Some tips to debug:
show memif
set logging class m
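As a reference point, a matching pair of memif configurations looks roughly like this; the socket filename and ids are illustrative, and both sides must agree on them, with one master and one slave:
vpp1# create memif socket id 1 filename /run/vpp/memif1.sock
vpp1# create interface memif socket-id 1 id 0 master
vpp2# create memif socket id 1 filename /run/vpp/memif1.sock
vpp2# create interface memif socket-id 1 id 0 slave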
Can you check the output of show hardware? I suspect the link is down for the
corresponding memif interface.
Steven
From: on behalf of "tahir.a.sangli...@gmail.com"
Date: Friday, December 11, 2020 at 1:14 PM
To: "vpp-dev@lists.fd.io"
Subject: [vpp-dev] #vpp #vpp-memif #vppcom
in our applica
Right, it should not crash. With the patch, the VM just refuses to come up
unless we raise the queue support.
Steven
On 12/9/20, 10:24 AM, "Benoit Ganne (bganne)" wrote:
> This argument in your qemu command line,
> queues=16,
> is over our current limit. We support up to 8. I can
Eyle,
This argument in your qemu command line,
queues=16,
is over our current limit. We support up to 8. I can submit an improvement
patch. But I think it will be master only.
Steven
From: Eyle Brinkhuis
Date: Wednesday, December 9, 2020 at 9:24 AM
To: "Steven Luong (sluong)"
C
Eyle,
Can you also show me the qemu command line to bring up the VM? I think it is
asking for more than 16 queues. VPP supports up to 16.
Steven
On 12/9/20, 8:22 AM, "vpp-dev@lists.fd.io on behalf of Benoit Ganne (bganne)
via lists.fd.io" wrote:
Hi Eyle, could you share the associated .
When you create the bond interface using either lacp or xor mode, there is an
option to specify load-balance l2, l23, or l34 which is equivalent to linux
xmit_hash_policy.
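For example (interface names are illustrative):
create bond mode lacp load-balance l34
bond add BondEthernet0 GigabitEthernet1/0/0
bond add BondEthernet0 GigabitEthernet1/0/1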
Steven
From: on behalf of "ashish.sax...@hsc.com"
Date: Sunday, December 6, 2020 at 3:24 AM
To: "vpp-dev@lists.fd.io"
S
Bonding doesn’t care whether the traffic is unicast or multicast. It just hashes the packet header and selects one of the members as the outgoing interface. The only bonding mode in which it replicates packets across all members is when you create the bonding interface to do broadcast, which you didn’
Please use gdb to provide a meaningful backtrace.
Steven
From: on behalf of Eyle Brinkhuis
Date: Wednesday, December 2, 2020 at 5:59 AM
To: "vpp-dev@lists.fd.io"
Subject: [vpp-dev] Vpp crashes with core dump vhost-user interface
Hi all,
In our environment (vpp 20.05.1, ubuntu 18.04.5, networ
You have 17 format tags, but you pass 18 arguments to the unformat function. Is
that intentional?
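For illustration only, a hedged sketch of that kind of mismatch with VPP’s unformat() (the names are made up):
#include <vppinfra/format.h>

/* Two format tags ("%d" twice) but three result pointers are passed;
 * &c is a variadic argument that no tag ever consumes. */
static uword
parse_example (unformat_input_t * input)
{
  u32 a, b, c;
  return unformat (input, "foo %d bar %d", &a, &b, &c);
}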
Steven
From: on behalf of "hemant via lists.fd.io"
Reply-To: "hem...@mnkcg.com"
Date: Friday, November 27, 2020 at 3:52 PM
To: "vpp-dev@lists.fd.io"
Subject: [vpp-dev] unformat fails processing
It is not supported.
From: on behalf of Venkatarao M
Date: Monday, July 20, 2020 at 8:35 AM
To: "vpp-dev@lists.fd.io"
Cc: praveenkumar A S , Lokesh Chimbili
, Mahesh Sivapuram
Subject: [vpp-dev] How to do Bond interface configuration as
fail_over_mac=active in VPP
Hi all,
We are trying bon
ollet)"
Cc: "Steven Luong (sluong)" , "Dave Barach (dbarach)"
, "Kinsella, Ray" , Stephen Hemminger
, "vpp-dev@lists.fd.io" ,
"t...@lists.fd.io" , "Ed Warnicke (eaw)"
Subject: Re: [tsc] [vpp-dev] Replacing master/slave nomenclature
I am in the process of pushing a patch to replace master/slave with
aggregator/member for the bonding.
Steven
On 7/13/20, 4:44 AM, "vpp-dev@lists.fd.io on behalf of Dave Barach via
lists.fd.io"
wrote:
+1, especially since our next release will be supported for a year, and API
name chan
Inline.
From: on behalf of "sadhanakesa...@gmail.com"
Date: Thursday, July 2, 2020 at 9:55 AM
To: "vpp-dev@lists.fd.io"
Subject: [vpp-dev] Userspace tcp between two vms using vhost user interface?
Hi,
there seems like lot of ways to setup userspace tcp with vpp, hoststack , with
and without
ctl show trace
Honestly, I’ve never worked with the Broadcom NIC that you are using.
tx burst function: bnxt_xmit_pkts
rx burst function: bnxt_recv_pkts
May the force be with you.
Steven
From: Manoj Iyer
Date: Monday, June 15, 2020 at 7:48 AM
To: "Dave Barach (dbarach)" , "St
Please correct the subnet mask first.
L3 10.1.1.10/24. <-- system A
inet 10.1.1.11 netmask 255.0.0.0 broadcast 10.255.255.255 <--- system B
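For example, on system B, assuming a Linux host with iproute2 and an illustrative interface name:
ip addr del 10.1.1.11/8 dev eth0
ip addr add 10.1.1.11/24 dev eth0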
Steven
From: on behalf of Manoj Iyer
Date: Friday, June 12, 2020 at 12:28 PM
To: "vpp-dev@lists.fd.io"
Subject: [vpp-dev] Need help with setup.. c
Vmxnet3 is a paravirtualized device. I could be wrong, but it does not appear to support adding a virtual mac address. This error returned from dpdk indicates just that.
Jun 8 12:32:43 bfs-dl360g10-14-vm17 vnet[29645]: vrrp_vr_transition_vmac:120:
Adding virtual MAC address 00:00:5e:00:01:01 on hard
The problem is that the aforementioned commit added a call to vnet_hw_interface_set_flags() in the worker thread, which is not allowed. We are in the process of reverting the commit.
Steven
On 5/29/20, 10:02 AM, "vpp-dev@lists.fd.io on behalf of Elias Rudberg"
wrote:
Hello,
We now g
First, your question has nothing to do with bonding. Whatever you are seeing is true regardless of whether bonding is configured or not.
Show interfaces displays the admin state of the interface. Whenever you set the admin state to up, it is displayed as up regardless of whether the physical carrier is up or down
tapcli was deprecated a few releases ago. It has been replaced by virtio over tap. The new cli is
create tap …
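A hedged example of the replacement (the id and host-side name are illustrative):
create tap id 0 host-if-name vpp-tap0
set interface state tap0 up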
Steven
From: on behalf of "mauricio.solisjr via lists.fd.io"
Reply-To: "mauricio.soli...@tno.nl"
Date: Tuesday, April 14, 2020 at 3:55 AM
To: "vpp-dev@lists.fd.io"
Subject: [vp
Dear Andrew,
I confirm that master has been rescued and reverted from “lockdown” back to “normal”. Please proceed with the “disinfection process” on 19.08 and 20.01 if you will.
Steven
From: Andrew 👽 Yourtchenko
Date: Monday, April 6, 2020 at 8:09 AM
To: "Steven Luong (sluong)"
Cc
master
https://jenkins.fd.io/job/vpp-make-test-docs-verify-master/19049/console
20.01
https://jenkins.fd.io/job/vpp-make-test-docs-verify-2001/61/console
Steven
From: Paul Vinciguerra
Date: Monday, April 6, 2020 at 8:35 AM
To: Paul Vinciguerra
Cc: "Steven Luong (sluong)"
Folks,
It looks like jobs for all branches, 19.08, 20.01, and master, are failing due
to this inspect.py error. Could somebody who is familiar with the issue please
take a look at it?
18:59:12 Exception occurred:
18:59:12 File "/usr/lib/python3.6/inspect.py", line 516, in unwrap
18:59:12
The ping command has been moved to a separate plugin. You probably didn’t have the ping plugin enabled in your startup.conf. Please add the ping plugin to your startup.conf. Something like this will do the trick.
plugins {
…
plugin ping_plugin.so { enable }
}
From: on behalf of "mauricio.solisj
From: on behalf of "ravinder.ya...@hughes.com"
Date: Tuesday, February 25, 2020 at 7:27 AM
To: "vpp-dev@lists.fd.io"
Subject: [vpp-dev] vmxnet3 rx-queue error "vmxnet3 failed to activate dev error
1" #vmxnet3 #ipsec
[Edited Message Follows]
VPP IPsec responder on ESXI VM RHEL 7.6
Is there
It works for me, although I am on an ubuntu 18.04 VM. It is unclear to me whether your problem is strictly related to more than 4 rx-queues or not when you say
“but when i try to associate more than 4 num-rx-queues i get error”
Does it work fine when you reduce the number of rx-queues less tha
So you now know which command in the dpdk section dpdk doesn’t like.
Try adding “log-level debug” in the dpdk section of startup.conf to see if you
can find more helpful messages from dpdk in “vppctl show log” about why it fails to probe the NIC.
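That is, something along these lines in startup.conf:
dpdk {
  log-level debug
  # keep your existing dev / uio-driver lines as they are
}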
Steven
From: on behalf of Gencli Liu <18600640...
It is likely a resource problem – when VPP requests more descriptors and/or
TX/RX queues for the NIC than the firmware has, DPDK fails to initialize the
interface. There are a few ways to figure out what the problem is.
1. Bypass VPP and run testpmd with debug options turned on, something like
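One hedged possibility (the PCI address and queue counts are illustrative; testpmd/EAL flag syntax varies across DPDK releases):
testpmd -l 0-1 -w 0000:13:00.0 --log-level=8 -- -i --rxq=2 --txq=2 --rxd=1024 --txd=1024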
DPDK bonding is no longer supported in 19.08. However, you can use VPP native
bonding to accomplish the same thing.
create bond mode active-backup load-balance l34
set interface state BondEthernet0 up
bond add BondEthernet0 GigabitEthernet1/0/0
bond add BondEthernet0 GigabitEthernet1/0/1
Steven
Using VPP+DPDK virtio to connect with VPP + vhost-user is not actively
maintained. I got it working a couple of years ago by committing some changes to the
DPDK virtio code. Since then, I’ve not been playing with it anymore. Breakage
is possible. I could spend a whole week on it to get it working aga
create interface virtio
Or just use memif interface. That is what it is built for.
Steven
From: on behalf of "mojtaba.eshghi"
Date: Monday, July 29, 2019 at 5:50 AM
To: "vpp-dev@lists.fd.io"
Subject: Re: [vpp-dev] #vpp Connecting a VPP inside a container to a VPP inside
host using vhost-vi
The debug CLI was replaced by
set logging class vhost-user level debug
Use show log to view the messages.
Did you configure 1GB hugepages on the container? It used to be that dpdk virtio required 1GB huge pages. Not sure if it is still the case nowadays. If you use
VPP 19.04 or later, you could try VP
Packet drops due to “no available descriptors” on the vhost-user interface are extremely likely when doing a performance test with qemu’s default vring queue size. You need to specify a vring queue size of 1024 (the default is 256) when
you bring up the VM. The queue size can be specified either via XML
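In a libvirt domain XML the knobs look like this (a sketch; the socket path and mode are illustrative):
<interface type='vhostuser'>
  <source type='unix' path='/var/run/vpp/sock0.sock' mode='server'/>
  <model type='virtio'/>
  <driver name='vhost' rx_queue_size='1024' tx_queue_size='1024'/>
</interface>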
Yes on both counts.
From: on behalf of Zhiyong Yang
Date: Wednesday, June 12, 2019 at 10:33 PM
To: "Yang, Zhiyong" , "Steven Luong (sluong)"
, "vpp-dev@lists.fd.io" , "Carter,
Thomas N"
Cc: "Kinsella, Ray"
Subject: Re: [vpp-dev] some ques
dev@lists.fd.io" , "Steven Luong (sluong)"
, "Carter, Thomas N"
Cc: "Kinsella, Ray"
Subject: some questions about LACP(link bonding mode 4)
Hi Steven and VPP guys,
I’m studying the lacp implementation. and want to know if it is
possible that Numa is consid
Clueless with useless tracebacks. Please hook up gdb and get the complete
human-readable backtrace.
Steven
From: on behalf of Mostafa Salari
Date: Wednesday, May 29, 2019 at 10:24 AM
To: "vpp-dev@lists.fd.io"
Subject: [vpp-dev] vpp received signal SIGSEGV, PC 0x7fe54936bc40, faulting
address
Mostafa,
Vmxnet3 NICs are in the blacklist by default. Please specify the vmxnet3 PCI addresses
in the dpdk section of the startup.conf.
Steven
From: on behalf of Mostafa Salari
Date: Sunday, May 12, 2019 at 4:52 AM
To: "vpp-dev@lists.fd.io"
Subject: [vpp-dev] Vpp 1904 does not recognize vmxnet3 int
Dear Anthony,
Please check the bond interface, using show bond, to see whether the active slaves count is a positive number. Since you didn’t configure LACP on VM2, I
believe you’ve not gotten any active slave in VPP. Your solution is to
configure a bond interface in VM2 using mode 4 (I believe) if
Anthony,
L3 address should be configured on the bond interface, not the slave interface.
If there is a switch in between VPP’s physical NICs and the VM, the switch
should be configured to do the bonding, not the remote VM. Use show bond to
check the bundle is created successfully between VPP an
0:00:00/110.0.0.1
00:00:00:00: error-drop
arp-input: Interface is not IP enabled
Steven
From: on behalf of "saint_sun 孙 via Lists.Fd.Io"
Reply-To: "saint_...@aliyun.com"
Date: Thursday, October 25, 2018 at 12:25 AM
To: "Steven Luong (sluong)"
Cc: &qu
Are you using VPP native bonding driver or DPDK bonding driver? How do you
configure the bonding interface? Please include the configuration and process
to recreate the problem.
Steven
From: on behalf of "saint_sun 孙 via Lists.Fd.Io"
Reply-To: "saint_...@aliyun.com"
Date: Wednesday, October
DPDK is expecting UPT version > 0, and ESXi 6.5/6.7 seems to be returning UPT version 0 when queried, which is not a supported version. I am using
ESXi 6.0 and it is working fine. You could try ESXi 6.0 to see if it helps.
Steven
From: on behalf of truring truring
Date: Monday, October
Stephen,
Fix for vhost
https://gerrit.fd.io/r/14920
I'll take care of vmxnet3 later.
Steven
On 9/20/18, 10:57 AM, "vpp-dev@lists.fd.io on behalf of Stephen Hemminger"
wrote:
Why is there not a simple link on FD.io developer web page to report bugs.
Reporting bugs page talks abo
Chandra,
Would you mind sharing what you found? You’ve piqued my curiosity.
Steven
From: "Chandra Mohan, Vijay Mohan"
Date: Thursday, August 30, 2018 at 10:18 AM
To: "Yichen Wang (yicwang)" , "Steven Luong (sluong)"
, "vpp-dev@lists.fd.io"
Subject: R
Aleksander,
I found the CLI bug. You can easily work around it. Please set the physical
interface state up first in your CLI sequence and it will work.
create bond mode lacp load-balance l23
bond add BondEthernet0 GigabitEtherneta/0/0
bond add BondEthernet0 GigabitEtherneta/0/1
set interface
Aleksander,
This problem should be easy to figure out if you can gdb the code. When the
very first slave interface is added to the bonding group via the command “bond
add BondEthernet0 GigabitEtherneta/0/0/1”,
- The PTX machine schedules the interface with the periodic timer via
lacp_schedule
This configuration is not supported in VPP.
Steven
From: on behalf of Aleksander Djuric
Date: Wednesday, August 15, 2018 at 12:33 AM
To: "vpp-dev@lists.fd.io"
Subject: Re: [vpp-dev] LACP link bonding issue
In addition.. I have tried to configure LACP in dpdk section of vpp
startup.conf.. an
Aleksander,
The problem is that the LACP periodic timer is not running, as shown in your output. I wonder whether lacp-process was launched properly or got stuck. Could you please do show run and check on the health of lacp-process?
periodic timer: not running
Steven
From: on behalf of Aleksander Djuri
I forgot to ask if these 2 boxes’ interfaces are connected back to back or
through a switch.
Steven
From: on behalf of "steven luong via Lists.Fd.Io"
Reply-To: "Steven Luong (sluong)"
Date: Tuesday, August 14, 2018 at 8:24 AM
To: Aleksander Djuric , "vpp-dev@li
Aleksander
It looks like the LACP packets are not going out on the interfaces as expected, or they are being dropped. Additional output and traces are needed to determine why.
Please collect the following from both sides.
clear hardware
clear error
wait a few seconds
show hardware
show error
show lacp d
Vijay,
From the show output, I can’t really tell what your problem is. If you could
provide additional information about your environment, I could try setting it
up and see what’s wrong. Things I need from you are exact VPP version, VPP
configuration, qemu startup command line or the XML startu
>>> argv=)
>>> at /var/venom/rk-vpp-1804/vpp/build-data/../src/vlib/unix/main.c:632
>>> #19 0x0019 in ?? ()
>>> #20 0x00e7 in ?? ()
>>> #21 0x0831 in ?? ()
>>> #22 0x7fb08e9aac00 in ?? () fr
> Jun 5 19:23:35 [18916]: vhost_user_socket_read:1037: if 3 msg
> VHOST_USER_SET_VRING_KICK 0
> Jun 5 19:23:35 [18916]: vhost_user_socket_read:932: if 3 msg
> VHOST_USER_SET_VRING_NUM idx 1 num 256
> Jun 5 19:23:35 [18916]: vhost_user_socket_read:1096: if 3 msg
> VHOST_US
rtup.conf
root@867dc128b544:~/dpdk#
show version (on both host and container).
vpp v18.04-rc2~26-gac2b736~b45 built by root on 34a554d1c194 at Wed
Apr 25 14:53:07 UTC 2018
vpp#
Thanks.
On Tue, Jun 5, 2018 at 9:23 AM, Steven Luong (sluong)
wrote:
> Ravi,
>
th=/var/run/vpp/sock4.sock,mac=52:54:00:00:04:02
but it doesn't work.
Thanks.
On Mon, Jun 4, 2018 at 3:57 PM, Steven Luong (sluong)
wrote:
> Ravi,
>
> VPP only supports vhost-user in the device mode. In your example, the
host, in device mode,
.1.2/24
vpp#
vpp# ping 192.168.1.1
Statistics: 5 sent, 0 received, 100% packet loss
vpp#
On Thu, May 31, 2018 at 2:30 PM, Steven Luong (sluong)
wrote:
> show interface and look for the counter and count columns for the
corresponding interface.
>
pdk vhost interfaces). I have one
question is there a way to read vhost-user statistics counter (Rx/Tx)
on vpp? I only know
'show vhost-user ' and 'show vhost-user descriptors'
which doesn't show any counters.
Thanks.
On Thu, M
irtio-user
(in a container) -- VPP crashes and in DPDK
Thanks.
On Thu, May 31, 2018 at 10:12 AM, Steven Luong (sluong)
wrote:
> Ravi,
>
> I've proved my point -- there is a problem in the way that you invoke
testpmd. The shared memory region t
entries = *((volatile uint16_t *)&vq->avail->idx) -
(gdb) p vq
$1 = (struct vhost_virtqueue *) 0x7fc3ffc84b00
(gdb) p vq->avail
$2 = (struct vring_avail *) 0x7ffbfff98000
(gdb) p *$2
Cannot access memory at address 0x7ffbfff98000
(gdb)
Thanks.
VhostEthernet interface,
try to send some traffic through it to see if it crashes or not.
Steven
On 5/30/18, 9:17 PM, "vpp-dev@lists.fd.io on behalf of Steven Luong (sluong)"
wrote:
Ravi,
I don't think you can declare (2) works fine yet. Please bring up the dpdk
vhost-
local00down
since it is a static mapping I am assuming it should be created, correct?
Thanks.
On Wed, May 30, 2018 at 3:43 PM, Steven Luong (sluong)
wrote:
> Ravi,
>
> First and foremost, get rid of the feature-mask opt
> ||/ Name        Version  Architecture  Description
> +++-===========-========-=============-===================================================
> ii  corekeeper  1.6      amd64         enable core files and report crashes to the system
>
Ravi,
I installed corekeeper and the core file is kept in /var/crash. But why not use
gdb to attach to the VPP process?
To turn on VPP vhost-user debug, type "debug vhost-user on" at the VPP prompt.
Steven
On 5/29/18, 9:10 AM, "vpp-dev@lists.fd.io on behalf of Ravi Kerur"
wrote:
Hi Marco
Yuliang,
Queue size is controlled by the driver, not by the device, which is what VPP is running as.
When you launch the VM via Qemu, at the end of virtio-net-pci, specify
rx_queue_size=xxx and/or tx_queue_size=xxx, where xxx is 512 or 1024.
You need Qemu-2.8.0 to specify rx_queue_size. You need to get Qem
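For example, on the qemu command line (the chardev id, socket path, and netdev id are illustrative):
-chardev socket,id=char0,path=/var/run/vpp/sock0.sock \
-netdev type=vhost-user,id=net0,chardev=char0 \
-device virtio-net-pci,netdev=net0,rx_queue_size=1024,tx_queue_size=1024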
Aris,
From the output of show vhost, the number of memory regions is 0. The connection between VPP vhost-user and the QEMU virtio_net driver does not have a happy ending.
Memory regions (total 0)
Prior to launching the VM, turn on debug using "debug vhost-user on" in VPP to
see more details on the mess
Aris,
There is not enough information here. What is VPP's vhost-user interface
connected to? A VM launched by QEMU or a docker container running VPP with DPDK
virtio_net driver? What do you see in the output from show vhost?
Steven
On 5/9/18, 1:13 PM, "vpp-dev@lists.fd.io on behalf of
arisle
Avi,
Yes, you can. As an example, I have it like this in my startup.conf
dpdk {
vdev virtio_user0,path=/tmp/sock0
}
Steven
On 3/28/18, 8:09 AM, "vpp-dev@lists.fd.io on behalf of Avi Cohen (A)"
wrote:
Hi
In the startup.conf , in the dpdk part we can add pci devices to the
white-li
m-prealloc \
-debugcon file:debug.log -global isa-debugcon.iobase=0x402
-Sara
On Thu, Mar 22, 2018 at 6:10 PM, steven luong
mailto:slu...@cisco.com>> wrote:
Sara,
Iperf3 is not blasting traffic fast enough. You could try specifying multiple
parallel streams using -P. Then, you w
Sara,
Iperf3 is not blasting traffic fast enough. You could try specifying multiple
parallel streams using -P. Then, you will likely encounter vhost-user dropping
packets as you are using the default virtqueue size 256. You’ll need to specify
rx_queue_size=1024 when you launch the VM in the qem
Sara,
Could you please try again after adding the config below to startup.conf? I remember some drivers don’t blast traffic at full speed unless they get a kick for
each packet sent. I am not sure if you are using one of those.
vhost-user {
coalesce-frame 0
}
Steven
From: on behalf of Sara
Wuxp,
The first thing is to validate your claim that nothing is sent from the VM when VPP is restarted. Start tcpdump and confirm that nothing is sent from the VM. I assume it happens all the time as opposed to once in a blue moon.
If that is validated, I’d like to see your qemu startup command to see
Hongjun,
VPP only implements the device side for vhost user. You need the driver side.
It looks like you are trying to run vhost user in the host as a device and
vhost user in the container also as a device. They don’t work that way. Both
qemu and dpdk implement the driver side, which you can use to
Yes, I started adding LACP in VPP native mode, not tied to DPDK, so you can actually enslave additional NICs using the CLI without restarting VPP.
Steven
From: on behalf of "yug...@telincn.com"
Date: Wednesday, January 3, 2018 at 7:41 PM
To: "John Lo (loj)" , otroan
Cc: vpp-dev
Subject: Re: [vp
Avi,
You can tune a number of things to get higher throughput
- Using VPP worker threads (see the startup.conf sketch below)
- Up the queue size to 1024 from the default 256 (done via qemu launch)
- Enable multi-queues (done via qemu launch)
- Up the number of streams when invoking iperf3 using -P
Nevertheless, 10s Mbps is unusual
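For the first item above, a hedged startup.conf fragment (core numbers are illustrative):
cpu {
  main-core 1
  corelist-workers 2-3
}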
Jozef,
Here is the patch. Please help verify it.
https://gerrit.fd.io/r/#/c/9094/
Steven
From: on behalf of "Steven Luong (sluong)"
Date: Friday, October 27, 2017 at 9:00 AM
To: Jozef Glončák , "vpp-dev@lists.fd.io"
Subject: Re: [vpp-dev] VPP API for interface rx
Jan,
You have to use ulimit -n to raise the number of open files if you are going to
create a lot of interfaces.
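For example, in the shell that starts VPP (the limit value is illustrative):
ulimit -n 65535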
Steven
From: on behalf of "Jan Srnicek -X (jsrnicek -
PANTHEON TECHNOLOGIES at Cisco)"
Date: Friday, October 27, 2017 at 3:31 AM
To: vpp-dev
Subject: [vpp-dev] Number of interfac
Jozef,
Sure. I’ll take care of this.
Steven
From: on behalf of Jozef Glončák
Date: Friday, October 27, 2017 at 3:01 AM
To: "vpp-dev@lists.fd.io"
Subject: [vpp-dev] VPP API for interface rx-mode polling
Hello VPP gurus,
I need VPP API for this command
set interface rx-mode polling.
I
, "Steven Luong (sluong)"
, Guo Ruijing , "vpp-dev@lists.fd.io"
Subject: Re: [vpp-dev] vhost-user stable implementation?: VPP native or DPDK
Hi Steven,
we are also looking at using dpdk based virtualethernet interface, so looking
back at this thread i was wondering what
stable/1710 to see if it helps because that is what I
use. I am afraid that I don’t have any more tricks up my sleeve to offer useful
help. Sorry about that.
Steven
From: "Saxena, Nitin"
Date: Thursday, September 28, 2017 at 6:56 PM
To: "Steven Luong (sluong)"
Cc: "