Re: [vpp-dev] Vhost-user interface not working

2023-03-02 Thread steven luong via lists.fd.io
It is likely that you are missing memAccess='shared'. See https://fdio-vpp.readthedocs.io/en/latest/usecases/vhost/xmlexample.html, where the NUMA cell reads: <cell id='0' cpus='0' memory='262144' unit='KiB' memAccess='shared'/>
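For reference, a minimal sketch of where that cell sits in the libvirt domain XML (memory size and cpu ids are illustrative):

  <cpu>
    <numa>
      <cell id='0' cpus='0' memory='262144' unit='KiB' memAccess='shared'/>
    </numa>
  </cpu>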

Re: [vpp-dev] VPP Policer API Memory Leak

2023-02-21 Thread steven luong via lists.fd.io
I bet you didn’t limit the number of API trace entries. Try limiting the number of API trace entries that VPP keeps with nitems, and give it a reasonable number: api-trace { on nitems 65535 } Steven

Re: [vpp-dev] VPP Hardware Interface Output show Carrier Down

2023-02-17 Thread steven luong via lists.fd.io
Sunil is using the DPDK vmxnet3 driver, so he doesn’t need to load the VPP native vmxnet3 plugin. Use gdb on the DPDK code to see why it returns -22 when VPP adds the NIC to DPDK: rte_eth_dev_start[port:1, errno:-22]: Unknown error -22 Steven

Re: [vpp-dev] VPP logging does not logs API calls debug message

2023-02-04 Thread steven luong via lists.fd.io
Did you try vppctl show log? Steven

Re: [vpp-dev] LACP issues w/ cdma/connectX 6

2022-12-05 Thread steven luong via lists.fd.io
Type show lacp details to see if the member interface that is not forming the bundle receives and sends LACP PDUs. Type show hardware to see if both member interfaces have the same MAC address.

Re: [vpp-dev] LACP bonding not working with RDMA driver

2022-11-15 Thread steven luong via lists.fd.io
In addition, do:
1. show hardware: the bond, eth1/0, and eth2/0 should have the same MAC address.
2. show lacp details: check these statistics for the interface that is not forming the bond: Good LACP PDUs received: 13, Bad LACP PDUs received: 0, LACP PDUs sent: 14, last LACP PDU receive…

Re: [vpp-dev] #vpp-dev No packets generated from Vhost user interface

2022-10-24 Thread steven luong via lists.fd.io
Use “virsh dumpxml” and check the output to see whether you have memAccess='shared', as below. Steven
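A quick way to check, assuming a libvirt-managed VM (the domain name is a placeholder):

  virsh dumpxml <domain> | grep memAccess
      <cell id='0' cpus='0' memory='262144' unit='KiB' memAccess='shared'/>

If grep prints nothing, the NUMA cell is missing memAccess='shared'.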

Re: [vpp-dev] #vpp-dev No packets generated from Vhost user interface

2022-10-21 Thread steven luong via lists.fd.io
Your Qemu command to launch the VM is likely missing the hugepage or share option.
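A minimal sketch of the relevant qemu options for a vhost-user NIC, assuming hugepage-backed shared guest memory (paths, sizes, and ids are illustrative):

  qemu-system-x86_64 ... \
    -object memory-backend-file,id=mem0,size=4G,mem-path=/dev/hugepages,share=on \
    -numa node,memdev=mem0 \
    -chardev socket,id=chr0,path=/var/run/vpp/sock1.sock \
    -netdev type=vhost-user,id=net0,chardev=chr0 \
    -device virtio-net-pci,netdev=net0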

Re: [vpp-dev] VPP crashing if we configure srv6 policy with five sids in the sidlist

2022-08-05 Thread steven luong via lists.fd.io
Can you provide the topology, configurations, and steps to recreate this crash? Steven

Re: [vpp-dev] LACP bond interface not working

2022-08-04 Thread steven luong via lists.fd.io
Please check that the interfaces can ping each other prior to adding them to the bond. Type “show lacp details” to verify that VPP receives LACP PDUs from each side and to check the state machine. Steven

Re: [vpp-dev] Memory region shows empty for vhost interface

2022-08-04 Thread steven luong via lists.fd.io
It is related to memoryBacking, missing hugepages, or a missing shared option. What does your qemu launch command look like? Steven
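For a libvirt-managed VM, a sketch of the corresponding domain-XML section (assuming hugepages are already mounted on the host):

  <memoryBacking>
    <hugepages/>
    <access mode='shared'/>
  </memoryBacking>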

Re: [vpp-dev] VPP crashes when lcp host interface is added in network bridge

2022-08-04 Thread steven luong via lists.fd.io
Please try a debug image and provide a sane backtrace. Steven

Re: [vpp-dev] Bridge-domain function and usage.

2022-08-01 Thread steven luong via lists.fd.io
Pragya, UU-Flood stands for Unknown Unicast Flooding; it does not flood multicast or broadcast packets. You need “Flooding” on to flood multicast/broadcast packets. Steven
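A sketch of the relevant CLI, with both flood and uu-flood enabled (bridge-domain id and interface name are illustrative):

  create bridge-domain 1 flood 1 uu-flood 1 forward 1 learn 1
  set interface l2 bridge GigabitEthernet0/8/0 1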

[vpp-dev] Please include Fixes: tag for regression fix

2021-11-02 Thread steven luong via lists.fd.io
Folks, in case you don’t already know, there is a tag called Fixes in the commit message which allows one to specify that the current patch fixes a regression. See an example usage in https://gerrit.fd.io/r/c/vpp/+/34212 When you commit a patch which fixes a known regression, please make use of the tag.
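A sketch of a commit message using the tag (component, subject, commit id, and author below are hypothetical placeholders):

  bonding: fix packet drop after member link flap

  Type: fix
  Fixes: 1234567890abcdef

  Signed-off-by: Jane Doe <jane@example.com>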

Re: [vpp-dev] DPDK PMD vs native VPP bonding driver

2021-09-16 Thread steven luong via lists.fd.io
…when using dpdk bonding. Steven

Re: [vpp-dev] fail_over_mac=1 (Active) Bonding

2021-09-14 Thread steven luong via lists.fd.io
Chetan, I have a patch in gerrit from a long time ago, and I just rebased it to the latest master: https://gerrit.fd.io/r/c/vpp/+/30866 Please feel free to test it thoroughly and let me know whether you encounter any problems. Steven

Re: [vpp-dev] vnet bonding crashes - need some suggestions to narrow down

2021-05-22 Thread steven luong via lists.fd.io
I set up the same bonding with dot1q and subinterface configuration as given, but using tap interface to connect to Linux instead. It works just fine. I believe the crash was due to using a custom plugin which is cloned from VPP DPDK plugin to handle the Octeon-tx2 SoC. When bonding gets the buf

Re: [vpp-dev] observing issue with LACP port selection logic

2021-05-12 Thread steven luong via lists.fd.io
Sudhir, it is an erroneous topology/configuration that we don’t currently handle. Please try this patch and report back: https://gerrit.fd.io/r/c/vpp/+/32292 The behavior is that container-1 will form one bonding group with container-2, with either BondEthernet0 or BondEthernet1. Steven

Re: [vpp-dev] lawful intercept

2021-04-27 Thread steven luong via lists.fd.io
Your commit subject line is missing a component name. The commit comment is missing “Type:”. Steven

Re: [vpp-dev] LACP Troubleshooting

2021-02-16 Thread steven luong via lists.fd.io
VPP implements both active and passive modes. The default operation mode is active. The current setting for the port, active/passive, can be inferred from the output of show lacp. In the active state column, I see act=1 for all 4 ports. The output of the show bond command looks like VPP is alre

Re: [vpp-dev] VPP Packet Generator and Packet Tracer

2021-01-06 Thread steven luong via lists.fd.io
“make build” from the top of the workspace will generate the debug image for you to run under gdb. Steven
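A sketch of the workflow, assuming a standard tree (the install path can vary by release):

  make build                # builds the debug image
  gdb ./build-root/install-vpp_debug-native/vpp/bin/vpp
  (gdb) run -c /etc/vpp/startup.conf

Alternatively, “make run” launches the debug image directly.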

Re: [vpp-dev] Blackholed packets after forwarding interface output

2020-12-20 Thread steven luong via lists.fd.io
Additionally, please figure out why the carrier is down; it needs to be up. (Intel 82599: carrier down) Steven

Re: [vpp-dev] Multicast packets sent via memif when rule says to forward through another interface

2020-12-17 Thread steven luong via lists.fd.io
show interface displays the interface’s admin state; show hardware displays the interface’s operational link state. The link down is likely caused by a memif configuration error. Please check your configuration on both sides to make sure they match. Some tips to debug: show memif, set logging class m…
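A sketch of the debug commands hinted at above (the truncated snippet most likely continues as shown; treat the logging class name as an assumption):

  show memif
  set logging class memif level debug
  show log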

Re: [vpp-dev] #vpp #vpp-memif #vppcom

2020-12-11 Thread steven luong via lists.fd.io
Can you check the output of show hardware? I suspect the link is down for the corresponding memif interface. Steven

Re: [vpp-dev] Vpp crashes with core dump vhost-user interface

2020-12-09 Thread steven luong via lists.fd.io
Right, it should not crash. With the patch, the VM just refuses to come up unless we raise the queue support. Steven

Re: [vpp-dev] Vpp crashes with core dump vhost-user interface

2020-12-09 Thread steven luong via lists.fd.io
Eyle, this argument in your qemu command line, queues=16, is over our current limit; we support up to 8. I can submit an improvement patch, but I think it will be master only. Steven
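A sketch of qemu options within the supported limit (socket path and ids are illustrative; the virtio convention is vectors = 2*queues + 2):

  -chardev socket,id=chr0,path=/var/run/vpp/sock1.sock \
  -netdev type=vhost-user,id=net0,chardev=chr0,queues=8 \
  -device virtio-net-pci,netdev=net0,mq=on,vectors=18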

Re: [vpp-dev] Vpp crashes with core dump vhost-user interface

2020-12-09 Thread steven luong via lists.fd.io
Eyle, Can you also show me the qemu command line to bring up the VM? I think it is asking for more than 16 queues. VPP supports up to 16. Steven

Re: [vpp-dev] multicast traffic getting duplicated in lacp bond mode

2020-12-06 Thread steven luong via lists.fd.io
When you create the bond interface using either lacp or xor mode, there is an option to specify load-balance l2, l23, or l34, which is equivalent to Linux xmit_hash_policy. Steven

Re: [vpp-dev] multicast traffic getting duplicated in lacp bond mode

2020-12-02 Thread steven luong via lists.fd.io
Bonding doesn’t care whether the traffic is unicast or multicast; it just hashes the packet header and selects one of the members as the outgoing interface. The only bonding mode in which it replicates packets across all members is broadcast, which you didn’…

Re: [vpp-dev] Vpp crashes with core dump vhost-user interface

2020-12-02 Thread steven luong via lists.fd.io
Please use gdb to provide a meaningful backtrace. Steven

Re: [vpp-dev] unformat fails processing > 3 variables

2020-11-27 Thread steven luong via lists.fd.io
You have 17 format tags, but you pass 18 arguments to the unformat function. Is that intentional? Steven

Re: [vpp-dev] How to do Bond interface configuration as fail_over_mac=active in VPP

2020-07-20 Thread steven luong via lists.fd.io
It is not supported.

Re: [tsc] [vpp-dev] Replacing master/slave nomenclature

2020-07-14 Thread steven luong via lists.fd.io
ollet)" Cc: "Steven Luong (sluong)" , "Dave Barach (dbarach)" , "Kinsella, Ray" , Stephen Hemminger , "vpp-dev@lists.fd.io" , "t...@lists.fd.io" , "Ed Warnicke (eaw)" Subject: Re: [tsc] [vpp-dev] Replacing master/slave nomenclature

Re: [vpp-dev] Replacing master/slave nomenclature

2020-07-14 Thread steven luong via lists.fd.io
I am in the process of pushing a patch to replace master/slave with aggregator/member for the bonding. Steven

Re: [vpp-dev] Userspace tcp between two vms using vhost user interface?

2020-07-02 Thread steven luong via lists.fd.io
Inline. (Replying to: “Hi, there seems to be a lot of ways to set up userspace tcp with vpp: hoststack, with and without…”)

Re: [vpp-dev] Need help with setup.. cannot ping a VPP interface.

2020-06-15 Thread steven luong via lists.fd.io
…ctl show trace. Honestly, I’ve never worked with the Broadcom NIC you are using. tx burst function: bnxt_xmit_pkts; rx burst function: bnxt_recv_pkts. May the force be with you. Steven

Re: [vpp-dev] Need help with setup.. cannot ping a VPP interface.

2020-06-12 Thread steven luong via lists.fd.io
Please correct the subnet mask first.
L3 10.1.1.10/24 <-- system A
inet 10.1.1.11 netmask 255.0.0.0 broadcast 10.255.255.255 <-- system B
Steven
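A sketch of the fix on system B, assuming Linux and an illustrative interface name (255.0.0.0 is /8; it should be /24 to match system A):

  ip addr del 10.1.1.11/8 dev eth1
  ip addr add 10.1.1.11/24 dev eth1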

Re: [vpp-dev] Unable to ping vpp interface from outside after configuring vrrp on vpp interface and making it as Master

2020-06-08 Thread steven luong via lists.fd.io
Vmxnet3 is a paravirtualized device. I could be wrong, but it does not appear to support adding a virtual MAC address. This error returned from DPDK indicates just that: Jun 8 12:32:43 bfs-dl360g10-14-vm17 vnet[29645]: vrrp_vr_transition_vmac:120: Adding virtual MAC address 00:00:5e:00:01:01 on hard…

Re: [vpp-dev] worker thread deadlock for current master branch, started with commit "bonding: adjust link state based on active slaves"

2020-05-29 Thread steven luong via lists.fd.io
The problem is the aforementioned commit added a call to invoke vnet_hw_interface_set_flags() in the worker thread. That is no can do. We are in the process of reverting the commit. Steven

Re: [vpp-dev] Query regarding bonding in Vpp 19.08

2020-04-20 Thread steven luong via lists.fd.io
First, your question has nothing to do with bonding. Whatever you are seeing is true regardless of whether bonding is configured or not. show interface displays the admin state of the interface. Whenever you set the admin state to up, it is displayed as up regardless of whether the physical carrier is up or down…

Re: [vpp-dev] Unknown input `tap connect` #vpp

2020-04-14 Thread steven luong via lists.fd.io
tapcli was deprecated a few releases ago. It has been replaced by virtio over tap; the new CLI is create tap … Steven
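A minimal sketch of the replacement CLI (the id and host interface name are illustrative):

  create tap id 0 host-if-name vpp-tap0
  set interface state tap0 up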

Re: [vpp-dev] Jobs are failing due to inspect.py

2020-04-06 Thread steven luong via lists.fd.io
Dear Andrew, I confirm that master has been rescued and reverted from “lockdown” back to “normal”. Please proceed with the “disinfection process” on 19.08 and 20.01 if you will. Steven

Re: [vpp-dev] Jobs are failing due to inspect.py

2020-04-06 Thread steven luong via lists.fd.io
master: https://jenkins.fd.io/job/vpp-make-test-docs-verify-master/19049/console
20.01: https://jenkins.fd.io/job/vpp-make-test-docs-verify-2001/61/console
Steven

[vpp-dev] Jobs are failing due to inspect.py

2020-04-06 Thread steven luong via lists.fd.io
Folks, it looks like jobs for all branches, 19.08, 20.01, and master, are failing due to this inspect.py error. Could somebody who is familiar with the issue please take a look at it?
18:59:12 Exception occurred:
18:59:12 File "/usr/lib/python3.6/inspect.py", line 516, in unwrap
18:59:12 …

Re: [vpp-dev] Unknown input `ping' #vpp

2020-03-26 Thread steven luong via Lists.Fd.Io
The ping command has been moved to a separate plugin. You probably didn’t have the ping plugin enabled in your startup.conf. Please add the ping plugin to your startup.conf; something like this will do the trick: plugins { … plugin ping_plugin.so { enable } }

Re: [vpp-dev] vmxnet3 rx-queue error "vmxnet3 failed to activate dev error 1" #vmxnet3 #ipsec

2020-02-25 Thread steven luong via Lists.Fd.Io
From: on behalf of "ravinder.ya...@hughes.com" Date: Tuesday, February 25, 2020 at 7:27 AM To: "vpp-dev@lists.fd.io" Subject: [vpp-dev] vmxnet3 rx-queue error "vmxnet3 failed to activate dev error 1" #vmxnet3 #ipsec [Edited Message Follows] VPP IPsec responder on ESXI VM RHEL 7.6 Is there

Re: [vpp-dev] vmxnet3 rx-queue error "vmxnet3 failed to activate dev error 1" #vmxnet3 #ipsec

2020-02-24 Thread steven luong via Lists.Fd.Io
It works for me, although I am on an Ubuntu 18.04 VM. Your statement is unclear to me: is your problem strictly related to more than 4 rx-queues, when you say “but when i try to associate more than 4 num-rx-queues i get error”? Does it work fine when you reduce the number of rx-queues to less tha…

Re: [vpp-dev] "vppctl show int" no NIC (just local0) #vpp #vnet

2020-01-07 Thread steven luong via Lists.Fd.Io
So you now know which command in the dpdk section dpdk doesn’t like. Try adding “log-level debug” to the dpdk section of startup.conf to see if you can find more helpful messages from dpdk in “vppctl show log” about why it fails to probe the NIC. Steven
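A sketch of the suggested startup.conf change (only the added line matters; the rest of your dpdk section stays as-is):

  dpdk {
    log-level debug
  }

Then inspect the DPDK messages with: vppctl show log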

Re: [vpp-dev] "vppctl show int" no NIC (just local0) #vpp #vnet

2020-01-06 Thread steven luong via Lists.Fd.Io
It is likely a resource problem: when VPP requests more descriptors and/or TX/RX queues for the NIC than the firmware has, DPDK fails to initialize the interface. There are a few ways to figure out what the problem is. 1. Bypass VPP and run testpmd with debug options turned on, something like…

Re: [vpp-dev] #vpp #bond How to config bond mode in vpp?

2020-01-03 Thread steven luong via Lists.Fd.Io
DPDK bonding is no longer supported in 19.08. However, you can use VPP native bonding to accomplish the same thing:
create bond mode active-backup load-balance l34
set interface state BondEthernet0 up
bond add BondEthernet0 GigabitEthernet1/0/0
bond add BondEthernet0 GigabitEthernet1/0/1
Steven

Re: [vpp-dev] VPP with DPDK vhost ---- VPP with DPDK virtio

2019-08-12 Thread steven luong via Lists.Fd.Io
Using VPP+DPDK virtio to connect with VPP vhost-user is not actively maintained. I got it working a couple of years ago by committing some changes to the DPDK virtio code. Since then, I’ve not been playing with it anymore, so breakage is possible. I could spend a whole week on it to get it working aga…

Re: [vpp-dev] #vpp Connecting a VPP inside a container to a VPP inside host using vhost-virtio-user interfaces

2019-07-29 Thread steven luong via Lists.Fd.Io
create interface virtio. Or just use a memif interface; that is what it is built for. Steven

Re: [vpp-dev] #vpp Connecting a VPP inside a container to a VPP inside host using vhost-virtio-user interfaces

2019-07-28 Thread steven luong via Lists.Fd.Io
The debug CLI was replaced by “set logging class vhost-user level debug”; use show log to view the messages. Did you configure 1GB hugepages on the container? It used to be that dpdk virtio required 1GB hugepages; I am not sure whether that is still the case nowadays. If you use VPP 19.04 or later, you could try VP…

Re: [vpp-dev] Many "tx packet drops (no available descriptors)" #vpp

2019-07-11 Thread steven luong via Lists.Fd.Io
Packet drops due to “no available descriptors” on a vhost-user interface are extremely likely when doing a performance test with qemu’s default vring queue size. You need to specify a vring queue size of 1024 (the default is 256) when you bring up the VM. The queue size can be specified either via XML…
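A sketch of the qemu form (the libvirt XML form sets equivalent attributes on the interface's driver element); values and the netdev id are illustrative:

  -device virtio-net-pci,netdev=net0,rx_queue_size=1024,tx_queue_size=1024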

Re: [vpp-dev] some questions about LACP(link bonding mode 4)

2019-06-13 Thread steven luong via Lists.Fd.Io
Yes on both counts.

Re: [vpp-dev] some questions about LACP(link bonding mode 4)

2019-06-12 Thread steven luong via Lists.Fd.Io
dev@lists.fd.io" , "Steven Luong (sluong)" , "Carter, Thomas N" Cc: "Kinsella, Ray" Subject: some questions about LACP(link bonding mode 4) Hi Steven and VPP guys, I’m studying the lacp implementation. and want to know if it is possible that Numa is consid

Re: [vpp-dev] vpp received signal SIGSEGV, PC 0x7fe54936bc40, faulting address 0x0

2019-05-29 Thread steven luong via Lists.Fd.Io
Clueless with useless tracebacks. Please hook up gdb and get the complete human-readable backtrace. Steven

Re: [vpp-dev] Vpp 1904 does not recognize vmxnet3 interfaces

2019-05-12 Thread steven luong via Lists.Fd.Io
Mostafa, vmxnet3 NICs are in the blacklist by default. Please specify the vmxnet3 PCI addresses in the dpdk section of startup.conf. Steven
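A sketch of the startup.conf change (the PCI addresses are illustrative; use the ones reported by lspci on your VM):

  dpdk {
    dev 0000:0b:00.0
    dev 0000:13:00.0
  }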

Re: [vpp-dev] Bond interface won't respond ping #vnet #vpp

2019-02-23 Thread steven luong via Lists.Fd.Io
Dear Anthony, please use show bond to check whether the active slaves count on the bond interface is a positive number. Since you didn’t configure LACP on VM2, I believe you haven’t gotten any active slave in VPP. Your solution is to configure a bond interface in VM2 using mode 4 (I believe) if…

Re: [vpp-dev] Bond interface won't respond ping #vnet #vpp

2019-02-19 Thread steven luong via Lists.Fd.Io
Anthony, the L3 address should be configured on the bond interface, not the slave interface. If there is a switch between VPP’s physical NICs and the VM, the switch should be configured to do the bonding, not the remote VM. Use show bond to check that the bundle is created successfully between VPP an…

Re: [vpp-dev] Problem switching a bonded interface from L2 to L3 mode

2018-10-25 Thread steven luong via Lists.Fd.Io
…error-drop arp-input: Interface is not IP enabled Steven

Re: [vpp-dev] Problem switching a bonded interface from L2 to L3 mode

2018-10-24 Thread steven luong via Lists.Fd.Io
Are you using the VPP native bonding driver or the DPDK bonding driver? How did you configure the bonding interface? Please include the configuration and the process to recreate the problem. Steven

Re: [vpp-dev] "Incompatible UPT version” error when running VPP v18.01 with DPDK v17.11 on VMWare with VMXNET3 interface ,ESXI Version 6.5/6.7

2018-10-01 Thread steven luong via Lists.Fd.Io
DPDK expects UPT version > 0, but ESXi 6.5/6.7 seems to return UPT version 0 when queried, which is not a supported version. I am using ESXi 6.0 and it is working fine; you could try ESXi 6.0 to see if it helps. Steven

Re: [vpp-dev] [BUG] vhost-user display bug

2018-09-20 Thread steven luong via Lists.Fd.Io
Stephen, fix for vhost: https://gerrit.fd.io/r/14920 I'll take care of vmxnet3 later. Steven

Re: [**EXTERNAL**] Re: [vpp-dev] tx-drops with vhost-user interface

2018-08-30 Thread steven luong via Lists.Fd.Io
Chandra, Would you mind sharing what you found? You’ve piqued my curiosity. Steven

Re: [vpp-dev] LACP link bonding issue

2018-08-17 Thread steven luong via Lists.Fd.Io
Aleksander, I found the CLI bug, and you can easily work around it: set the physical interface state up first in your CLI sequence and it will work.
create bond mode lacp load-balance l23
bond add BondEthernet0 GigabitEtherneta/0/0
bond add BondEthernet0 GigabitEtherneta/0/1
set interface …

Re: [vpp-dev] LACP link bonding issue

2018-08-16 Thread steven luong via Lists.Fd.Io
Aleksander, this problem should be easy to figure out if you can gdb the code. When the very first slave interface is added to the bonding group via the command “bond add BondEthernet0 GigabitEtherneta/0/0/1”:
- The PTX machine schedules the interface with the periodic timer via lacp_schedule…

Re: [vpp-dev] LACP link bonding issue

2018-08-15 Thread steven luong via Lists.Fd.Io
This configuration (LACP configured in the dpdk section of vpp startup.conf) is not supported in VPP. Steven

Re: [vpp-dev] LACP link bonding issue

2018-08-15 Thread steven luong via Lists.Fd.Io
Aleksander, the problem is that the LACP periodic timer is not running, as shown in your output: “periodic timer: not running”. I wonder if lacp-process was launched properly or got stuck. Could you please do show run and check on the health of lacp-process? Steven

Re: [vpp-dev] LACP link bonding issue

2018-08-14 Thread steven luong via Lists.Fd.Io
I forgot to ask if these 2 boxes’ interfaces are connected back to back or through a switch. Steven

Re: [vpp-dev] LACP link bonding issue

2018-08-14 Thread steven luong via Lists.Fd.Io
Aleksander, it looks like the LACP packets are not going out on the interfaces as expected, or are being dropped. Additional output and traces are needed to determine why. Please collect the following from both sides:
clear hardware
clear error
(wait a few seconds)
show hardware
show error
show lacp d…

Re: [vpp-dev] tx-drops with vhost-user interface

2018-08-06 Thread steven luong via Lists.Fd.Io
Vijay, From the show output, I can’t really tell what your problem is. If you could provide additional information about your environment, I could try setting it up and see what’s wrong. Things I need from you are exact VPP version, VPP configuration, qemu startup command line or the XML startu

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-06-06 Thread steven luong via Lists.Fd.Io
>>> argv=…) at /var/venom/rk-vpp-1804/vpp/build-data/../src/vlib/unix/main.c:632
>>> #19 0x0019 in ?? ()
>>> #20 0x00e7 in ?? ()
>>> #21 0x0831 in ?? ()
>>> #22 0x7fb08e9aac00 in ?? () fr…

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-06-05 Thread steven luong via Lists.Fd.Io
> Jun 5 19:23:35 [18916]: vhost_user_socket_read:1037: if 3 msg VHOST_USER_SET_VRING_KICK 0
> Jun 5 19:23:35 [18916]: vhost_user_socket_read:932: if 3 msg VHOST_USER_SET_VRING_NUM idx 1 num 256
> Jun 5 19:23:35 [18916]: vhost_user_socket_read:1096: if 3 msg VHOST_US…

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-06-05 Thread steven luong via Lists.Fd.Io
…rtup.conf root@867dc128b544:~/dpdk# show version (on both host and container). vpp v18.04-rc2~26-gac2b736~b45 built by root on 34a554d1c194 at Wed Apr 25 14:53:07 UTC 2018 vpp# Thanks.

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-06-05 Thread steven luong via Lists.Fd.Io
th=/var/run/vpp/sock4.sock,mac=52:54:00:00:04:02 but it doesn't work. Thanks. On Mon, Jun 4, 2018 at 3:57 PM, Steven Luong (sluong) wrote: > Ravi, > > VPP only supports vhost-user in the device mode. In your example, the host, in device mode,

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-06-04 Thread steven luong via Lists.Fd.Io
.1.2/24 vpp# vpp# ping 192.168.1.1 Statistics: 5 sent, 0 received, 100% packet loss vpp# On Thu, May 31, 2018 at 2:30 PM, Steven Luong (sluong) wrote: > show interface and look for the counter and count columns for the corresponding interface. >

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-05-31 Thread steven luong
…pdk vhost interfaces). I have one question: is there a way to read the vhost-user statistics counters (Rx/Tx) on vpp? I only know 'show vhost-user' and 'show vhost-user descriptors', which don't show any counters. Thanks.

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-05-31 Thread steven luong
irtio-user (in a container) -- VPP crashes and in DPDK Thanks. On Thu, May 31, 2018 at 10:12 AM, Steven Luong (sluong) wrote: > Ravi, > > I've proved my point -- there is a problem in the way that you invoke testpmd. The shared memory region t

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-05-31 Thread steven luong
entries = *((volatile uint16_t *)&vq->avail->idx) - …
(gdb) p vq
$1 = (struct vhost_virtqueue *) 0x7fc3ffc84b00
(gdb) p vq->avail
$2 = (struct vring_avail *) 0x7ffbfff98000
(gdb) p *$2
Cannot access memory at address 0x7ffbfff98000
(gdb) Thanks.

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-05-31 Thread steven luong
VhostEthernet interface, try to send some traffic through it to see if it crashes or not. Steven On 5/30/18, 9:17 PM, "vpp-dev@lists.fd.io on behalf of Steven Luong (sluong)" wrote: Ravi, I don't think you can declare (2) works fine yet. Please bring up the dpdk vhost-

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-05-30 Thread steven luong
…local0 0 down. Since it is a static mapping, I am assuming it should be created, correct? Thanks.

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-05-30 Thread steven luong
(truncated dpkg -l output) > ii corekeeper 1.6 amd64: enable core files and report crashes to the system

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-05-29 Thread steven luong
Ravi, I installed corekeeper and the core file is kept in /var/crash. But why not use gdb to attach to the VPP process? To turn on VPP vhost-user debug, type "debug vhost-user on" at the VPP prompt. Steven

Re: [vpp-dev] change vhost-user queue size

2018-05-16 Thread steven luong
Yuliang, the queue size is controlled by the driver, not by the device, which is what VPP runs as. When you launch the VM via Qemu, at the end of the virtio-net-pci option, specify rx_queue_size=xxx and/or tx_queue_size=xxx, where xxx is 512 or 1024. You need Qemu 2.8.0 to specify rx_queue_size. You need to get Qem…

Re: [vpp-dev] tx-error on create vhost

2018-05-10 Thread steven luong
Aris, from the output of show vhost, the memory region count is 0 (“Memory regions (total 0)”), so the connection between VPP vhost-user and the QEMU virtio_net driver did not have a happy ending. Prior to launching the VM, turn on debug using "debug vhost-user on" in VPP to see more details on the mess…

Re: [vpp-dev] tx-error on create vhost

2018-05-09 Thread steven luong
Aris, there is not enough information here. What is VPP's vhost-user interface connected to? A VM launched by QEMU, or a docker container running VPP with the DPDK virtio_net driver? What do you see in the output from show vhost? Steven

Re: [vpp-dev] virtio devices add to white list

2018-03-28 Thread steven luong
Avi, yes, you can. As an example, I have it like this in my startup.conf:
dpdk {
  vdev virtio_user0,path=/tmp/sock0
}
Steven

Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser

2018-03-25 Thread steven luong
…m-prealloc \ -debugcon file:debug.log -global isa-debugcon.iobase=0x402 -Sara

Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser

2018-03-22 Thread steven luong
Sara, Iperf3 is not blasting traffic fast enough; you could try specifying multiple parallel streams using -P. Then you will likely encounter vhost-user dropping packets, as you are using the default virtqueue size of 256. You’ll need to specify rx_queue_size=1024 when you launch the VM in the qem…

Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser

2018-03-22 Thread steven luong
Sara, could you please try again after adding the config below to startup.conf? I remember some drivers don’t blast traffic at full speed unless they get a kick for each packet sent; I am not sure if you are using one of those. vhost-user { coalesce-frame 0 } Steven

Re: [vpp-dev] unexpected 65535 rx packets on vhost-user interface

2018-03-15 Thread steven luong
Wuxp, the first thing is to validate your claim that nothing is sent from the VM when VPP is restarted: start tcpdump and watch whether anything is sent from the VM. I assume it happens all the time, as opposed to once in a blue moon. If that is validated, I’d like to see your qemu startup command to see…

Re: [vpp-dev] VPP native vhost-user issue on stable/1710

2018-01-12 Thread Steven Luong (sluong)
Hongjun, VPP only implements the device side for vhost-user; you need the driver side. It looks like you are trying to run vhost-user in the host as a device and vhost-user in the container also as a device. It doesn’t work that way. Both qemu and dpdk implement the driver side, which you can use to…

Re: [vpp-dev] bond

2018-01-03 Thread Steven Luong (sluong)
Yes, I started adding LACP in VPP native mode, not tied to DPDK, so you can actually enslave additional NICs using the CLI without restarting VPP. Steven

Re: [vpp-dev] VPP low throughput

2017-11-01 Thread Steven Luong (sluong)
Avi, you can tune a number of things to get higher throughput (see the startup.conf sketch below for the worker-thread part):
- Use VPP worker threads
- Up the queue size to 1024 from the default 256 (done via qemu launch)
- Enable multi-queue (done via qemu launch)
- Up the number of streams when invoking iperf3 using -P
Nevertheless, 10s of Mbps is unusual…
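A sketch of the worker-thread part in startup.conf (core numbers are illustrative and host-dependent):

  cpu {
    main-core 0
    corelist-workers 1-2
  }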

Re: [vpp-dev] VPP API for interface rx-mode polling

2017-10-29 Thread Steven Luong (sluong)
Jozef, here is the patch; please help verify it: https://gerrit.fd.io/r/#/c/9094/ Steven

Re: [vpp-dev] Number of interfaces configurable restriction

2017-10-27 Thread Steven Luong (sluong)
Jan, you have to use ulimit -n to raise the number of open files if you are going to create a lot of interfaces. Steven

Re: [vpp-dev] VPP API for interface rx-mode polling

2017-10-27 Thread Steven Luong (sluong)
Jozef, Sure. I’ll take care of this. Steven

Re: [vpp-dev] vhost-user stable implementation?: VPP native or DPDK

2017-10-17 Thread Steven Luong (sluong)
, "Steven Luong (sluong)" , Guo Ruijing , "vpp-dev@lists.fd.io" Subject: Re: [vpp-dev] vhost-user stable implementation?: VPP native or DPDK Hi Steven, we are also looking at using dpdk based virtualethernet interface, so looking back at this thread i was wondering what

Re: [vpp-dev] [vhost-user][armv8][v17.07.01] VM ping not working properly

2017-09-29 Thread Steven Luong (sluong)
…stable/1710 to see if it helps, because that is what I use. I am afraid that I don’t have any more tricks up my sleeve to offer useful help. Sorry about that. Steven
