Re: [vpp-dev] unexpected 65535 rx packets on vhost-user interface

2018-03-15 Thread steven luong
Wuxp, The first thing is to validate your claim that nothing is sent from the VM when VPP is restarted. Start tcpdump and confirm that nothing is sent from the VM. I assume it happens all the time as opposed to once in a blue moon. If that is validated, I’d like to see your qemu startup command to see

Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser

2018-03-22 Thread steven luong
Sara, Could you please try again after adding the config below to startup.conf? I remember some drivers don’t blast traffic at full speed unless they get a kick for each packet sent. I am not sure if you are using one of those. vhost-user { coalesce-frame 0 } Steven From: on behalf of Sara

Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser

2018-03-22 Thread steven luong
Sara, Iperf3 is not blasting traffic fast enough. You could try specifying multiple parallel streams using -P. Then you will likely encounter vhost-user dropping packets, as you are using the default virtqueue size of 256. You’ll need to specify rx_queue_size=1024 when you launch the VM in the qem
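For reference, a minimal sketch of the qemu arguments involved (socket path, ids and MAC are illustrative; rx_queue_size and tx_queue_size require a new enough qemu):
  -chardev socket,id=char1,path=/tmp/sock1.sock \
  -netdev type=vhost-user,id=net1,chardev=char1 \
  -device virtio-net-pci,netdev=net1,mac=52:54:00:00:04:01,rx_queue_size=1024,tx_queue_size=1024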

Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser

2018-03-25 Thread steven luong
m-prealloc \ -debugcon file:debug.log -global isa-debugcon.iobase=0x402 -Sara On Thu, Mar 22, 2018 at 6:10 PM, steven luong mailto:slu...@cisco.com>> wrote: Sara, Iperf3 is not blasting traffic fast enough. You could try specifying multiple parallel streams using -P. Then, you w

Re: [vpp-dev] virtio devices add to white list

2018-03-28 Thread steven luong
Avi, Yes, you can. As an example, I have it like this in my startup.conf dpdk { vdev virtio_user0,path=/tmp/sock0 } Steven On 3/28/18, 8:09 AM, "vpp-dev@lists.fd.io on behalf of Avi Cohen (A)" wrote: Hi In the startup.conf , in the dpdk part we can add pci devices to the white-li

Re: [vpp-dev] tx-error on create vhost

2018-05-09 Thread steven luong
Aris, There is not enough information here. What is VPP's vhost-user interface connected to? A VM launched by QEMU or a docker container running VPP with DPDK virtio_net driver? What do you see in the output from show vhost? Steven On 5/9/18, 1:13 PM, "vpp-dev@lists.fd.io on behalf of arisle

Re: [vpp-dev] tx-error on create vhost

2018-05-10 Thread steven luong
Aris, From the output of show vhost, the number of memory regions is 0. The connection between VPP vhost-user and the QEMU virtio_net driver does not have a happy ending. Memory regions (total 0) Prior to launching the VM, turn on debug using "debug vhost-user on" in VPP to see more details on the mess

Re: [vpp-dev] change vhost-user queue size

2018-05-16 Thread steven luong
Yuliang, Queue size is controlled by the driver, not the device which VPP is running as. When you launch the VM via Qemu, at the end of virtio-net-pci, specify rx_queue_size=xxx and/or tx_queue_size=xxx, where xxx is 512 or 1024. You need Qemu-2.8.0 to specify rx_queue_size. You need to get Qem

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-05-29 Thread steven luong
Ravi, I installed corekeeper and the core file is kept in /var/crash. But why not use gdb to attach to the VPP process? To turn on VPP vhost-user debug, type "debug vhost-user on" at the VPP prompt. Steven On 5/29/18, 9:10 AM, "vpp-dev@lists.fd.io on behalf of Ravi Kerur" wrote: Hi Marco
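For example, a minimal gdb session against a running VPP (assuming the process is named vpp):
  sudo gdb -p $(pidof vpp)
  (gdb) thread apply all bt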

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-05-30 Thread steven luong
> Version Architecture Description > ii corekeeper 1.6 amd64 enable core files and report crashes to the system >

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-05-30 Thread steven luong
local00down since it is a static mapping I am assuming it should be created, correct? Thanks. On Wed, May 30, 2018 at 3:43 PM, Steven Luong (sluong) wrote: > Ravi, > > First and foremost, get rid of the feature-mask opt

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-05-31 Thread steven luong
VhostEthernet interface, try to send some traffic through it to see if it crashes or not. Steven On 5/30/18, 9:17 PM, "vpp-dev@lists.fd.io on behalf of Steven Luong (sluong)" wrote: Ravi, I don't think you can declare (2) works fine yet. Please bring up the dpdk vhost-

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-05-31 Thread steven luong
entries = *((volatile uint16_t *)&vq->avail->idx) -
(gdb) p vq
$1 = (struct vhost_virtqueue *) 0x7fc3ffc84b00
(gdb) p vq->avail
$2 = (struct vring_avail *) 0x7ffbfff98000
(gdb) p *$2
Cannot access memory at address 0x7ffbfff98000
(gdb) Thanks.

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-05-31 Thread steven luong
irtio-user (in a container) -- VPP crashes and in DPDK Thanks. On Thu, May 31, 2018 at 10:12 AM, Steven Luong (sluong) wrote: > Ravi, > > I've proved my point -- there is a problem in the way that you invoke testpmd. The shared memory region t

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-05-31 Thread steven luong
pdk vhost interfaces). I have one question: is there a way to read vhost-user statistics counters (Rx/Tx) on vpp? I only know 'show vhost-user' and 'show vhost-user descriptors', which don't show any counters. Thanks. On Thu, M

Re: [vpp-dev] VPP interface parsing error?

2017-06-29 Thread Steven Luong (sluong)
Yichen, We need more information for the problem as described. Does the problem just show up when you configure VPP? If that is the case, please unicast me the config and procedure to recreate the problem. If it is difficult to reproduce, please run the debug image with gdb and I can work with

Re: [vpp-dev] Qemu error vhost-user Failed initializing vhost-user memory map

2017-07-11 Thread Steven Luong (sluong)
Divya, Please try this. I highlighted the difference. ./qemu-system-x86_64 -enable-kvm -m 512 -bios /home/cavium/Downloads/OVMF.fd -smp 4 -cpu host -vga none -nographic -drive file="/home/cavium/Downloads/clear-16300-kvm.img",if=virtio,aio=threads -chardev socket,id=char1,path=/tmp/sock1.sock

[vpp-dev] vhost user API changes for 17.10

2017-08-29 Thread Steven Luong (sluong)
Folks, Please be aware that there is a slight change in the vhost user API to create/delete/modify a vhost user interface for 17.10 from the below patch https://gerrit.fd.io/r/8226 The change is to remove an unused parameter (operation_mode). The operation mode (interrupt/polling/adaptive) is

Re: [vpp-dev] vhost-user stable implementation?: VPP native or DPDK

2017-09-11 Thread Steven Luong (sluong)
If you create the VirtualEthernet interface via the CLI or binary API, it always uses the VPP native vhost-user. If you create the virtual interface via vdev in the dpdk clause in the startup file, it uses dpdk’s vhost-user. The problem with DPDK virtio-user is that it doesn’t comply with the virtio 1.0 spec. I subm

Re: [vpp-dev] vhost-user stable implementation?: VPP native or DPDK

2017-09-15 Thread Steven Luong (sluong)
From: "Saxena, Nitin" Date: Friday, September 15, 2017 at 1:40 AM To: "Steven Luong (sluong)" , "Guo, Ruijing" , "vpp-dev@lists.fd.io" Subject: Re: [vpp-dev] vhost-user stable implementation?: VPP native or DPDK Thanks Steven and Ruijing. I wan

Re: [vpp-dev] vhost-user stable implementation?: VPP native or DPDK

2017-09-15 Thread Steven Luong (sluong)
From: "Ni, Hongjun" Date: Friday, September 15, 2017 at 6:11 AM To: "Saxena, Nitin" , "Steven Luong (sluong)" , "Guo, Ruijing" , "vpp-dev@lists.fd.io" , "Tan, Jianfeng" Subject: RE: [vpp-dev] vhost-user stable implementation?: VPP

Re: [vpp-dev] [vhost-user][armv8][v17.07.01] VM ping not working properly

2017-09-28 Thread Steven Luong (sluong)
also try ipv4 pings. I got it running and pings went through on ThunderX last week with ipv4. I’ll try ipv6 later today to see if the problem is specific to ipv6. Steven From: "Saxena, Nitin" Date: Thursday, September 28, 2017 at 1:23 AM To: "vpp-dev@lists.fd.io" , "St

Re: [vpp-dev] [vhost-user][armv8][v17.07.01] VM ping not working properly

2017-09-28 Thread Steven Luong (sluong)
me I did ping 11.0.0.2 -c 10 and captured 10 packets in trace. Thanks, Nitin From: vpp-dev-boun...@lists.fd.io on behalf of Saxena, Nitin Sent: Thursday, September 28, 2017 9:32 PM To: Steven Luong (sluong) Cc: Narayana, Prasad Athreya; vpp-dev@lists.fd.io Subj

Re: [vpp-dev] [vhost-user][armv8][v17.07.01] VM ping not working properly

2017-09-28 Thread Steven Luong (sluong)
-learn  L2 learn hit updates  17  l2-input  L2 input packets  17  l2-flood  L2 flood packets  DBGvpp# Steven From: "Saxena, Nitin" Date: Thursday, September 28, 2017 at 5:44 PM To: "Steven Luong (sluong)&qu

Re: [vpp-dev] [vhost-user][armv8][v17.07.01] VM ping not working properly

2017-09-29 Thread Steven Luong (sluong)
stable/1710 to see if it helps because that is what I use. I am afraid that I don’t have any more tricks up my sleeve to offer useful help. Sorry about that. Steven From: "Saxena, Nitin" Date: Thursday, September 28, 2017 at 6:56 PM To: "Steven Luong (sluong)" Cc: "

Re: [vpp-dev] vhost-user stable implementation?: VPP native or DPDK

2017-10-17 Thread Steven Luong (sluong)
, "Steven Luong (sluong)" , Guo Ruijing , "vpp-dev@lists.fd.io" Subject: Re: [vpp-dev] vhost-user stable implementation?: VPP native or DPDK Hi Steven, we are also looking at using dpdk based virtualethernet interface, so looking back at this thread i was wondering what

Re: [vpp-dev] VPP API for interface rx-mode polling

2017-10-27 Thread Steven Luong (sluong)
Jozef, Sure. I’ll take care of this. Steven From: on behalf of Jozef Glončák Date: Friday, October 27, 2017 at 3:01 AM To: "vpp-dev@lists.fd.io" Subject: [vpp-dev] VPP API for interface rx-mode polling Hello VPP gurus, I need VPP API for this command set interface rx-mode polling. I

Re: [vpp-dev] Number of interfaces configurable restriction

2017-10-27 Thread Steven Luong (sluong)
Jan, You have to use ulimit -n to raise the number of open files if you are going to create a lot of interfaces. Steven From: on behalf of "Jan Srnicek -X (jsrnicek - PANTHEON TECHNOLOGIES at Cisco)" Date: Friday, October 27, 2017 at 3:31 AM To: vpp-dev Subject: [vpp-dev] Number of interfac
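For example, before launching VPP from the same shell (the limit value is illustrative):
  ulimit -n 65536
For a systemd-managed VPP, setting LimitNOFILE=65536 in the vpp.service unit achieves the same.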

Re: [vpp-dev] VPP API for interface rx-mode polling

2017-10-29 Thread Steven Luong (sluong)
Jozef, Here is the patch. Please help verify it. https://gerrit.fd.io/r/#/c/9094/ Steven From: on behalf of "Steven Luong (sluong)" Date: Friday, October 27, 2017 at 9:00 AM To: Jozef Glončák , "vpp-dev@lists.fd.io" Subject: Re: [vpp-dev] VPP API for interface rx

Re: [vpp-dev] VPP low throughput

2017-11-01 Thread Steven Luong (sluong)
Avi, You can tune a number of things to get higher throughput - use VPP worker threads - raise the queue size to 1024 from the default 256 (done via the qemu launch) - enable multi-queue (done via the qemu launch) - raise the number of streams when invoking iperf3 using -P. Nevertheless, 10s Mbps is unusual
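A sketch of the startup.conf side of that tuning (core numbers are illustrative):
  cpu {
    main-core 1
    corelist-workers 2-3
  }
On the iperf3 side, something like iperf3 -c <server> -P 4 drives multiple parallel streams.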

Re: [vpp-dev] bond

2018-01-03 Thread Steven Luong (sluong)
Yes, I started adding LACP in VPP native mode, not tied to DPDK. So you can actually enslave additional NICs using CLI without restarting VPP. Steven From: on behalf of "yug...@telincn.com" Date: Wednesday, January 3, 2018 at 7:41 PM To: "John Lo (loj)" , otroan Cc: vpp-dev Subject: Re: [vp

Re: [vpp-dev] VPP native vhost-user issue on stable/1710

2018-01-12 Thread Steven Luong (sluong)
Hongjun, VPP only implements the device side for vhost user. You need the driver side. It looks like you are trying to run vhost user in the host as a device and vhost user in the container also as a device. They don’t work that way. Both qemu and dpdk implement the driver side that you can use to

Re: [vpp-dev] Can not see any interface but local0

2016-12-02 Thread Steven Luong (sluong)
I follow this wiki page and it works fine for me. Not sure if you are doing the same or not. https://wiki.fd.io/view/VPP/How_To_Connect_A_PCI_Interface_To_VPP Steven From: mailto:vpp-dev-boun...@lists.fd.io>> on behalf of 王鹏 <15803846...@163.com> Date: Thursday, Dec

Re: [vpp-dev] Error while building a Debian package for VPP

2016-12-02 Thread Steven Luong (sluong)
I ran into the same problem for a while. There have been multiple reports about this breakage recently on the alias. Keith gave a recommendation a week ago, but it didn’t work for me either. Until the real solution/fix is available, I figured out a workaround myself and it is good enough for me. I

[vpp-dev] patch not uploaded in gerrit.

2017-03-06 Thread Steven Luong (sluong)
I revised the patch last night for https://gerrit.fd.io/r/#/c/5566/ But I still don't see it in gerrit this morning. Below is the console output that I got last night. Is gerrit not working or did I not do it correctly? SLUONG-M-612A:vpp sluong$ git commit -s --amend [review/steven_luong/vho

Re: [vpp-dev] Fwd: [Bug 1422534] vdev->vq[i] .used_idx does not consider the right value for vhostuser

2017-03-08 Thread Steven Luong (sluong)
Maxime, Last I heard csit uses qemu 2.5. Could you please check if the aforementioned patch exists in that qemu version? Steven On 3/8/17, 7:36 AM, "vpp-dev-boun...@lists.fd.io on behalf of Maxime Coquelin" wrote: >Tom, > >On 03/08/2017 03:59 PM, Thomas F Herbert wrote: >> Maxime, >> >> The >>

Re: [vpp-dev] Fwd: [Bug 1422534] vdev->vq[i] .used_idx does not consider the right value for vhostuser

2017-03-08 Thread Steven Luong (sluong)
, please point me to read about it. Steven On 3/8/17, 8:10 AM, "Maxime Coquelin" wrote: >Steven, > >On 03/08/2017 04:44 PM, Steven Luong (sluong) wrote: >> Maxime, >> >> Last I heard csit uses qemu 2.5. Could you please check if the >> aforementi

Re: [vpp-dev] Fwd: [Bug 1422534] vdev->vq[i] .used_idx does not consider the right value for vhostuser

2017-03-09 Thread Steven Luong (sluong)
Maxime, Yes, it does. Thank you for digging it out. Steven On 3/9/17, 12:10 AM, "Maxime Coquelin" wrote: >Hi Steven, > >On 03/08/2017 08:10 PM, Steven Luong (sluong) wrote: >> Maxime, >> >> First of all, thank you very much for your detailed analysis a

Re: [vpp-dev] Build broken?

2017-03-13 Thread Steven Luong (sluong)
Ratliff, Try this. From the workspace root: make dpdk-install-dev; cd dpdk; sudo dpkg -i *.deb. After that, build vpp. Steven On 3/13/17, 1:05 PM, "vpp-dev-boun...@lists.fd.io on behalf of Ratliff, Stanley" wrote: >Hello all, > >This afternoon, I cloned a view and ran the build script.

Re: [vpp-dev] Connectivity issue when using vhost-user on 17.04?

2017-04-20 Thread Steven Luong (sluong)
Eric, Something that stands out to me is the feature-mask 0xff when you create the vhost interface. Why do you specify 0xff? While I am checking 17.01 to see how this feature-mask 0xff works, could you please try it without the feature-mask? It does not look like feature-mask 0xff would work in 17.0

Re: [vpp-dev] Connectivity issue when using vhost-user on 17.04?

2017-04-20 Thread Steven Luong (sluong)
Eric, As a first step, please share the output of iperf3 to see how many retransmissions that you have for the run. From VPP, please collect show errors to see if vhost drops anything. As an additional data point for comparison, please also try disabling vhost coalesce to see if you get better

Re: [vpp-dev] Connectivity issue when using vhost-user on 17.04?

2017-04-20 Thread Steven Luong (sluong)
0 MBytes 51.2 Mbits/sec0 sender [ 4] 0.00-10.00 sec 60.3 MBytes 50.6 Mbits/sec receiver iperf Done. - -Original Message----- From: Steven Luong (sluong) [mailto:slu...@cisco.com] S

Re: [vpp-dev] Connectivity issue when using vhost-user on 17.04?

2017-04-20 Thread Steven Luong (sluong)
tation systemd[1]: vpp.service: Service hold-off time over, scheduling restart. Apr 20 17:17:06 eernstworkstation systemd[1]: Stopped vector packet processing engine. -Original Message- From: Steven Luong (sluong) [mailto:slu...@cisco.com] Sent: Th

Re: [vpp-dev] should socket be deleted after vhost-user rm?

2017-04-21 Thread Steven Luong (sluong)
I’ll submit a patch to the master. Steven On 4/21/17, 8:35 AM, "vpp-dev-boun...@lists.fd.io on behalf of Damjan Marion (damarion)" wrote: > On 21 Apr 2017, at 00:02, Ernst, Eric wrote: > > Is it expected that the socket be kept on filesystem after the vhost-user interface

Re: [vpp-dev] vpp received signal SIGSEGV, PC 0x7fe54936bc40, faulting address 0x0

2019-05-29 Thread steven luong via Lists.Fd.Io
Clueless with useless tracebacks. Please hook up gdb and get the complete human-readable backtrace. Steven From: on behalf of Mostafa Salari Date: Wednesday, May 29, 2019 at 10:24 AM To: "vpp-dev@lists.fd.io" Subject: [vpp-dev] vpp received signal SIGSEGV, PC 0x7fe54936bc40, faulting address

Re: [vpp-dev] some questions about LACP(link bonding mode 4)

2019-06-12 Thread steven luong via Lists.Fd.Io
dev@lists.fd.io" , "Steven Luong (sluong)" , "Carter, Thomas N" Cc: "Kinsella, Ray" Subject: some questions about LACP(link bonding mode 4) Hi Steven and VPP guys, I’m studying the lacp implementation. and want to know if it is possible that Numa is consid

Re: [vpp-dev] some questions about LACP(link bonding mode 4)

2019-06-13 Thread steven luong via Lists.Fd.Io
Yes on both counts. From: on behalf of Zhiyong Yang Date: Wednesday, June 12, 2019 at 10:33 PM To: "Yang, Zhiyong" , "Steven Luong (sluong)" , "vpp-dev@lists.fd.io" , "Carter, Thomas N" Cc: "Kinsella, Ray" Subject: Re: [vpp-dev] some ques

Re: [vpp-dev] Many "tx packet drops (no available descriptors)" #vpp

2019-07-11 Thread steven luong via Lists.Fd.Io
Packet drops due to “no available descriptors” on a vhost-user interface are extremely likely when doing performance tests with qemu’s default vring queue size. You need to specify a vring queue size of 1024 (the default is 256) when you bring up the VM. The queue size can be specified either via XML
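For a libvirt-managed VM, the queue sizes go on the interface's driver element, roughly like this (a sketch assuming a reasonably recent libvirt/qemu; everything except the two queue-size values is illustrative):
  <interface type='vhostuser'>
    <source type='unix' path='/tmp/sock1.sock' mode='server'/>
    <model type='virtio'/>
    <driver name='vhost' rx_queue_size='1024' tx_queue_size='1024'/>
  </interface>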

Re: [vpp-dev] #vpp Connecting a VPP inside a container to a VPP inside host using vhost-virtio-user interfaces

2019-07-28 Thread steven luong via Lists.Fd.Io
The debug CLI was replaced by set logging class vhost-user level debug. Use show log to view the messages. Did you configure 1GB huge pages on the container? It used to be that dpdk virtio requires 1GB huge pages. Not sure if that is still the case nowadays. If you use VPP 19.04 or later, you could try VP

Re: [vpp-dev] #vpp Connecting a VPP inside a container to a VPP inside host using vhost-virtio-user interfaces

2019-07-29 Thread steven luong via Lists.Fd.Io
create interface virtio Or just use memif interface. That is what it is built for. Steven From: on behalf of "mojtaba.eshghi" Date: Monday, July 29, 2019 at 5:50 AM To: "vpp-dev@lists.fd.io" Subject: Re: [vpp-dev] #vpp Connecting a VPP inside a container to a VPP inside host using vhost-vi

Re: [vpp-dev] VPP with DPDK vhost ---- VPP with DPDK virtio

2019-08-12 Thread steven luong via Lists.Fd.Io
Using VPP+DPDK virtio to connect with VPP + vhost-user is not actively maintained. I got it working a couple of years ago by committing some changes to the DPDK virtio code. Since then, I’ve not been playing with it anymore. Breakage is possible. I could spend a whole week on it to get it working aga

Re: [vpp-dev] #vpp #bond How to config bond mode in vpp?

2020-01-03 Thread steven luong via Lists.Fd.Io
DPDK bonding is no longer supported in 19.08. However, you can use VPP native bonding to accomplish the same thing. create bond mode active-backup load-balance l34 set interface state BondEthernet0 up bond add BondEthernet0 GigabitEthernet1/0/0 bond add BondEthernet0 GigabitEthernet1/0/1 Steven
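Laid out one command per line, with a check at the end (interface names are illustrative):
  create bond mode active-backup load-balance l34
  set interface state BondEthernet0 up
  bond add BondEthernet0 GigabitEthernet1/0/0
  bond add BondEthernet0 GigabitEthernet1/0/1
  show bond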

Re: [vpp-dev] "vppctl show int" no NIC (just local0) #vpp #vnet

2020-01-06 Thread steven luong via Lists.Fd.Io
It is likely a resource problem – when VPP requests more descriptors and/or TX/RX queues for the NIC than the firmware has, DPDK fails to initialize the interface. There are a few ways to figure out what the problem is. 1. Bypass VPP and run testpmd with debug options turned on, something like
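A rough idea of that kind of testpmd invocation (a sketch only; PCI address, cores, and queue/descriptor counts are illustrative, and the exact binary name and flags vary by DPDK version):
  dpdk-testpmd -l 0-1 -a 0000:02:00.0 --log-level=8 -- -i --rxq=2 --txq=2 --rxd=1024 --txd=1024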

Re: [vpp-dev] "vppctl show int" no NIC (just local0) #vpp #vnet

2020-01-07 Thread steven luong via Lists.Fd.Io
So you now know what command in the dpdk section that dpdk doesn’t like. Try adding “log-level debug” in the dpdk section of startup.conf to see if you can find more helpful messages in “vppctl show log” from dpdk why it fails to probe the NIC. Steven From: on behalf of Gencli Liu <18600640...
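That is, something along these lines in startup.conf (the PCI address is illustrative), then check vppctl show log after restarting VPP:
  dpdk {
    dev 0000:02:00.0
    log-level debug
  }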

Re: [vpp-dev] vmxnet3 rx-queue error "vmxnet3 failed to activate dev error 1" #vmxnet3 #ipsec

2020-02-24 Thread steven luong via Lists.Fd.Io
It works for me, although I am on an ubuntu 1804 VM. Your statement is unclear to me as to whether your problem is strictly related to more than 4 rx-queues when you say “but when i try to associate more than 4 num-rx-queues i get error”. Does it work fine when you reduce the number of rx-queues to less tha

Re: [vpp-dev] vmxnet3 rx-queue error "vmxnet3 failed to activate dev error 1" #vmxnet3 #ipsec

2020-02-25 Thread steven luong via Lists.Fd.Io
From: on behalf of "ravinder.ya...@hughes.com" Date: Tuesday, February 25, 2020 at 7:27 AM To: "vpp-dev@lists.fd.io" Subject: [vpp-dev] vmxnet3 rx-queue error "vmxnet3 failed to activate dev error 1" #vmxnet3 #ipsec [Edited Message Follows] VPP IPsec responder on ESXI VM RHEL 7.6 Is there

Re: [vpp-dev] Unknown input `ping' #vpp

2020-03-26 Thread steven luong via Lists.Fd.Io
Ping command has been moved to a separate plugin. You probably didn’t have the ping plugin enabled in your startup.conf. Please add the ping plugin to your startup.conf. Something like this will do the trick. plugins { … plugin ping_plugin.so { enable } } From: on behalf of "mauricio.solisj

[vpp-dev] Jobs are failing due to inspect.py

2020-04-06 Thread steven luong via lists.fd.io
Folks, It looks like jobs for all branches, 19.08, 20.01, and master, are failing due to this inspect.py error. Could somebody who is familiar with the issue please take a look at it? 18:59:12 Exception occurred: 18:59:12 File "/usr/lib/python3.6/inspect.py", line 516, in unwrap 18:59:12

Re: [vpp-dev] Jobs are failing due to inspect.py

2020-04-06 Thread steven luong via lists.fd.io
master https://jenkins.fd.io/job/vpp-make-test-docs-verify-master/19049/console 20.01 https://jenkins.fd.io/job/vpp-make-test-docs-verify-2001/61/console Steven From: Paul Vinciguerra Date: Monday, April 6, 2020 at 8:35 AM To: Paul Vinciguerra Cc: "Steven Luong (sluong)"

Re: [vpp-dev] Jobs are failing due to inspect.py

2020-04-06 Thread steven luong via lists.fd.io
Dear Andrew, I confirm that master has been rescued and reverted from “lockdown” back to “normal”. Please proceed with the “disinfection process” on 19.08 and 20.01 if you will. Steven From: Andrew 👽 Yourtchenko Date: Monday, April 6, 2020 at 8:09 AM To: "Steven Luong (sluong)" Cc

Re: [vpp-dev] Unknown input `tap connect` #vpp

2020-04-14 Thread steven luong via lists.fd.io
tapcli was deprecated a few releases ago. It has been replaced by virtio over tap. The new cli is create tap … Steven From: on behalf of "mauricio.solisjr via lists.fd.io" Reply-To: "mauricio.soli...@tno.nl" Date: Tuesday, April 14, 2020 at 3:55 AM To: "vpp-dev@lists.fd.io" Subject: [vp
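A minimal example of the replacement CLI (id, names and addresses are illustrative):
  create tap id 0 host-if-name vpp-tap0 host-ip4-addr 10.10.1.1/24
  set interface state tap0 up
  set interface ip address tap0 10.10.1.2/24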

Re: [vpp-dev] Query regarding bonding in Vpp 19.08

2020-04-20 Thread steven luong via lists.fd.io
First, your question has nothing to do with bonding. Whatever you are seeing is true regardless of whether bonding is configured or not. Show interfaces displays the admin state of the interface. Whenever you set the admin state to up, it is displayed as up regardless of whether the physical carrier is up or down

Re: [vpp-dev] worker thread deadlock for current master branch, started with commit "bonding: adjust link state based on active slaves"

2020-05-29 Thread steven luong via lists.fd.io
The problem is the aforementioned commit added a call to invoke vnet_hw_interface_set_flags() in the worker thread. That is no can do. We are in the process of reverting the commit. Steven On 5/29/20, 10:02 AM, "vpp-dev@lists.fd.io on behalf of Elias Rudberg" wrote: Hello, We now g

Re: [vpp-dev] Unable to ping vpp interface from outside after configuring vrrp on vpp interface and making it as Master

2020-06-08 Thread steven luong via lists.fd.io
Vmxnet3 is a paravirtualized device. I could be wrong, but it does not appear to support adding a virtual mac address. This error returned from dpdk indicates just that. Jun 8 12:32:43 bfs-dl360g10-14-vm17 vnet[29645]: vrrp_vr_transition_vmac:120: Adding virtual MAC address 00:00:5e:00:01:01 on hard

Re: [vpp-dev] Need help with setup.. cannot ping a VPP interface.

2020-06-12 Thread steven luong via lists.fd.io
Please correct the subnet mask first. L3 10.1.1.10/24 <-- system A; inet 10.1.1.11 netmask 255.0.0.0 broadcast 10.255.255.255 <--- system B Steven From: on behalf of Manoj Iyer Date: Friday, June 12, 2020 at 12:28 PM To: "vpp-dev@lists.fd.io" Subject: [vpp-dev] Need help with setup.. c
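On system B that means re-addressing with a /24 to match, for example (the interface name is illustrative):
  ip addr flush dev eth0
  ip addr add 10.1.1.11/24 dev eth0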

Re: [vpp-dev] Need help with setup.. cannot ping a VPP interface.

2020-06-15 Thread steven luong via lists.fd.io
ctl show trace Honestly, I’ve never worked with a Broadcom NIC which you are using. tx burst function: bnxt_xmit_pkts rx burst function: bnxt_recv_pkts May the force be with you. Steven From: Manoj Iyer Date: Monday, June 15, 2020 at 7:48 AM To: "Dave Barach (dbarach)" , "St

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-06-04 Thread steven luong via Lists.Fd.Io
.1.2/24 vpp# vpp# ping 192.168.1.1 Statistics: 5 sent, 0 received, 100% packet loss vpp# On Thu, May 31, 2018 at 2:30 PM, Steven Luong (sluong) wrote: > show interface and look for the counter and count columns for the corresponding interface. >

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-06-05 Thread steven luong via Lists.Fd.Io
th=/var/run/vpp/sock4.sock,mac=52:54:00:00:04:02 but it doesn't work. Thanks. On Mon, Jun 4, 2018 at 3:57 PM, Steven Luong (sluong) wrote: > Ravi, > > VPP only supports vhost-user in the device mode. In your example, the host, in device mode,

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-06-05 Thread steven luong via Lists.Fd.Io
rtup.conf root@867dc128b544:~/dpdk# show version (on both host and container). vpp v18.04-rc2~26-gac2b736~b45 built by root on 34a554d1c194 at Wed Apr 25 14:53:07 UTC 2018 vpp# Thanks. On Tue, Jun 5, 2018 at 9:23 AM, Steven Luong (sluong) wrote: > Ravi, >

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-06-05 Thread steven luong via Lists.Fd.Io
> Jun 5 19:23:35 [18916]: vhost_user_socket_read:1037: if 3 msg > VHOST_USER_SET_VRING_KICK 0 > Jun 5 19:23:35 [18916]: vhost_user_socket_read:932: if 3 msg > VHOST_USER_SET_VRING_NUM idx 1 num 256 > Jun 5 19:23:35 [18916]: vhost_user_socket_read:1096: if 3 msg > VHOST_US

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-06-06 Thread steven luong via Lists.Fd.Io
>>> argv=) >>> at /var/venom/rk-vpp-1804/vpp/build-data/../src/vlib/unix/main.c:632 >>> #19 0x0019 in ?? () >>> #20 0x00e7 in ?? () >>> #21 0x0831 in ?? () >>> #22 0x7fb08e9aac00 in ?? () fr

Re: [vpp-dev] tx-drops with vhost-user interface

2018-08-06 Thread steven luong via Lists.Fd.Io
Vijay, From the show output, I can’t really tell what your problem is. If you could provide additional information about your environment, I could try setting it up and see what’s wrong. Things I need from you are exact VPP version, VPP configuration, qemu startup command line or the XML startu

Re: [vpp-dev] LACP link bonding issue

2018-08-14 Thread steven luong via Lists.Fd.Io
Aleksander, It looks like the LACP packets are not going out on the interfaces as expected, or are being dropped. Additional output and traces are needed to determine why. Please collect the following from both sides: clear hardware, clear error, wait a few seconds, then show hardware, show error, show lacp d

Re: [vpp-dev] LACP link bonding issue

2018-08-14 Thread steven luong via Lists.Fd.Io
I forgot to ask if these 2 boxes’ interfaces are connected back to back or through a switch. Steven From: on behalf of "steven luong via Lists.Fd.Io" Reply-To: "Steven Luong (sluong)" Date: Tuesday, August 14, 2018 at 8:24 AM To: Aleksander Djuric , "vpp-dev@li

Re: [vpp-dev] LACP link bonding issue

2018-08-15 Thread steven luong via Lists.Fd.Io
Aleksander, The problem is that the LACP periodic timer is not running, as shown in your output. I wonder if lacp-process was launched properly or got stuck. Could you please do show run and check on the health of lacp-process? periodic timer: not running Steven From: on behalf of Aleksander Djuri

Re: [vpp-dev] LACP link bonding issue

2018-08-15 Thread steven luong via Lists.Fd.Io
This configuration is not supported in VPP. Steven From: on behalf of Aleksander Djuric Date: Wednesday, August 15, 2018 at 12:33 AM To: "vpp-dev@lists.fd.io" Subject: Re: [vpp-dev] LACP link bonding issue In addition.. I have tried to configure LACP in dpdk section of vpp startup.conf.. an

Re: [vpp-dev] LACP link bonding issue

2018-08-16 Thread steven luong via Lists.Fd.Io
Aleksander, This problem should be easy to figure out if you can gdb the code. When the very first slave interface is added to the bonding group via the command “bond add BondEthernet0 GigabitEtherneta/0/0/1”, - The PTX machine schedules the interface with the periodic timer via lacp_schedule

Re: [vpp-dev] LACP link bonding issue

2018-08-17 Thread steven luong via Lists.Fd.Io
Aleksander, I found the CLI bug. You can easily work around it. Please set the physical interface state up first in your CLI sequence and it will work. create bond mode lacp load-balance l23 bond add BondEthernet0 GigabitEtherneta/0/0 bond add BondEthernet0 GigabitEtherneta/0/1 set interface
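Put together, the working order is roughly as follows (interface names are illustrative; the point is that the physical interfaces go up before anything else):
  set interface state GigabitEtherneta/0/0 up
  set interface state GigabitEtherneta/0/1 up
  create bond mode lacp load-balance l23
  bond add BondEthernet0 GigabitEtherneta/0/0
  bond add BondEthernet0 GigabitEtherneta/0/1
  set interface state BondEthernet0 up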

Re: [**EXTERNAL**] Re: [vpp-dev] tx-drops with vhost-user interface

2018-08-30 Thread steven luong via Lists.Fd.Io
Chandra, Would you mind sharing what you found? You’ve piqued my curiosity. Steven From: "Chandra Mohan, Vijay Mohan" Date: Thursday, August 30, 2018 at 10:18 AM To: "Yichen Wang (yicwang)" , "Steven Luong (sluong)" , "vpp-dev@lists.fd.io" Subject: R

Re: [vpp-dev] [BUG] vhost-user display bug

2018-09-20 Thread steven luong via Lists.Fd.Io
Stephen, Fix for vhost https://gerrit.fd.io/r/14920 I'll take care of vmxnet3 later. Steven On 9/20/18, 10:57 AM, "vpp-dev@lists.fd.io on behalf of Stephen Hemminger" wrote: Why is there not a simple link on FD.io developer web page to report bugs. Reporting bugs page talks abo

Re: [vpp-dev] "Incompatible UPT version” error when running VPP v18.01 with DPDK v17.11 on VMWare with VMXNET3 interface ,ESXI Version 6.5/6.7

2018-10-01 Thread steven luong via Lists.Fd.Io
DPDK is expecting UPT version > 0 and ESXi 6.5/6.7 seems to be returning UPT version 0 when queried, which is not a supported version. I am using ESXi 6.0 and it is working fine. You could try ESXi 6.0 to see if it helps. Steven From: on behalf of truring truring Date: Monday, October

Re: [vpp-dev] Problem switching a bonded interface from L2 to L3 mode

2018-10-24 Thread steven luong via Lists.Fd.Io
Are you using VPP native bonding driver or DPDK bonding driver? How do you configure the bonding interface? Please include the configuration and process to recreate the problem. Steven From: on behalf of "saint_sun 孙 via Lists.Fd.Io" Reply-To: "saint_...@aliyun.com" Date: Wednesday, October

Re: [vpp-dev] Problem switching a bonded interface from L2 to L3 mode

2018-10-25 Thread steven luong via Lists.Fd.Io
0:00:00/110.0.0.1 00:00:00:00: error-drop arp-input: Interface is not IP enabled Steven From: on behalf of "saint_sun 孙 via Lists.Fd.Io" Reply-To: "saint_...@aliyun.com" Date: Thursday, October 25, 2018 at 12:25 AM To: "Steven Luong (sluong)" Cc: &qu

Re: [vpp-dev] Bond interface won't respond ping #vnet #vpp

2019-02-19 Thread steven luong via Lists.Fd.Io
Anthony, L3 address should be configured on the bond interface, not the slave interface. If there is a switch in between VPP’s physical NICs and the VM, the switch should be configured to do the bonding, not the remote VM. Use show bond to check the bundle is created successfully between VPP an

Re: [vpp-dev] Bond interface won't respond ping #vnet #vpp

2019-02-23 Thread steven luong via Lists.Fd.Io
Dear Anthony, Please check the bond interface to see if the active slaves count has any positive number using show bond. Since you didn’t configure LACP on VM2, I believe you’ve not gotten any active slave in VPP. Your solution is to configure a bond interface in VM2 using mode 4 (I believe) if

Re: [vpp-dev] Vpp 1904 does not recognize vmxnet3 interfaces

2019-05-12 Thread steven luong via Lists.Fd.Io
Mostafa, Vmxnet3 NICs are in the blacklist by default. Please specify the vmxnet3 PCI addresses in the dpdk section of startup.conf. Steven From: on behalf of Mostafa Salari Date: Sunday, May 12, 2019 at 4:52 AM To: "vpp-dev@lists.fd.io" Subject: [vpp-dev] Vpp 1904 does not recognize vmxnet3 int
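For example, in startup.conf (the PCI addresses are illustrative):
  dpdk {
    dev 0000:0b:00.0
    dev 0000:13:00.0
  }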

Re: [vpp-dev] Userspace tcp between two vms using vhost user interface?

2020-07-02 Thread steven luong via lists.fd.io
Inline. From: on behalf of "sadhanakesa...@gmail.com" Date: Thursday, July 2, 2020 at 9:55 AM To: "vpp-dev@lists.fd.io" Subject: [vpp-dev] Userspace tcp between two vms using vhost user interface? Hi, there seems like lot of ways to setup userspace tcp with vpp, hoststack , with and without

Re: [vpp-dev] Replacing master/slave nomenclature

2020-07-14 Thread steven luong via lists.fd.io
I am in the process of pushing a patch to replace master/slave with aggregator/member for the bonding. Steven On 7/13/20, 4:44 AM, "vpp-dev@lists.fd.io on behalf of Dave Barach via lists.fd.io" wrote: +1, especially since our next release will be supported for a year, and API name chan

Re: [tsc] [vpp-dev] Replacing master/slave nomenclature

2020-07-14 Thread steven luong via lists.fd.io
ollet)" Cc: "Steven Luong (sluong)" , "Dave Barach (dbarach)" , "Kinsella, Ray" , Stephen Hemminger , "vpp-dev@lists.fd.io" , "t...@lists.fd.io" , "Ed Warnicke (eaw)" Subject: Re: [tsc] [vpp-dev] Replacing master/slave nomenclature

Re: [vpp-dev] How to do Bond interface configuration as fail_over_mac=active in VPP

2020-07-20 Thread steven luong via lists.fd.io
It is not supported. From: on behalf of Venkatarao M Date: Monday, July 20, 2020 at 8:35 AM To: "vpp-dev@lists.fd.io" Cc: praveenkumar A S , Lokesh Chimbili , Mahesh Sivapuram Subject: [vpp-dev] How to do Bond interface configuration as fail_over_mac=active in VPP Hi all, We are trying bon

Re: [vpp-dev] unformat fails processing > 3 variables

2020-11-27 Thread steven luong via lists.fd.io
You have 17 format tags, but you pass 18 arguments to the unformat function. Is that intentional? Steven From: on behalf of "hemant via lists.fd.io" Reply-To: "hem...@mnkcg.com" Date: Friday, November 27, 2020 at 3:52 PM To: "vpp-dev@lists.fd.io" Subject: [vpp-dev] unformat fails processing

Re: [vpp-dev] Vpp crashes with core dump vhost-user interface

2020-12-02 Thread steven luong via lists.fd.io
Please use gdb to provide a meaningful backtrace. Steven From: on behalf of Eyle Brinkhuis Date: Wednesday, December 2, 2020 at 5:59 AM To: "vpp-dev@lists.fd.io" Subject: [vpp-dev] Vpp crashes with core dump vhost-user interface Hi all, In our environment (vpp 20.05.1, ubuntu 18.04.5, networ

Re: [vpp-dev] multicast traffic getting duplicated in lacp bond mode

2020-12-02 Thread steven luong via lists.fd.io
Bonding does not care whether the traffic is unicast or multicast. It just hashes the packet header and selects one of the members as the outgoing interface. The only bonding mode in which it replicates packets across all members is when you create the bonding interface to do broadcast, which you didn’

Re: [vpp-dev] multicast traffic getting duplicated in lacp bond mode

2020-12-06 Thread steven luong via lists.fd.io
When you create the bond interface using either lacp or xor mode, there is an option to specify load-balance l2, l23, or l34, which is equivalent to the linux xmit_hash_policy. Steven From: on behalf of "ashish.sax...@hsc.com" Date: Sunday, December 6, 2020 at 3:24 AM To: "vpp-dev@lists.fd.io" S
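For example (the load-balance keyword is the part that matters; interface names are illustrative):
  create bond mode lacp load-balance l34
  bond add BondEthernet0 GigabitEthernet1/0/0
  bond add BondEthernet0 GigabitEthernet1/0/1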

Re: [vpp-dev] Vpp crashes with core dump vhost-user interface

2020-12-09 Thread steven luong via lists.fd.io
Eyle, Can you also show me the qemu command line to bring up the VM? I think it is asking for more than 16 queues. VPP supports up to 16. Steven On 12/9/20, 8:22 AM, "vpp-dev@lists.fd.io on behalf of Benoit Ganne (bganne) via lists.fd.io" wrote: Hi Eyle, could you share the associated .

Re: [vpp-dev] Vpp crashes with core dump vhost-user interface

2020-12-09 Thread steven luong via lists.fd.io
Eyle, This argument in your qemu command line, queues=16, is over our current limit. We support up to 8. I can submit an improvement patch. But I think it will be master only. Steven From: Eyle Brinkhuis Date: Wednesday, December 9, 2020 at 9:24 AM To: "Steven Luong (sluong)" C

Re: [vpp-dev] Vpp crashes with core dump vhost-user interface

2020-12-09 Thread steven luong via lists.fd.io
Right, it should not crash. With the patch, the VM just refuses to come up unless we raise the queue support. Steven On 12/9/20, 10:24 AM, "Benoit Ganne (bganne)" wrote: > This argument in your qemu command line, > queues=16, > is over our current limit. We support up to 8. I can

Re: [vpp-dev] #vpp #vpp-memif #vppcom

2020-12-11 Thread steven luong via lists.fd.io
Can you check the output of show hardware? I suspect the link is down for the corresponding memif interface. Steven From: on behalf of "tahir.a.sangli...@gmail.com" Date: Friday, December 11, 2020 at 1:14 PM To: "vpp-dev@lists.fd.io" Subject: [vpp-dev] #vpp #vpp-memif #vppcom in our applica

Re: [vpp-dev] Multicast packets sent via memif when rule says to forward through another interface

2020-12-17 Thread steven luong via lists.fd.io
show interface displays the interface’s admin state. show hardware displays the interface’s operational link state. The link down is likely caused by a memif configuration error. Please check your configuration on both sides to make sure they match. Some tips to debug: show memif, set logging class m
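For example (assuming memif is the registered logging class name):
  show memif
  set logging class memif level debug
  show log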

Re: [vpp-dev] Blackholed packets after forwarding interface output

2020-12-20 Thread steven luong via lists.fd.io
Additionally, please figure out why carrier is down. It needs to be up. Intel 82599 carrier down Steven From: on behalf of Dave Barach Date: Sunday, December 20, 2020 at 4:58 AM To: 'Merve' , "vpp-dev@lists.fd.io" Subject: Re: [vpp-dev] Blackholed packets after forwarding interface out
