> -----Original Message-----
> From: David Marchand <david.march...@redhat.com>
> Sent: 12 May 2021 23:20
> To: Wang, Yinan <yinan.w...@intel.com>
> Cc: dev@dpdk.org; maxime.coque...@redhat.com;
> olivier.m...@6wind.com; f...@sysclose.org; i.maxim...@ovn.org; Xia,
> Chenbo <chenbo....@intel.com>; Stokes, Ian <ian.sto...@intel.com>;
> sta...@dpdk.org; Jijiang Liu <jijiang....@intel.com>; Yuanhan Liu
> <yuanhan....@linux.intel.com>
> Subject: Re: [dpdk-dev] [PATCH v4 3/3] vhost: fix offload flags in Rx path
>
> On Wed, May 12, 2021 at 5:30 AM Wang, Yinan <yinan.w...@intel.com>
> wrote:
> >
> > Hi David,
> >
> > Since vhost tx offload can't work now, we report a Bugzilla as below, could
> > you help to take a look?
> > https://bugs.dpdk.org/show_bug.cgi?id=702
>
> (I discovered your mail from 05/08 only today, now that I got a new
> mail, might be a pebcak from me, sorry...)
>
> - Looking at the bz, there is a first issue/misconception.
> testpmd only does TSO or any kind of tx offloading with the csum forward
> engine.
> The iofwd engine won't make TSO possible.
>
> - Let's say we use the csum fwd engine, testpmd configures drivers
> through the ethdev API.
> The ethdev API states that no offloading is enabled unless requested
> by the application.
> TSO, l3/l4 checksums offloading are documented as:
> https://doc.dpdk.org/guides/nics/features.html#l3-checksum-offload
> https://doc.dpdk.org/guides/nics/features.html#lro
>
> But the vhost pmd does not report such capabilities.
> https://git.dpdk.org/dpdk/tree/drivers/net/vhost/rte_eth_vhost.c#n1276
>
> So we can't expect testpmd to have tso working with net/vhost pmd.
>
> - The csum offloading engine swaps mac addresses.
> I would expect issues with inter vm traffic.
>
> In summary, I think this is a bad test.
> If it worked with the commands in the bugzilla before my change (which
> I doubt), it was wrong.
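The capability gate described above comes down to a check like the sketch below. This is a minimal illustration against the 21.05-era ethdev API, not code from testpmd or the tree; DEV_TX_OFFLOAD_TCP_TSO is the pre-21.11 flag name, and the port_id and error handling are placeholders::

    #include <errno.h>
    #include <stdint.h>
    #include <rte_ethdev.h>

    /* Request TSO on a port only if the PMD reports the capability.
     * net/vhost does not set DEV_TX_OFFLOAD_TCP_TSO in its dev_info,
     * so this check fails and the application never enables TSO. */
    static int
    enable_tso_if_supported(uint16_t port_id, struct rte_eth_conf *conf)
    {
            struct rte_eth_dev_info dev_info;
            int ret;

            ret = rte_eth_dev_info_get(port_id, &dev_info);
            if (ret != 0)
                    return ret;

            if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_TCP_TSO))
                    return -ENOTSUP; /* capability not reported */

            conf->txmode.offloads |= DEV_TX_OFFLOAD_TCP_TSO;
            return 0;
    }

The requested offload only takes effect once the port is (re)configured with the updated rte_eth_conf, which is why a PMD that reports no capability can never end up with TSO enabled.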
Thanks for your kind explanation. Before this patch, vhost could declare TSO offload: if we configure TSO/csum in QEMU, the TSO offload flags could be marked, so VM2VM could forward large packets (64k when using iperf) with iofwd. Now I understand that this case will no longer work, so we can move to using vswitch.

>
> > We also tried vhost example with VM2VM iperf test, large pkts also can't
> > forwarding.
>
> "large pkts", can you give details?
>
> I tried to use this example, without/with my change, but:
>
> When I try to start this example with a physical port and two vhosts,
> I get a crash (division by 0 on vdmq stuff).
> When I start it without a physical port, I get a complaint about no
> port being enabled.
> Passing a portmask 0x1 seems to work, the example starts but, next, no
> traffic is forwarded (not even arp).
> Hooking gdb, I never get packet dequeued from vhost.

I re-tested with vswitch: the VM2VM iperf test works both with and without this patch. Sorry for the earlier wrong result with the vhost example; the vswitch sample needs some special configuration. The following test steps work:

1. Enlarge MAX_QUEUES in the vhost example code as follows::

    --- a/examples/vhost/main.c
    +++ b/examples/vhost/main.c
    @@ -29,7 +29,7 @@
     #include "main.h"

     #ifndef MAX_QUEUES
    -#define MAX_QUEUES 128
    +#define MAX_QUEUES 512
     #endif

     /* the maximum number of external ports supported */

2. Bind one physical port to vfio-pci and launch dpdk-vhost with the command below::

    ./dpdk-vhost -l 26-28 -n 4 -- -p 0x1 --mergeable 1 --vm2vm 1 --socket-file /tmp/vhost-net0 --socket-file /tmp/vhost-net1

3. Start VM1::

    /home/qemu-install/qemu-4.2.1/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 4 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
    -chardev socket,id=char0,path=/tmp/vhost-net0 \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=true,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10

4. Start VM2::

    /home/qemu-install/qemu-4.2.1/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 4 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
    -chardev socket,id=char0,path=/tmp/vhost-net1 \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=true,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12

5. In VM1, set the virtio device IP and add a static ARP entry for VM2::

    ifconfig ens5 1.1.1.2
    arp -s 1.1.1.8 52:54:00:00:00:02

6. In VM2, set the virtio device IP and add a static ARP entry for VM1::

    ifconfig ens5 1.1.1.8
    arp -s 1.1.1.2 52:54:00:00:00:01
7. Check the iperf performance with different packet sizes between the two VMs with the commands below (a sketch for confirming the negotiated offload features follows at the end of this mail)::

    Under VM1, run: `iperf -s -i 1`
    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`

>
> --
> David Marchand
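P.S. To double-check on the vhost side that QEMU really negotiated the csum/TSO features requested above (csum=on,guest_csum=on,host_tso4=on,guest_tso4=on), a helper like the sketch below can be called from the sample's new_device() callback. This is my own illustration, not code from examples/vhost; it assumes the rte_vhost API, with vid being the device id the vhost library passes to that callback::

    #include <stdio.h>
    #include <stdint.h>
    #include <linux/virtio_net.h>
    #include <rte_vhost.h>

    /* Print whether the guest negotiated the checksum/TSO virtio-net
     * features; for the VM2VM large-packet test they should all be 1. */
    static void
    dump_offload_features(int vid)
    {
            uint64_t features = 0;

            if (rte_vhost_get_negotiated_features(vid, &features) != 0) {
                    printf("vid %d: cannot read negotiated features\n", vid);
                    return;
            }

            printf("vid %d: csum=%d guest_csum=%d host_tso4=%d guest_tso4=%d\n",
                   vid,
                   !!(features & (1ULL << VIRTIO_NET_F_CSUM)),
                   !!(features & (1ULL << VIRTIO_NET_F_GUEST_CSUM)),
                   !!(features & (1ULL << VIRTIO_NET_F_HOST_TSO4)),
                   !!(features & (1ULL << VIRTIO_NET_F_GUEST_TSO4)));
    }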