Hi Dom,

I would actually recommend testing with iperf, since it should not be slower 
than the built-in echo server/client apps. Also remember to add fifo-size to 
your echo apps' CLI commands (something like fifo-size 4096 for 4 MB) to 
increase the fifo sizes.
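Something along these lines (the client URI is taken from your test below; 
the exact spelling of the server command and its arguments may differ, so 
please double-check against the CLI help):

vpp# test echo server uri tcp://0.0.0.0/5556 fifo-size 4096
vpp# test echo clients gbytes 1 fifo-size 4096 uri tcp://10.0.0.153/5556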

Also note that you're currently doing full-duplex testing. To check 
half-duplex, add no-echo to the server and no-return to the client (or the 
other way around; I'm in an airport and can't remember the exact CLI, see the 
sketch below). We should probably make half-duplex the default.
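For the half-duplex flags, if memory serves it would look roughly like this, 
but I may have them on the wrong side, so double-check the CLI help:

vpp# test echo server no-echo uri tcp://0.0.0.0/5556 fifo-size 4096
vpp# test echo clients no-return gbytes 1 fifo-size 4096 uri tcp://10.0.0.153/5556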

I’m surprised that iperf reports throughput as low as the echo apps. Did you 
check that the fifo sizes are 16 MB as configured, and that the 
snd_wnd/rcv_wnd/cwnd reported by “show session verbose 2” are the right size?
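For the iperf3/VCL runs, the fifo sizes come from vcl.conf, so I would expect 
a stanza roughly like the one below (16 MB expressed in bytes; the exact 
values are just an illustration of what I mean), plus “show session verbose 2” 
on the vpp side to confirm the sessions actually picked it up:

vcl {
  rx-fifo-size 16777216
  tx-fifo-size 16777216
}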

As for the checksum issues you’re hitting, I agree. It might be that tcp 
checksum offloading does not work properly with your interfaces. 
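One quick way to narrow it down might be to compare the negotiated offload 
features between the two drivers and to see which node actually counts the 
drops, e.g.:

vpp# show virtio pci
vpp# show errors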

Regards,
Florin

> On Dec 4, 2019, at 2:18 PM, dch...@akouto.com wrote:
> 
> It turns out I was using DPDK virtio. With help from Moshin I changed the 
> configuration and repeated the tests using VPP native virtio. The results 
> are similar, but there are some interesting new observations; I'm sharing 
> them here in case they are useful to others or trigger any ideas.
> 
> After configuring both instances to use VPP native virtio, I used the 
> built-in echo test to see what throughput I would get, and I got the same 
> results as the modified external tcp_echo, i.e. about 600 Mbps:
> - Added dpdk { no-pci } to startup.conf and configured the interface using 
>   create int virtio <pci-address> as per instructions from Moshin; confirmed 
>   the settings with the show virtio pci command
> - Ran the built-in test echo application to transfer 1 GB of data and got 
>   the following results:
>   vpp# test echo clients gbytes 1 uri tcp://10.0.0.153/5556
>   1 three-way handshakes in 0.00 seconds 2288.06/s
>   Test started at 1255.753237
>   Test finished at 1272.863244
>   1073741824 bytes (1024 mbytes, 1 gbytes) in 17.11 seconds
>   62755195.55 bytes/second full-duplex
>   .5020 gbit/second full-duplex
> - I then used iperf3 with VCL on both sides and got roughly the same results 
>   (620 Mbps)
> - I then rebooted the client VM to use native Linux networking on the client 
>   side, with VPP on the server side, and tried to repeat the iperf test
> - With VPP-native virtio on the server side, the iperf test failed: packets 
>   were dropped on the server (VPP) side, and a trace showed they were 
>   dropped because of "bad tcp checksum"
> - When I switched the server side to use DPDK virtio, the iperf test worked 
>   and I got 3 Gbps throughput
> So, the big performance problem is on the client (sender) side, with VPP 
> only able to push around 600 Mbps for some reason, even when using the 
> built-in test echo application. I'm continuing my investigation to see where 
> the bottleneck is; any other ideas on where to look would be greatly 
> appreciated.
> 
> Also, there may be a checksum bug in the VPP-native virtio driver, since the 
> packets are not dropped on the server side when using the DPDK virtio 
> driver. I'd be happy to help gather more details on this, create a JIRA 
> ticket, and even contribute a fix, but I wanted to check before going down 
> that road. Any thoughts or comments?
> 
> Thanks again for all the help so far!
> 
> Regards,
> Dom
> 
> 
