Hi Kunal,
Yes, it might be worth looking into ip and tcp csum offloads.
But given that mtu ~ 9kB, maybe look into forcing tcp to build jumbo frames,
i.e., tcp { mtu 9000 } in startup.conf. It’ll be needed on both ends and I’m
assuming here that the network between your two vpp instances supports 9k mtu.
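For reference, a minimal startup.conf sketch for that (9000 is an assumed value, match it to what your network actually supports end to end):

tcp {
  mtu 9000
}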
Hi Florin
Following is the output from
>
> vppctl show hardware-interfaces
>
> Name Idx Link Hardware
> local0 0 down local0
> Link speed: unknown
> local
> vpp0 1 up vpp0
> Link speed: unknown
> RX Queues:
>
Hi Kunal,
No problem. Actually, another thing to consider might be mtu. If the interfaces
are configured with mtu > 1.5kB and the network accepts jumbo frames, maybe try
tcp { mtu 9000 } in startup.conf.
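Something like the following can confirm and adjust the interface mtu first from vppctl (interface name and value are assumptions based on your earlier output):

show interface vpp0
set interface mtu 9000 vpp0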
Regards,
Florin
> On Mar 29, 2022, at 12:24 PM, Kunal Parikh wrote:
>
> Many thanks for looking into this Florin.
Many thanks for looking into this Florin.
I'll investigate DPDK PMD tests to see if checksum offloading can be enabled
outside of VPP.
Yup, similar symptoms.
So beyond trying to figure out why checksum offloading is not working and
trying to combine that with gso, i.e., tcp { tso } in startup.conf, not sure
what else could be done.
If you decide to try debugging checksum offloading, try adding
enable-tcp-udp-checksum to the dpdk stanza in startup.conf.
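As a startup.conf sketch of that combination (verify the option names against the startup.conf docs for your build):

dpdk {
  enable-tcp-udp-checksum
}
tcp {
  tso
}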
Diagnostics produced using -b 5g
>
> taskset --cpu-list 10-15 iperf3 -4 -c 10.21.120.133 -b 5g -t 30
root@ip-10-21-120-238:~# vppctl clear errors; vppctl clear run
root@ip-10-21-120-238:~# vppctl show run
Thread 0 vpp_main (lcore 1)
Time 6.1, 10 sec internal node vector rate 0.00 loops/sec 171189
Actually this time the client worker loops/s has dropped to 7k. So that worker
seems to be struggling, probably because of the interface tx cost.
Not sure how that could be solved, as it looks like an ena + dpdk tx issue. Out
of curiosity, what happens if you try to limit the iperf client bw by doing
something like "-b 5g"?
Attaching diagnostics.
root@ip-10-21-120-238:~# vppctl show session verbose 2
Thread 0: no sessions
Thread 1: no sessions
root@ip-10-21-120-238:~# vppctl show session verbose 2
Thread 0: no sessions
[1:0][T] 10.21.120.187:46669->10.21.120.133:5201 ESTABLISHED
index: 0 cfg: No csum offload
:(
Same outcome.
Added tcp { no-csum-offload } to /etc/vpp/startup.conf
Tested with and without tx-checksum-offload.
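For reference, the dpdk-side knob I toggled, as a startup.conf sketch (option name as I understand it, please correct me if the dpdk stanza expects something else):

dpdk {
  no-tx-checksum-offload
}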
Hi Kunal,
I remember Shankar needed tcp { no-csum-offload } in startup.conf but I see you
disabled tx-checksum-offload for dpdk. So could you try disabling it from tcp?
The fact that csum offloading is not working is probably going to somewhat
affect throughput, but I wouldn’t expect it to be this significant.
Hi Florin,
Confirming that rx/tx descriptors are set to 256.
However, bitrate is still at 3.78 Gbits/sec with VPP vs 11.9 Gbits/sec without
VPP
>
> Beyond that, the only thing I’m noticing is that the client is very
> bursty, i.e., sends up to 42 packets / dispatch but the receiver only gets
Hi Kunal,
First of all, that’s a lot of workers. For this test, could you just reduce the
number to 1? All of them, including worker 0, are spinning empty on both server
and client, i.e., loops/s > 1M.
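If it helps, a minimal cpu stanza for that (core numbers are just placeholders, pick free cores on the nic's numa node):

cpu {
  main-core 1
  workers 1
}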
Beyond that, the only thing I’m noticing is that the client is very bursty,
i.e., sends up to 42 packets / dispatch.
Thanks Florin.
I've attached output from the console of the iperf3 server and client.
I don't know what I should be looking for.
Can you please provide some pointers?
Many thanks,
Kunal
root@ip-10-21-120-175:~# vppctl show session verbose 2
[0:0][CT:T] 0.0.0.0:5201->0.0.0.0:0
Hi Kunal,
Unfortunately, the screenshots are unreadable for me.
But if the throughput did not improve, maybe try:
clear run
show run
And check loops/s and vectors/dispatch. And also:
show session verbose 2
And let’s see what the connection reports in terms of errors, cwnd and so on.
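i.e., from the host, something along these lines, run while the transfer is in progress:

vppctl clear run
sleep 5
vppctl show run
vppctl show session verbose 2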
Regards,
Also, I do believe that write combining is enabled based on:
$ lspci -v -s 00:06.0
00:06.0 Ethernet controller: Amazon.com, Inc. Elastic Network Adapter (ENA)
Physical Slot: 6
Flags: bus master, fast devsel, latency 0
Memory at febf8000 (32-bit, non-prefetchable) [size=16K]
Memory at fe90 (32-
Thank you for your prompt responses Florin.
I'm taking over from Shankar here.
I re-built the environment with v22.02.
Here is the output from show errors:
It seems okay to me.
I'm running vpp and iperf3 on the same numa node (but separate CPUs).
Hi Shankar,
That’s a pretty old release. Could you try something newer, like 22.02?
Nonetheless, you’ll probably need to try some of those optimizations.
Regards,
Florin
> On Mar 23, 2022, at 11:47 AM, Shankar Raju wrote:
>
> Hi Florin,
> I'm using VPP Version: 20.09-release. These were the results I got with the
> default config.
Hi Florin,
I'm using VPP Version: 20.09-release. These were the results I got with the
default config. Let me try some of those optimizations and see if that works.
Thanks
WITH VPP :
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0
seconds, 10 second test, tos 0
[ ID] Int
Hi Shankar,
What is the result and what is the difference? Also, I might’ve missed it but
what was the vpp version in these tests?
Regarding optimizations:
- show hardware: will tell you the numa for your nic (if you have multiple
numas) and the rx/tx descriptor ring sizes. Typically for tcp, 256 rx/tx
descriptors work well.
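As a startup.conf sketch (the PCI address is just an example, use the one lspci reports on your instance):

dpdk {
  dev 0000:00:06.0 {
    num-rx-desc 256
    num-tx-desc 256
  }
}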
Hi Florin,
Disabling checksums worked. Now iperf is able to send and receive traffic. But
the transfer rate and bitrate seem to be lower when using VPP. Could you
please let me know the right tuning params for getting better performance with
VPP?
Thanks
Hi Shankar,
In startup.conf, under the tcp stanza, add no-csum-offload.
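i.e., something like:

tcp {
  no-csum-offload
}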
Regards,
Florin
> On Mar 23, 2022, at 6:59 AM, Shankar Raju wrote:
>
> Hi Florin,
>
> I'm running this experiment on AWS and it's using ENA NICs. I ran the vppctl
> show errors command and I did see errors because of bad checksums.
Hi Florin,
I'm running this experiment on AWS and it's using ENA NICs. I ran the vppctl show
errors command and I did see errors because of bad checksums. Is there a way to
turn off tx and rx checksumming through vpp just like we do with ethtool?
SERVER SIDE:
vppctl show errors
Count
Hi Shankar,
What vpp version is this? For optimizations, could you take a look at a recent
version of [1]?
Having said that, let’s try debugging this in small steps. First, I’d recommend
not exporting LD_PRELOAD and instead doing something like:
sudo sh -c "LD_PRELOAD= VCL_CONFIG= iperf3 -4 -s"
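For illustration, with placeholder paths (both the ldpreload library location and the vcl.conf path are assumptions for a typical install, adjust to yours):

# example paths only
sudo sh -c "LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libvcl_ldpreload.so VCL_CONFIG=/etc/vpp/vcl.conf iperf3 -4 -s"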
Hi Guys,
I’m trying to test VPP configuration with iperf3 and I’m running into the
following issue.
· Iperf3 client is able to make a connection with the server, but the client
receives data for the first 0-1 second and then it does not receive any traffic
from the server. The bitrate is zero after that.