Hi Nathan,

Thanks for the response. I was using the *no-multi-seg* parameter, which I
think was affecting the MTU/jumbo settings of the ENIs; after removing it
from the startup.conf file, the iperf results averaged 4.8-4.9 Gbps.
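
For reference, a minimal sketch of the dpdk stanza with the flag dropped
(the rest of the startup.conf shown further down is unchanged). VPP's
no-multi-seg option disables multi-segment buffers, which is what breaks
jumbo (9001-byte) frames on the ENA VFs:

dpdk {
 uio-driver vfio-pci
 # no-multi-seg   <- removed: it disables multi-segment buffers,
 #                   and with them jumbo MTU support
 dev 0000:00:06.0
 dev 0000:00:07.0
 dev 0000:00:08.0
}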

On Wed, Dec 1, 2021 at 11:00 AM Nathan Skrzypczak <
nathan.skrzypc...@gmail.com> wrote:

> Hi Christopher,
>
> This looks like an MTU issue from afar. The VPP config you have seems
> fine, but it is surprising that your `show hardware-interfaces` output
> doesn't show the "carrier up ... mtu 9001" lines for
> VirtualFunctionEthernet0/7/0.
> The "Link speed: unknown" part should be fine, though.
>
> Maybe you can try running an iperf to another VirtualFunctionEthernet
> to see if the problem persists, and/or
> reduce the MTU on the iperf side to something smaller (e.g. 1500); a
> sketch follows below.
> You can also try rebuilding VPP on the latest master (or v21.10) to see
> if that fixes the issue.
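>
> For the MTU test, clamping the client-side TCP MSS is a quick way to
> rule out fragmentation; the -M value here is just an example:
>
> # clamp the MSS so segments fit a 1500-byte path MTU
> iperf3 -c 10.0.7.167 -M 1400 -i 1 -t 10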
>
> On the performance side of things, you shouldn't expect much more of an
> improvement in single-flow throughput, as on AWS "Bandwidth for
> single-flow (5-tuple) traffic is limited to 5 Gbps, regardless of the
> destination of the traffic." [0]. But you should definitely be able to
> reach that limit with VPP.
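>
> To confirm you are hitting the per-flow cap rather than a VPP limit, you
> can also run parallel streams, since each stream is a distinct 5-tuple
> (a sketch reusing your endpoints):
>
> # 8 parallel TCP streams; each flow is capped at ~5 Gbps on AWS,
> # but the aggregate can go higher
> iperf3 -c 10.0.7.167 -P 8 -i 1 -t 10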
>
> Hope this helps
> -Nathan
>
> [0]
> https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-network-bandwidth.html
>
>
> On Mon, Nov 29, 2021 at 10:23 PM Christopher Adigun <future...@gmail.com>
> wrote:
>
>> Hello,
>>
>> I am currently trying to run a UPF application via VPP, using DPDK and
>> Kubernetes, but the speeds I am getting with iperf are not realistic: I
>> get 0 Mbps when I use VPP as a gateway, whereas without VPP as the
>> gateway the throughput is around 4 Gbps.
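>>
>> To be concrete, "VPP as a gateway" means the client pod routes the
>> server's subnet via VPP's 10.0.4.11 address (from the init.conf further
>> down), roughly as follows; the pod interface name net1 is an assumption:
>>
>> # inside the client pod: send 10.0.7.0/24 via the VPP UPF
>> ip route add 10.0.7.0/24 via 10.0.4.11 dev net1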
>>
>>  iperf3 -c 10.0.7.167 -i 1 -t 10
>> Connecting to host 10.0.7.167, port 5201
>> [  5] local 10.0.4.48 port 40466 connected to 10.0.7.167 port 5201
>> [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
>> [  5]   0.00-1.00   sec   323 KBytes  2.65 Mbits/sec    7   8.74 KBytes
>> [  5]   1.00-2.00   sec  0.00 Bytes  0.00 bits/sec    1   8.74 KBytes
>> [  5]   2.00-3.00   sec  0.00 Bytes  0.00 bits/sec    0   8.74 KBytes
>> [  5]   3.00-4.00   sec  0.00 Bytes  0.00 bits/sec    1   8.74 KBytes
>> [  5]   4.00-5.00   sec  0.00 Bytes  0.00 bits/sec    0   8.74 KBytes
>> [  5]   5.00-6.00   sec  0.00 Bytes  0.00 bits/sec    0   8.74 KBytes
>> [  5]   6.00-7.00   sec  0.00 Bytes  0.00 bits/sec    1   8.74 KBytes
>> [  5]   7.00-8.00   sec  0.00 Bytes  0.00 bits/sec    0   8.74 KBytes
>> [  5]   8.00-9.00   sec  0.00 Bytes  0.00 bits/sec    0   8.74 KBytes
>> [  5]   9.00-10.00  sec  0.00 Bytes  0.00 bits/sec    0   8.74 KBytes
>> - - - - - - - - - - - - - - - - - - - - - - - - -
>> [ ID] Interval           Transfer     Bitrate         Retr
>> [  5]   0.00-10.00  sec   323 KBytes   265 Kbits/sec   10             sender
>> [  5]   0.00-10.00  sec  0.00 Bytes  0.00 bits/sec                  receiver
>>
>> The Kubernetes worker node is running on AWS (c5.4xlarge); I used the
>> SR-IOV device plugin (sriov-dp) to add the DPDK interfaces to the pods.
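>>
>> For completeness, the pod requests the VFs from sriov-dp roughly like
>> this (a sketch; the resource name intel.com/sriov_dpdk is only an
>> example and depends on the device plugin's configMap):
>>
>> resources:
>>   requests:
>>     intel.com/sriov_dpdk: "3"   # one VF per DPDK interface below
>>   limits:
>>     intel.com/sriov_dpdk: "3"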
>>
>> Below is my startup.conf
>>
>> unix {
>>   nodaemon
>>   log /tmp/vpp.log
>>   full-coredump
>>   gid vpp
>>   interactive
>>   cli-listen /run/vpp/cli.sock
>>   exec /openair-upf/etc/init.conf
>> }
>>
>> api-trace {
>>   on
>> }
>>
>> dpdk {
>>  uio-driver vfio-pci
>>  dev default {
>>    num-rx-queues 1
>>    num-tx-queues 1
>>    }
>>  dev 0000:00:06.0
>>  dev 0000:00:07.0
>>  dev 0000:00:08.0
>> }
>>
>> api-segment {
>>   gid vpp
>> }
>>
>> plugins {
>>     path /usr/lib/x86_64-linux-gnu/vpp_plugins/
>>     plugin dpdk_plugin.so { enable }
>>     plugin gtpu_plugin.so { disable }
>>     plugin upf_plugin.so { enable }
>> }
>>
>>
>> Init conf:
>>
>> ip table add 1
>> ip table add 2
>>
>> set interface ip table VirtualFunctionEthernet0/6/0 1
>> set interface mtu 9001 VirtualFunctionEthernet0/6/0
>> set interface ip address VirtualFunctionEthernet0/6/0 10.0.4.11/24
>> set interface state VirtualFunctionEthernet0/6/0 up
>>
>> set interface ip table VirtualFunctionEthernet0/7/0 0
>> set interface mtu 9001 VirtualFunctionEthernet0/7/0
>> set interface ip address VirtualFunctionEthernet0/7/0 10.0.6.11/24
>> set interface state VirtualFunctionEthernet0/7/0 up
>>
>> set interface ip table VirtualFunctionEthernet0/8/0 2
>> set interface mtu 9001 VirtualFunctionEthernet0/8/0
>> set interface ip address VirtualFunctionEthernet0/8/0 10.0.7.11/24
>> set interface state VirtualFunctionEthernet0/8/0 up
>>
>> ip route add 0.0.0.0/0 table 2 via 10.0.7.167 VirtualFunctionEthernet0/8/0
>>
>> trace add dpdk-input 100
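>>
>> (For reference, the resulting state can be sanity-checked in vppctl with
>> the following commands; the show hardware-interfaces output appears
>> further down.)
>>
>> show interface address
>> show ip fib table 2
>> show hardware-interfaces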
>>
>>
>> First of all, I noticed the following errors in the container logs:
>>
>> interface        [error ]: hw_add_del_mac_address:
>> dpdk_add_del_mac_address: mac address add/del failed: -95
>> interface        [error ]: hw_add_del_mac_address:
>> dpdk_add_del_mac_address: mac address add/del failed: -95
>> interface        [error ]: hw_add_del_mac_address:
>> dpdk_add_del_mac_address: mac address add/del failed: -95
>> interface        [error ]: hw_add_del_mac_address:
>> dpdk_add_del_mac_address: mac address add/del failed: -95
>> interface        [error ]: hw_add_del_mac_address:
>> dpdk_add_del_mac_address: mac address add/del failed: -95
>> interface        [error ]: hw_add_del_mac_address:
>> dpdk_add_del_mac_address: mac address add/del failed: -95
>>
>> format_dpdk_device:590: rte_eth_dev_rss_hash_conf_get returned -95
>> format_dpdk_device:590: rte_eth_dev_rss_hash_conf_get returned -95
>> format_dpdk_device:590: rte_eth_dev_rss_hash_conf_get returned -95
>> format_dpdk_device:590: rte_eth_dev_rss_hash_conf_get returned -95
>> format_dpdk_device:590: rte_eth_dev_rss_hash_conf_get returned -95
>> format_dpdk_device:590: rte_eth_dev_rss_hash_conf_get returned -95
>> format_dpdk_device:590: rte_eth_dev_rss_hash_conf_get returned -95
>> format_dpdk_device:590: rte_eth_dev_rss_hash_conf_get returned -95
>> format_dpdk_device:590: rte_eth_dev_rss_hash_conf_get returned -95
>>
>>
>> I am a bit confused by these errors, because I can see the correct MAC
>> address for all the DPDK interfaces in the VPP console. Is it safe not
>> to worry about them?
>>
>> vpp# show hardware-interfaces
>>               Name                Idx   Link  Hardware
>> VirtualFunctionEthernet0/6/0       1     up   VirtualFunctionEthernet0/6/0
>>   Link speed: unknown
>>   Ethernet address 02:3d:a7:51:90:bc
>>   AWS ENA VF
>>     carrier up full duplex mtu 9001
>>     flags: admin-up pmd rx-ip4-cksum
>>
>> VirtualFunctionEthernet0/7/0       2     up   VirtualFunctionEthernet0/7/0
>>   Link speed: unknown
>>   Ethernet address 02:87:8d:0f:e2:20
>>   AWS ENA VF
>>
>> VirtualFunctionEthernet0/8/0       3     up   VirtualFunctionEthernet0/8/0
>>   Link speed: unknown
>>   Ethernet address 02:20:55:04:c1:76
>>   AWS ENA VF
>>     carrier up full duplex mtu 9001
>>     flags: admin-up pmd rx-ip4-cksum
>>
>>
>> [ec2-user@ip-10-0-0-53 ~]$ kubectl -n oai exec -ti
>> oai-vpp-upf-57d4fbdcb5-xnslq bin/vppctl show version
>> kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a
>> future version. Use kubectl exec [POD] -- [COMMAND] instead.
>> vpp v21.01.1-release built by root on 4e3c1ed23a8e at 2021-11-25T16:47:55
>>
>>
>> Ping tests work normally through VPP.
>>
>> Also, could the *Link speed: unknown* be affecting this as well?
>>
>> Any idea what could be causing the low iperf speed?
>>