Hi,
I rebuilt VPP on master and updated startup.conf to enable TSO as follows:
dpdk {
  dev 0000:00:03.0 {
    num-rx-desc 2048
    num-tx-desc 2048
    tso on
  }
  uio-driver vfio-pci
  enable-tcp-udp-checksum
}

I'm not sure whether it is working or not; there is nothing in the output of
"show session verbose 2" to indicate whether it is on or off (output at the
end of this message). Unfortunately, there was no improvement from a
performance perspective.
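
The closest thing to a verification step I've found is to look at the hardware
layer rather than the session layer; my assumption is that if "tso on" was
actually accepted by the DPDK driver, it should be reflected in the interface
flags/offloads shown here (exact output varies between VPP versions):

vpp# show hardware-interfaces GigabitEthernet0/3/0 verbose

If someone knows a more direct way to confirm that TSO is active on the tx
path, I'd appreciate a pointer.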

Then I figured I would try using a tap interface on the VPP side so I could
run iperf3 "natively" on the VPP client side as well, but I got the same
result again. I find this perplexing: two test runs back to back, with reboots
in between to rule out any configuration issues:

*Test 1 using native linux networking on both sides:*
[iperf3 client --> linux networking eth0] --> [Openstack/Linuxbridge] --> 
[linux networking eth0 --> iperf3 server]
Result: 10+ Gbps

*Test 2: reboot both instances and assign the client-side NIC to VPP:*

vpp# set int l2 bridge GigabitEthernet0/3/0 1
vpp# set int state GigabitEthernet0/3/0 up
vpp# create tap
tap0
vpp# set int l2 bridge tap0 1
vpp# set int state tap0 up
[root]# ip addr add 10.0.0.152/24 dev tap0

[iperf3 client --> tap0 --> VPP GigabitEthernet0/3/0 ] --> 
[Openstack/Linuxbridge] --> [ linux networking eth0 --> iperf3 server]
Result: 1 Gbps
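
One variable I have not eliminated yet: "create tap" with no arguments uses
conservative defaults (small vring sizes, GSO off). If the build supports the
tapv2 options, something like this should give a GSO-enabled tap with larger
rings instead (option names are from my reading of the CLI help and may differ
by version):

vpp# create tap id 0 gso rx-ring-size 1024 tx-ring-size 1024
tap0
vpp# set int l2 bridge tap0 1
vpp# set int state tap0 up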

I had started to suspect the host OS or OpenStack Neutron, linuxbridge, etc.,
but based on this it just *has* to be something in the guest running VPP. Any
and all ideas or suggestions are welcome!
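
For the next run I plan to capture the standard per-node and error counters in
case someone spots something obvious; this is all generic VPP CLI, nothing
specific to my setup:

vpp# clear runtime
vpp# clear errors
(run iperf3 for ~10 seconds)
vpp# show runtime
vpp# show errors
vpp# show interface

My understanding is that a small average vector size in "show runtime" would
point at per-packet overhead on the tap/virtio path, while drops should show
up in "show errors" or in the interface counters.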

Regards,
Dom

Note: the output below is from a run using iperf3 + VCL with the TSO settings
in startup.conf, not from the tap-interface test described above:

vpp# set interface ip address GigabitEthernet0/3/0 10.0.0.152/24
vpp# set interface state GigabitEthernet0/3/0 up
vpp# session enable
vpp# sh session verbose 2
Thread 0: no sessions
[1:0][T] 10.0.0.152:6445->10.0.0.156:5201         ESTABLISHED
index: 0 cfg:  flags:  timers:
snd_una 124 snd_nxt 124 snd_una_max 124 rcv_nxt 5 rcv_las 5
snd_wnd 29056 rcv_wnd 7999488 rcv_wscale 10 snd_wl1 4 snd_wl2 124
flight size 0 out space 4473 rcv_wnd_av 7999488 tsval_recent 3428491
tsecr 193532193 tsecr_last_ack 193532193 tsval_recent_age 13996 snd_mss 1448
rto 259 rto_boff 0 srtt 67 us 3.891 rttvar 48 rtt_ts 0.0000 rtt_seq 124
next_node 0 opaque 0x0
cong:   none algo cubic cwnd 4473 ssthresh 2147483647 bytes_acked 0
cc space 4473 prev_cwnd 0 prev_ssthresh 0
snd_cong 1281277517 dupack 0 limited_tx 1281277517
rxt_bytes 0 rxt_delivered 0 rxt_head 1281277517 rxt_ts 193546719
prr_start 1281277517 prr_delivered 0 prr space 0
sboard: sacked 0 last_sacked 0 lost 0 last_lost 0 rxt_sacked 0
last_delivered 0 high_sacked 1281277517 is_reneging 0
cur_rxt_hole 4294967295 high_rxt 1281277517 rescue_rxt 1281277517
stats: in segs 6 dsegs 4 bytes 4 dupacks 0
out segs 7 dsegs 2 bytes 123 dupacks 0
fr 0 tr 0 rxt segs 0 bytes 0 duration 14.539
err wnd data below 0 above 0 ack below 0 above 0
pacer: rate 1149550 bucket 0 t/p 1.149 last_update 14.526 s idle 194
Rx fifo: cursize 0 nitems 7999999 has_event 0
head 4 tail 4 segment manager 2
vpp session 0 thread 1 app session 0 thread 0
ooo pool 0 active elts newest 4294967295
Tx fifo: cursize 0 nitems 7999999 has_event 0
head 123 tail 123 segment manager 2
vpp session 0 thread 1 app session 0 thread 0
ooo pool 0 active elts newest 4294967295
session: state: ready opaque: 0x0 flags:
[1:1][T] 10.0.0.152:10408->10.0.0.156:5201        ESTABLISHED
index: 1 cfg:  flags:  timers: RETRANSMIT
snd_una 2195902174 snd_nxt 2196262726 snd_una_max 2196262726 rcv_nxt 1 rcv_las 1
snd_wnd 1574016 rcv_wnd 7999488 rcv_wscale 10 snd_wl1 1 snd_wl2 2195902174
flight size 360552 out space 832 rcv_wnd_av 7999488 tsval_recent 3443014
tsecr 193546715 tsecr_last_ack 193546715 tsval_recent_age 4294966768 snd_mss 1448
rto 200 rto_boff 0 srtt 1 us 2.606 rttvar 1 rtt_ts 45.0534 rtt_seq 2195903622
next_node 0 opaque 0x0
cong:   none algo cubic cwnd 361384 ssthresh 329528 bytes_acked 2896
cc space 832 prev_cwnd 470755 prev_ssthresh 340435
snd_cong 2188350854 dupack 0 limited_tx 2709798285
rxt_bytes 0 rxt_delivered 0 rxt_head 2143051622 rxt_ts 193546719
prr_start 2187975822 prr_delivered 0 prr space 0
sboard: sacked 0 last_sacked 0 lost 0 last_lost 0 rxt_sacked 0
last_delivered 0 high_sacked 2188350854 is_reneging 0
cur_rxt_hole 4294967295 high_rxt 2187977270 rescue_rxt 2187975821
stats: in segs 720132 dsegs 0 bytes 0 dupacks 127869
out segs 1549120 dsegs 1549119 bytes 2243122901 dupacks 0
fr 43 tr 0 rxt segs 32362 bytes 46860176 duration 14.529
err wnd data below 0 above 0 ack below 0 above 0
pacer: rate 361384000 bucket 1996 t/p 361.384 last_update 619 us idle 100
Rx fifo: cursize 0 nitems 7999999 has_event 0
head 0 tail 0 segment manager 2
vpp session 1 thread 1 app session 1 thread 0
ooo pool 0 active elts newest 0
Tx fifo: cursize 7999999 nitems 7999999 has_event 1
head 3902173 tail 3902172 segment manager 2
vpp session 1 thread 1 app session 1 thread 0
ooo pool 0 active elts newest 4294967295
session: state: ready opaque: 0x0 flags:
Thread 1: active sessions 2
Thread 2: no sessions
Thread 3: no sessions