On 09/10/2018 07:44 AM, Paolo Abeni wrote:
> hi all,
> 
> while testing some local patches I observed that the TCP tput in the
> following scenario:
> 
> # the following enable napi on veth0, so that we can trigger the
> # GRO path with namespaces
> ip netns add test
> ip link add type veth
> ip link set dev veth0 netns test
> ip -n test link set lo up
> ip -n test link set veth0 up
> ip -n test addr add dev veth0 172.16.1.2/24
> ip link set dev veth1 up
> ip addr add dev veth1 172.16.1.1/24
> IDX=`ip netns exec test cat /sys/class/net/veth0/ifindex`
> 
> # 'xdp_pass' is a NO-OP XDP program that simply returns XDP_PASS
> ip netns exec test ./xdp_pass $IDX &
> taskset 0x2 ip netns exec test iperf3 -s -i 60 &
> taskset 0x1 iperf3 -c 172.16.1.2 -t 60 -i 60
> 
> is much lower than expected (~800Mbps). 'perf' shows a weird topmost
> offender:
>
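[For context, a NO-OP XDP program like the 'xdp_pass' referenced above can
be as small as the sketch below. The original source is not included in the
mail ('./xdp_pass' appears to be a custom loader binary taking an ifindex),
so the function and section names here are illustrative; attaching via
iproute2 is shown as one alternative way to load it.

/* xdp_pass.c - minimal NO-OP XDP program (illustrative reconstruction).
 * Build:  clang -O2 -target bpf -c xdp_pass.c -o xdp_pass.o
 * Attach: ip -n test link set dev veth0 xdp obj xdp_pass.o sec xdp
 */
#include <linux/bpf.h>

__attribute__((section("xdp"), used))
int xdp_pass_prog(struct xdp_md *ctx)
{
	/* Accept every frame unmodified; merely having an XDP program
	 * attached is what switches veth0 to NAPI mode and exercises
	 * the GRO path, per the comment in the setup script above. */
	return XDP_PASS;
}

char _license[] __attribute__((section("license"), used)) = "GPL";
]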


But... why would GRO even be needed in this scenario?

GRO is really meant for physical devices; having to mess with skb->sk adds
extra cost to an already expensive engine.

Virtual devices should already be fed with TSO packets.
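
[If the virtual device is indeed fed TSO packets, the receiving side should
see few segments to merge in the first place. One way to verify that TSO is
on for the veth device is the hypothetical helper below, a minimal sketch
using the legacy ETHTOOL_GTSO ioctl; `ethtool -k veth1` reports the same
information as tcp-segmentation-offload.

/* check_tso.c - query TSO state of a device (hypothetical helper). */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int main(int argc, char **argv)
{
	const char *dev = argc > 1 ? argv[1] : "veth1";
	struct ethtool_value eval = { .cmd = ETHTOOL_GTSO };
	struct ifreq ifr;
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	if (fd < 0) {
		perror("socket");
		return 1;
	}
	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, dev, IFNAMSIZ - 1);
	ifr.ifr_data = (void *)&eval;

	/* SIOCETHTOOL with ETHTOOL_GTSO returns 1 in eval.data if
	 * TCP segmentation offload is enabled on the device. */
	if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
		perror("ETHTOOL_GTSO");
		return 1;
	}
	printf("%s: tcp-segmentation-offload: %s\n", dev,
	       eval.data ? "on" : "off");
	close(fd);
	return 0;
}
]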



