On 2019/4/10 9:01 PM, David Woodhouse wrote:
On Wed, 2019-04-10 at 15:01 +0300, David Woodhouse wrote:
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -1125,7 +1128,9 @@ static netdev_tx_t tun_net_xmit(struct sk_buff *skb, struct net_device *dev)
 	if (tfile->flags & TUN_FASYNC)
 		kill_fasync(&tfile->fasync, SIGIO, POLL_IN);
 	tfile->socket.sk->sk_data_ready(tfile->socket.sk);
+	if (!ptr_ring_empty(&tfile->tx_ring))
+		netif_stop_queue(tun->dev);
 	rcu_read_unlock();
 	return NETDEV_TX_OK;
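For reference, the ptr_ring helpers in question behave roughly like this (paraphrased from include/linux/ptr_ring.h, ignoring the locked _any/_irq/_bh variants):

static inline bool ptr_ring_empty(struct ptr_ring *r);             /* true if nothing is queued */
static inline bool ptr_ring_full(struct ptr_ring *r);              /* true if the next produce would fail */
static inline int ptr_ring_produce(struct ptr_ring *r, void *ptr); /* 0 on success, -ENOSPC when full */

So stopping on !ptr_ring_empty() stalls the queue as soon as a single packet is sitting in the ring, not when the ring is actually out of space.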
Hm, that should be using ptr_ring_full(), shouldn't it? So...
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -1121,6 +1121,9 @@ static netdev_tx_t tun_net_xmit(struct s
 	if (ptr_ring_produce(&tfile->tx_ring, skb))
 		goto drop;
 
+	if (ptr_ring_full(&tfile->tx_ring))
+		netif_stop_queue(tun->dev);
+
 	/* Notify and wake up reader process */
 	if (tfile->flags & TUN_FASYNC)
 		kill_fasync(&tfile->fasync, SIGIO, POLL_IN);
@@ -2229,6 +2232,7 @@ static ssize_t tun_do_read(struct tun_st
 		consume_skb(skb);
 	}
 
+	netif_wake_queue(tun->dev);
 	return ret;
 }
That doesn't seem to make much difference at all; it's still dropping a
lot of packets because ptr_ring_produce() is returning non-zero.
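For context, a produce failure goes straight to the drop label, so every failed ptr_ring_produce() is a lost packet rather than backpressure. Roughly (paraphrasing the error path in tun_net_xmit(), not the exact source):

	if (ptr_ring_produce(&tfile->tx_ring, skb))
		goto drop;
	...
drop:
	this_cpu_inc(tun->pcpu_stats->tx_dropped);
	skb_tx_error(skb);
	kfree_skb(skb);
	rcu_read_unlock();
	return NET_XMIT_DROP;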
I think you need to try stopping the queue just in this case. Ideally we would want to stop the queue when it is about to become full, but we don't have such a helper currently.
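Something like this, perhaps (untested sketch; the skb that failed to fit still takes the drop path):

	if (ptr_ring_produce(&tfile->tx_ring, skb)) {
		netif_stop_queue(tun->dev);
		goto drop;
	}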
Thanks
Socket  Message  Elapsed      Messages
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec

212992    1400   10.00     7747169      0     8676.81
212992           10.00     1471769           1648.38
Making it call netif_stop_queue() when ptr_ring_produce() fails causes
it to perform even worse...
Socket  Message  Elapsed      Messages
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec

212992    1400   10.00     1906235      0     2134.98
212992           10.00      985428           1103.68
At this point I'm mostly just confused.