Re: Tun congestion/BQL

2019-04-11 Thread David Woodhouse
On Fri, 2019-04-12 at 12:26 +0800, Jason Wang wrote: > Yes, you can refer: > > 1) Qemu hw/virtio/vhost.c or hw/net/vhost_net.c > > 2) dpdk drivers/net/virtio/virtio_user/vhost_kernel_tap.c > > DPDK code seems more compact. > > Basically, the setup of TUN/TAP should be the same, then userspace n

Re: Tun congestion/BQL

2019-04-11 Thread Jason Wang
On 2019/4/11 5:25 PM, David Woodhouse wrote: On Thu, 2019-04-11 at 15:22 +0800, Jason Wang wrote: If you care about userspace performance, you may want to try vhost + TAP instead. I admit the API is not user friendly which needs to be improved but then there will be no syscall overhead on packe

Re: Tun congestion/BQL

2019-04-11 Thread Jason Wang
On 2019/4/11 5:16 PM, David Woodhouse wrote: On Thu, 2019-04-11 at 17:04 +0800, Jason Wang wrote: Btw, forgot to mention, I modified your patch to use netif_stop/wake_subqueue() instead. Hm... --- /usr/src/debug/kernel-5.0.fc29/linux-5.0.5-200.fc29.x86_64/drivers/net/tun.c 2019-03-03 23:21:29.
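The change Jason describes swaps the whole-device netif_stop_queue()/netif_wake_queue() pair for the per-queue netif_stop_subqueue()/netif_wake_subqueue() variants, so that one full ring only pauses its own tx queue on a multi-queue tun device. A toy Python model of that per-queue behaviour (the class and method names are illustrative stand-ins, not the kernel API):

```python
from collections import deque

class MultiQueueDev:
    """Toy multi-queue device: each tx queue has its own ring and its
    own stopped flag, like netif_stop_subqueue()/netif_wake_subqueue()."""

    def __init__(self, nqueues, ring_size):
        self.rings = [deque() for _ in range(nqueues)]
        self.stopped = [False] * nqueues
        self.ring_size = ring_size

    def xmit(self, q, pkt):
        # tun_net_xmit() analogue: refuse while this queue is stopped.
        if self.stopped[q]:
            return False
        self.rings[q].append(pkt)
        if len(self.rings[q]) == self.ring_size:
            self.stopped[q] = True      # netif_stop_subqueue(dev, q)
        return True

    def consume(self, q):
        # Userspace-reader analogue: draining one slot wakes that queue.
        pkt = self.rings[q].popleft()
        if self.stopped[q]:
            self.stopped[q] = False     # netif_wake_subqueue(dev, q)
        return pkt

dev = MultiQueueDev(nqueues=2, ring_size=4)
for i in range(4):
    dev.xmit(0, i)                      # fill queue 0 completely
assert dev.stopped[0] and not dev.stopped[1]   # only queue 0 pauses
assert dev.xmit(1, 99)                  # queue 1 still accepts packets
dev.consume(0)
assert not dev.stopped[0]               # draining wakes queue 0 again
```

The point of the per-subqueue variant is visible in the asserts: filling queue 0 never blocks traffic on queue 1.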

Re: Tun congestion/BQL

2019-04-11 Thread David Woodhouse
On Thu, 2019-04-11 at 15:22 +0800, Jason Wang wrote: > If you care about userspace performance, you may want to try vhost + TAP > instead. I admit the API is not user friendly which needs to be improved > but then there will be no syscall overhead on packet transmission and > receiving, and even

Re: Tun congestion/BQL

2019-04-11 Thread David Woodhouse
On Thu, 2019-04-11 at 17:04 +0800, Jason Wang wrote: > Btw, forgot to mention, I modified your patch to use > netif_stop/wake_subqueue() instead. Hm... --- /usr/src/debug/kernel-5.0.fc29/linux-5.0.5-200.fc29.x86_64/drivers/net/tun.c 2019-03-03 23:21:29.0 + +++ /home/fedora/tun/tun.c

Re: Tun congestion/BQL

2019-04-11 Thread Jason Wang
On 2019/4/11 4:56 PM, David Woodhouse wrote: On Thu, 2019-04-11 at 15:17 +0800, Jason Wang wrote: Ideally we want to react when the queue starts building rather than when it starts getting full; by pushing back on upper layers (or, if forwarding, dropping packets to signal congestion). This is

Re: Tun congestion/BQL

2019-04-11 Thread David Woodhouse
On Thu, 2019-04-11 at 15:17 +0800, Jason Wang wrote: > > > Ideally we want to react when the queue starts building rather than when > > > it starts getting full; by pushing back on upper layers (or, if > > > forwarding, dropping packets to signal congestion). > > > > This is precisely what my firs
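The point being argued here — react when the queue *starts building*, not when it is already full — is essentially what BQL does for ordinary NICs: stop the queue once a byte budget of in-flight data is exceeded, well before the ring itself overflows. A rough Python sketch of that byte-accounting idea (the fixed limit and the names are invented for illustration; the kernel's dql implementation additionally auto-tunes the limit):

```python
class ByteQueueLimit:
    """Minimal BQL-like byte accounting: stop the queue once the bytes
    queued but not yet completed exceed a fixed budget."""

    def __init__(self, limit_bytes):
        self.limit = limit_bytes
        self.inflight = 0
        self.stopped = False

    def sent(self, nbytes):
        # Analogue of netdev_sent_queue(): called on every transmit.
        self.inflight += nbytes
        if self.inflight >= self.limit:
            self.stopped = True         # push back on upper layers early

    def completed(self, nbytes):
        # Analogue of netdev_completed_queue(): called once the consumer
        # has actually drained the data.
        self.inflight -= nbytes
        if self.stopped and self.inflight < self.limit:
            self.stopped = False        # wake the queue again

bql = ByteQueueLimit(limit_bytes=4000)
for _ in range(3):
    bql.sent(1500)                      # three full-size packets
assert bql.stopped                      # stopped at 4500 in-flight bytes
bql.completed(1500)
assert not bql.stopped                  # 3000 < 4000: flowing again
```

Because the budget is in bytes, backpressure arrives after a bounded amount of latency-inducing data is queued, regardless of how many ring slots remain.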

Re: Tun congestion/BQL

2019-04-11 Thread Jason Wang
On 2019/4/10 11:32 PM, David Woodhouse wrote: On Wed, 2019-04-10 at 17:01 +0200, Toke Høiland-Jørgensen wrote: David Woodhouse writes: On Wed, 2019-04-10 at 15:42 +0200, Toke Høiland-Jørgensen wrote: That doesn't seem to make much difference at all; it's still dropping a lot of packets beca

Re: Tun congestion/BQL

2019-04-11 Thread Jason Wang
On 2019/4/10 10:33 PM, David Woodhouse wrote: On Wed, 2019-04-10 at 15:42 +0200, Toke Høiland-Jørgensen wrote: That doesn't seem to make much difference at all; it's still dropping a lot of packets because ptr_ring_produce() is returning non-zero. I think you need to try to stop the queue just i

Re: Tun congestion/BQL

2019-04-11 Thread Jason Wang
On 2019/4/10 9:42 PM, Toke Høiland-Jørgensen wrote: Jason Wang writes: On 2019/4/10 9:01 PM, David Woodhouse wrote: On Wed, 2019-04-10 at 15:01 +0300, David Woodhouse wrote: --- a/drivers/net/tun.c +++ b/drivers/net/tun.c @@ -1125,7 +1128,9 @@ static netdev_tx_t tun_net_xmit(struct sk_buff *

Re: Tun congestion/BQL

2019-04-10 Thread David Woodhouse
On Wed, 2019-04-10 at 17:01 +0200, Toke Høiland-Jørgensen wrote: > David Woodhouse writes: > > > On Wed, 2019-04-10 at 15:42 +0200, Toke Høiland-Jørgensen wrote: > > > > > That doesn't seem to make much difference at all; it's still dropping > > > > > a > > > > > lot of packets because ptr_ring_

Re: Tun congestion/BQL

2019-04-10 Thread Toke Høiland-Jørgensen
David Woodhouse writes: > On Wed, 2019-04-10 at 15:42 +0200, Toke Høiland-Jørgensen wrote: >> > > That doesn't seem to make much difference at all; it's still dropping a >> > > lot of packets because ptr_ring_produce() is returning non-zero. >> > >> > >> > I think you need to try to stop the queue

Re: Tun congestion/BQL

2019-04-10 Thread David Woodhouse
On Wed, 2019-04-10 at 15:42 +0200, Toke Høiland-Jørgensen wrote: > > > That doesn't seem to make much difference at all; it's still dropping a > > > lot of packets because ptr_ring_produce() is returning non-zero. > > > > > > I think you need to try to stop the queue just in this case? Ideally we ma
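The suggestion under discussion — stop the queue when the ring fills instead of letting ptr_ring_produce() failures drop packets — can be modelled with a tiny producer/consumer loop. In this Python sketch (hypothetical names, not the kernel API), stopping the queue as soon as the last slot is used converts full-ring drops into clean backpressure:

```python
from collections import deque

RING_SIZE = 4

class TunQueue:
    """Toy model: stop the tx queue as soon as the ring fills, so the
    stack stops calling xmit instead of having packets dropped."""

    def __init__(self):
        self.ring = deque()
        self.stopped = False
        self.drops = 0

    def xmit(self, pkt):
        # tun_net_xmit() analogue.
        if self.stopped or len(self.ring) == RING_SIZE:
            self.drops += 1             # without stop/wake this path is hot
            return False
        self.ring.append(pkt)
        if len(self.ring) == RING_SIZE:
            self.stopped = True         # netif_stop_queue() analogue
        return True

    def read(self):
        # Userspace read() analogue: draining a slot wakes the queue.
        pkt = self.ring.popleft()
        self.stopped = False            # netif_wake_queue() analogue
        return pkt

q = TunQueue()
sent = sum(q.xmit(i) for i in range(4))
assert sent == 4 and q.stopped          # ring full, queue stopped in time
q.read()
assert not q.stopped and q.xmit(99)     # space freed, transmit resumes
assert q.drops == 0                     # flow control instead of drops
```

Checking for fullness *after* a successful enqueue is what lets the final packet into the ring while still pausing the stack before the next one would have to be dropped.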

Re: Tun congestion/BQL

2019-04-10 Thread Toke Høiland-Jørgensen
Jason Wang writes: > On 2019/4/10 9:01 PM, David Woodhouse wrote: >> On Wed, 2019-04-10 at 15:01 +0300, David Woodhouse wrote: >>> --- a/drivers/net/tun.c >>> +++ b/drivers/net/tun.c >>> @@ -1125,7 +1128,9 @@ static netdev_tx_t tun_net_xmit(struct sk_buff >>> *skb, struct net_device *dev) >>>

Re: Tun congestion/BQL

2019-04-10 Thread Jason Wang
On 2019/4/10 9:01 PM, David Woodhouse wrote: On Wed, 2019-04-10 at 15:01 +0300, David Woodhouse wrote: --- a/drivers/net/tun.c +++ b/drivers/net/tun.c @@ -1125,7 +1128,9 @@ static netdev_tx_t tun_net_xmit(struct sk_buff *skb, struct net_device *dev) if (tfile->flags & TUN_FASYNC)

Re: Tun congestion/BQL

2019-04-10 Thread David Woodhouse
On Wed, 2019-04-10 at 15:01 +0300, David Woodhouse wrote: > --- a/drivers/net/tun.c > +++ b/drivers/net/tun.c > @@ -1125,7 +1128,9 @@ static netdev_tx_t tun_net_xmit(struct sk_buff > *skb, struct net_device *dev) > if (tfile->flags & TUN_FASYNC) > kill_fasync(&tfile->fasync,
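The quoted hunk sits where tun_net_xmit() has just queued the packet and notifies userspace: kill_fasync() sends SIGIO to any process that requested async notification on the tun fd, and a blocked read() is woken separately. That notify-on-enqueue pattern can be sketched with a condition variable (a Python stand-in for illustration, not the fasync machinery itself):

```python
import threading
from collections import deque

ring = deque()
cond = threading.Condition()
received = []

def xmit(pkt):
    """Driver side: queue the packet, then wake any sleeping reader
    (the role kill_fasync()/wake_up() play after the enqueue)."""
    with cond:
        ring.append(pkt)
        cond.notify()

def reader():
    """Userspace side: block until a packet is available, like a
    blocking read() on the tun fd."""
    with cond:
        while not ring:
            cond.wait()
        received.append(ring.popleft())

t = threading.Thread(target=reader)
t.start()
xmit("packet-0")
t.join(timeout=5)
assert received == ["packet-0"]
```

The wait loop rechecks the ring after every wakeup, mirroring how a woken reader must still re-test whether a packet is actually available.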

Tun congestion/BQL

2019-04-10 Thread David Woodhouse
I've been working on OpenConnect VPN performance. After fixing some local stupidities, I am basically crypto-bound as I suck packets out of the tun device and feed them out over the public network as fast as the crypto library can encrypt them. However, the tun device is dropping packets. I'm test
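The failure mode described — a producer outrunning a crypto-bound consumer into a fixed-size ring — is easy to reproduce in miniature. A Python sketch (the ring size and 3:1 rate are invented for illustration) showing that once the ring fills, every excess packet is simply dropped:

```python
from collections import deque

RING_SIZE = 8
ring = deque()
queued = dropped = 0

# Producer emits three packets for every one the slow "crypto" consumer
# drains, mimicking a tun device feeding an encrypt-and-send loop.
for step in range(300):
    for pkt in range(3):
        if len(ring) < RING_SIZE:
            ring.append((step, pkt))
            queued += 1
        else:
            dropped += 1    # what tun does when ptr_ring_produce() fails
    if ring:
        ring.popleft()      # one packet's worth of crypto per step

assert queued + dropped == 900
assert dropped > 0          # steady state: most excess packets are lost
```

With no flow control, the ring's only job is to absorb short bursts; a sustained rate mismatch turns into a steady drop rate, which is what motivates the stop/wake and BQL discussion in the rest of the thread.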