On Fri, 2019-04-12 at 12:26 +0800, Jason Wang wrote:
> Yes, you can refer to:
>
> 1) QEMU hw/virtio/vhost.c or hw/net/vhost_net.c
>
> 2) DPDK drivers/net/virtio/virtio_user/vhost_kernel_tap.c
>
> The DPDK code seems more compact.
>
> Basically, the setup of TUN/TAP should be the same, then userspace n…
On 2019/4/11 5:25 PM, David Woodhouse wrote:
On Thu, 2019-04-11 at 15:22 +0800, Jason Wang wrote:
If you care about userspace performance, you may want to try vhost + TAP
instead. I admit the API is not user friendly, which needs to be improved,
but then there will be no syscall overhead on packet transmission and
receiving, and even…
On 2019/4/11 5:16 PM, David Woodhouse wrote:
On Thu, 2019-04-11 at 17:04 +0800, Jason Wang wrote:
Btw, I forgot to mention: I modified your patch to use
netif_stop/wake_subqueue() instead.
Hm...
--- /usr/src/debug/kernel-5.0.fc29/linux-5.0.5-200.fc29.x86_64/drivers/net/tun.c	2019-03-03 23:21:29…
On Thu, 2019-04-11 at 15:22 +0800, Jason Wang wrote:
> If you care about userspace performance, you may want to try vhost + TAP
> instead. I admit the API is not user friendly, which needs to be improved,
> but then there will be no syscall overhead on packet transmission and
> receiving, and even…
On Thu, 2019-04-11 at 17:04 +0800, Jason Wang wrote:
> Btw, I forgot to mention: I modified your patch to use
> netif_stop/wake_subqueue() instead.

Hm...

--- /usr/src/debug/kernel-5.0.fc29/linux-5.0.5-200.fc29.x86_64/drivers/net/tun.c	2019-03-03 23:21:29.0 +…
+++ /home/fedora/tun/tun.c
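The netif_stop/wake_subqueue() direction being discussed can be sketched against the 5.0-era tun_net_xmit(). This is a hedged reconstruction, not the patch actually posted in this thread; the BUSY-requeue behaviour and the wake site in the read path are assumptions:

```c
/* Sketch only: 5.0-era drivers/net/tun.c, tun_net_xmit().
 * Instead of dropping when the per-queue ptr_ring is full,
 * stop this subqueue and report BUSY so the stack requeues. */
if (ptr_ring_produce(&tfile->tx_ring, skb)) {
	netif_stop_subqueue(dev, tfile->queue_index);
	rcu_read_unlock();
	return NETDEV_TX_BUSY;
}

/* The read side (e.g. tun_ring_recv()) would then pair this with
 * something like the following after consuming a slot: */
if (netif_running(tun->dev))
	netif_wake_subqueue(tun->dev, tfile->queue_index);
```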
On 2019/4/11 4:56 PM, David Woodhouse wrote:
On Thu, 2019-04-11 at 15:17 +0800, Jason Wang wrote:
Ideally we want to react when the queue starts building rather than when
it starts getting full; by pushing back on upper layers (or, if
forwarding, dropping packets to signal congestion).
This is precisely what my firs…
On Thu, 2019-04-11 at 15:17 +0800, Jason Wang wrote:
> > > Ideally we want to react when the queue starts building rather than when
> > > it starts getting full; by pushing back on upper layers (or, if
> > > forwarding, dropping packets to signal congestion).
> >
> > This is precisely what my firs
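Reacting when the queue starts building, rather than when it is full, is what Byte Queue Limits (BQL) does for ordinary NIC drivers. The tun driver does not actually use BQL; this is purely an illustration of the generic mechanism a driver opts into:

```c
/* Transmit path: account bytes handed to the hardware/ring. */
netdev_tx_sent_queue(netdev_get_tx_queue(dev, txq), skb->len);

/* Completion/consume path: release them. BQL stops and wakes the
 * queue automatically, keeping only a small in-flight backlog. */
netdev_tx_completed_queue(netdev_get_tx_queue(dev, txq), pkts, bytes);
```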
On 2019/4/10 11:32 PM, David Woodhouse wrote:
On Wed, 2019-04-10 at 17:01 +0200, Toke Høiland-Jørgensen wrote:
David Woodhouse writes:
On Wed, 2019-04-10 at 15:42 +0200, Toke Høiland-Jørgensen wrote:
That doesn't seem to make much difference at all; it's still dropping a
lot of packets because ptr_ring_produce() is returning non-zero.
On 2019/4/10 10:33 PM, David Woodhouse wrote:
On Wed, 2019-04-10 at 15:42 +0200, Toke Høiland-Jørgensen wrote:
That doesn't seem to make much difference at all; it's still dropping a
lot of packets because ptr_ring_produce() is returning non-zero.
I think you need to try to stop the queue just in this case? Ideally we ma…
On 2019/4/10 9:42 PM, Toke Høiland-Jørgensen wrote:
Jason Wang writes:
On 2019/4/10 9:01 PM, David Woodhouse wrote:
On Wed, 2019-04-10 at 15:01 +0300, David Woodhouse wrote:
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -1125,7 +1128,9 @@ static netdev_tx_t tun_net_xmit(struct sk_buff *skb, struct net_device *dev)
…
On Wed, 2019-04-10 at 17:01 +0200, Toke Høiland-Jørgensen wrote:
> David Woodhouse writes:
>
> > On Wed, 2019-04-10 at 15:42 +0200, Toke Høiland-Jørgensen wrote:
> > > > > That doesn't seem to make much difference at all; it's still dropping
> > > > > a lot of packets because ptr_ring_produce() is returning non-zero.
David Woodhouse writes:
> On Wed, 2019-04-10 at 15:42 +0200, Toke Høiland-Jørgensen wrote:
>> > > That doesn't seem to make much difference at all; it's still dropping a
>> > > lot of packets because ptr_ring_produce() is returning non-zero.
>> >
>> > I think you need to try to stop the queue just in this case…
On Wed, 2019-04-10 at 15:42 +0200, Toke Høiland-Jørgensen wrote:
> > > That doesn't seem to make much difference at all; it's still dropping a
> > > lot of packets because ptr_ring_produce() is returning non-zero.
> >
> > I think you need to try to stop the queue just in this case? Ideally we ma…
Jason Wang writes:
> On 2019/4/10 9:01 PM, David Woodhouse wrote:
>> On Wed, 2019-04-10 at 15:01 +0300, David Woodhouse wrote:
>>> --- a/drivers/net/tun.c
>>> +++ b/drivers/net/tun.c
>>> @@ -1125,7 +1128,9 @@ static netdev_tx_t tun_net_xmit(struct sk_buff *skb, struct net_device *dev)
>>> …
On 2019/4/10 9:01 PM, David Woodhouse wrote:
On Wed, 2019-04-10 at 15:01 +0300, David Woodhouse wrote:
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -1125,7 +1128,9 @@ static netdev_tx_t tun_net_xmit(struct sk_buff *skb, struct net_device *dev)
 if (tfile->flags & TUN_FASYNC)
…
On Wed, 2019-04-10 at 15:01 +0300, David Woodhouse wrote:
> --- a/drivers/net/tun.c
> +++ b/drivers/net/tun.c
> @@ -1125,7 +1128,9 @@ static netdev_tx_t tun_net_xmit(struct sk_buff *skb, struct net_device *dev)
> if (tfile->flags & TUN_FASYNC)
> 	kill_fasync(&tfile->fasync,…
I've been working on OpenConnect VPN performance. After fixing some
local stupidities I am basically crypto-bound as I suck packets out of
the tun device and feed them out over the public network as fast as the
crypto library can encrypt them.
However, the tun device is dropping packets.
I'm test…