Hi,

On 08/11/17 12:30, Gof via Openvpn-users wrote:
Really, has no one had such problems with VPNs over TCP before?

You're using bridging + tap + proto tcp + port sharing on a VPS and are expecting good latency? Hmmm... there are many reasons why that combination will NOT give you good performance.

However, I see an increase in ping time in my setup as well:
- udp
- tun
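
In essence something like this (only a sketch: the 10.200.0.0/24 subnet matches the ping output below, the certificate paths are placeholders, not my real config):

#v+ routed server sketch (placeholder paths)
dev       tun
proto     udp
port      1194
server    10.200.0.0 255.255.255.0
ca        /etc/openvpn/ca.crt
cert      /etc/openvpn/server.crt
key       /etc/openvpn/server.key
dh        /etc/openvpn/dh2048.pem
keepalive 10 45
persist-key
persist-tun
#v-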

During an iperf run on raw hardware, I see the ping time go up too:

64 bytes from 10.200.0.1: icmp_seq=7 ttl=64 time=0.601 ms
64 bytes from 10.200.0.1: icmp_seq=8 ttl=64 time=0.567 ms
64 bytes from 10.200.0.1: icmp_seq=9 ttl=64 time=3.01 ms
64 bytes from 10.200.0.1: icmp_seq=10 ttl=64 time=4.42 ms
64 bytes from 10.200.0.1: icmp_seq=11 ttl=64 time=2.13 ms
64 bytes from 10.200.0.1: icmp_seq=12 ttl=64 time=5.48 ms
64 bytes from 10.200.0.1: icmp_seq=13 ttl=64 time=6.30 ms
64 bytes from 10.200.0.1: icmp_seq=14 ttl=64 time=4.68 ms
64 bytes from 10.200.0.1: icmp_seq=15 ttl=64 time=5.81 ms
64 bytes from 10.200.0.1: icmp_seq=16 ttl=64 time=4.00 ms
[...]
64 bytes from 10.200.0.1: icmp_seq=23 ttl=64 time=7.11 ms
64 bytes from 10.200.0.1: icmp_seq=24 ttl=64 time=8.01 ms
64 bytes from 10.200.0.1: icmp_seq=25 ttl=64 time=4.86 ms
64 bytes from 10.200.0.1: icmp_seq=26 ttl=64 time=5.68 ms
64 bytes from 10.200.0.1: icmp_seq=27 ttl=64 time=5.31 ms
64 bytes from 10.200.0.1: icmp_seq=28 ttl=64 time=4.17 ms
64 bytes from 10.200.0.1: icmp_seq=29 ttl=64 time=0.355 ms
64 bytes from 10.200.0.1: icmp_seq=30 ttl=64 time=0.577 ms
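
(For reference, that is nothing more than an iperf throughput test and a ping running side by side across the tunnel; roughly like this, with iperf3 and options from memory:)

#v+
# terminal 1: push traffic through the tunnel towards the server (10.200.0.1)
iperf3 -c 10.200.0.1 -t 30
# terminal 2: watch the latency across the tunnel at the same time
ping 10.200.0.1
#v-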


Admittedly, not as much as you are seeing, but it's definitely there, and it is to be expected over a VPN link: during the transfer/throughput test the VPN is encrypting+decrypting like mad, which will affect latency at some point.


HTH,

JJK

On Fri, 27 Oct 2017, Gof via Openvpn-users wrote:

Hi,

I have a problem with OpenVPN and I hope you'll be able to help...

I have two OpenVPN daemons on one Linux machine: one listening on TCP and
one on UDP. They use TAP devices that are bridged together, and the TCP
instance additionally shares its port with ssh via "port-share".
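
For context, the bridge is created before the daemons start, roughly along these lines (a sketch only: tap1 is the TCP instance as in the config below, while tap0 and br0 are illustrative names, not my exact script):

#v+ bridge setup sketch (illustrative names)
openvpn --mktun --dev tap0            # persistent tap for the UDP instance
openvpn --mktun --dev tap1            # persistent tap for the TCP instance
ip link add name br0 type bridge
ip link set tap0 master br0
ip link set tap1 master br0
ip addr add 172.24.44.18/24 dev br0   # the VPN address sits on the bridge, not on the taps
ip link set tap0 up
ip link set tap1 up
ip link set br0 up
#v-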

The problem is with clients connected to the TCP server (I can't switch
them to UDP because of the firewall). During testing, I switched one UDP
client (Linux) to TCP and observed the problem only over TCP.

Ping time between them when the connection is idle is about 30 ms, both over
the Internet and over the VPN, and that's okay.

#v+
$ ping -c 5 vps
PING vps (81.4.x.x) 56(84) bytes of data.
64 bytes from vps (81.4.x.x): icmp_seq=1 ttl=53 time=29.9 ms
64 bytes from vps (81.4.x.x): icmp_seq=2 ttl=53 time=31.5 ms
64 bytes from vps (81.4.x.x): icmp_seq=3 ttl=53 time=31.1 ms
64 bytes from vps (81.4.x.x): icmp_seq=4 ttl=53 time=30.5 ms
64 bytes from vps (81.4.x.x): icmp_seq=5 ttl=53 time=31.1 ms

--- vps ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4006ms
rtt min/avg/max/mdev = 29.915/30.859/31.529/0.582 ms

$ ping -c 5 vps.v
PING vps.v (172.24.44.18) 56(84) bytes of data.
64 bytes from vps.v (172.24.44.18): icmp_seq=1 ttl=64 time=32.1 ms
64 bytes from vps.v (172.24.44.18): icmp_seq=2 ttl=64 time=30.4 ms
64 bytes from vps.v (172.24.44.18): icmp_seq=3 ttl=64 time=30.8 ms
64 bytes from vps.v (172.24.44.18): icmp_seq=4 ttl=64 time=29.8 ms
64 bytes from vps.v (172.24.44.18): icmp_seq=5 ttl=64 time=30.2 ms

--- vps.v ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4006ms
rtt min/avg/max/mdev = 29.845/30.722/32.160/0.816 ms
#v-

When I start a full-speed transfer from the server to the client, the ping
over the VPN rises to about 3500-4000 ms, but over the Internet it stays the
same. This makes remote work nearly impossible despite there being enough
Internet capacity.

#v+
$ ping -c 5 vps.v
PING vps.v (172.24.44.18) 56(84) bytes of data.
64 bytes from vps.v (172.24.44.18): icmp_seq=1 ttl=64 time=3438 ms
64 bytes from vps.v (172.24.44.18): icmp_seq=2 ttl=64 time=4167 ms
64 bytes from vps.v (172.24.44.18): icmp_seq=3 ttl=64 time=4110 ms
64 bytes from vps.v (172.24.44.18): icmp_seq=4 ttl=64 time=3959 ms
64 bytes from vps.v (172.24.44.18): icmp_seq=5 ttl=64 time=3976 ms

--- vps.v ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4141ms
rtt min/avg/max/mdev = 3438.649/3930.410/4167.983/258.253 ms, pipe 4

$ ping -c 5 vps
PING vps (81.4.x.x) 56(84) bytes of data.
64 bytes from vps (81.4.x.x): icmp_seq=1 ttl=53 time=31.5 ms
64 bytes from vps (81.4.x.x): icmp_seq=2 ttl=53 time=36.7 ms
64 bytes from vps (81.4.x.x): icmp_seq=3 ttl=53 time=33.4 ms
64 bytes from vps (81.4.x.x): icmp_seq=4 ttl=53 time=30.6 ms
64 bytes from vps (81.4.x.x): icmp_seq=5 ttl=53 time=30.7 ms

--- vps ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4007ms
rtt min/avg/max/mdev = 30.650/32.641/36.799/2.314 ms
#v-
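
For a sense of scale (assuming the transfer runs at roughly 10 Mbit/s, a figure I have not measured precisely), ~4 s of extra delay corresponds to several megabytes of data queued somewhere between the two tunnel endpoints:

#v+ back-of-the-envelope (assumed rate)
10 Mbit/s * 4 s / 8 bits per byte = 5 MB of buffered data
#v-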

What could be the cause of this, and how can I remedy it? I already tried
adding the TCP_NODELAY option to the socket, but it didn't help.
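
(That attempt corresponds to the socket-flags line that is now commented out again in both configs below:)

#v+
socket-flags  TCP_NODELAY
#v-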

My full configs are below.

#v+ server config (no IP address, because it's set on the bridge)
dev             tap1
port            443
proto           tcp-server
ca              /etc/openvpn/ca.crt
cert            /etc/openvpn/svpst.crt
key             /etc/openvpn/svpst.key
dh              /etc/openvpn/dh2048.pem
crl-verify      /etc/openvpn/crl.pem
mode            server
tls-server
client-to-client
keepalive       10 45
max-clients     64
verb            4
mute            20
persist-key
persist-tun
comp-lzo        no
user            nobody
group           nogroup
# socket-flags  TCP_NODELAY
port-share      127.0.0.1 22
cipher          AES-256-CBC
#v-

#v+ client config
dev             tap0
port            443
proto           tcp-client
ca              /etc/openvpn/ca.crt
cert            /etc/openvpn/pi.crt
key             /etc/openvpn/pi.key
remote          81.4.x.x
ifconfig        172.24.44.20 255.255.255.0
comp-lzo        no
keepalive       10 45
tls-client
persist-key
persist-tun
# socket-flags  TCP_NODELAY
user            nobody
group           nogroup
connect-retry   1
connect-timeout 7
verb            4
mute            60
script-security 2
up              /etc/openvpn/pi.up.sh
down            /etc/openvpn/pi.down.sh
cipher          AES-256-CBC
#v-

The up and down scripts mentioned above only set default routing through the
VPN (I'm using two routing tables and fwmark so that all ports on the server
except 443 are reached through the VPN, and only 443 goes through the default
gateway), but the machine I did the testing on doesn't use this form of
routing and the problem is the same there, so it can be ruled out...
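
For completeness, the idea in those scripts is roughly the following (table number, mark value and the on-link routes are illustrative; this is not the literal content of pi.up.sh):

#v+ rough idea of pi.up.sh (illustrative values)
# traffic to the server's port 443 keeps using the physical default gateway
# (that is the tunnel's own TCP connection)
iptables -t mangle -A OUTPUT -p tcp -d 81.4.x.x --dport 443 -j MARK --set-mark 1
ip rule add fwmark 1 table main priority 100
# everything else is routed through the VPN via a second routing table
ip route add 172.24.44.0/24 dev tap0 table 100
ip route add default via 172.24.44.18 dev tap0 table 100
ip rule add from all table 100 priority 200
#v-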


