Re: Issues with TCP Timestamps allocation

2019-07-17 Thread Vitalij Satanivskij



Hello.

Are there any changes regarding this problem?


I'm using FreeBSD 12 on my desktop and can confirm the problem occurs with
some hosts.



Michael Tuexen wrote:
MT> 
MT> 
MT> > On 9. Jul 2019, at 14:58, Paul  wrote:
MT> > 
MT> > Hi Michael,
MT> > 
MT> > 9 July 2019, 15:34:29, by "Michael Tuexen" :
MT> > 
MT> >> 
MT> >> 
MT> >>> On 8. Jul 2019, at 17:22, Paul  wrote:
MT> >>> 
MT> >>> 
MT> >>> 
MT> >>> 8 July 2019, 17:12:21, by "Michael Tuexen" :
MT> >>> 
MT> > On 8. Jul 2019, at 15:24, Paul  wrote:
MT> > 
MT> > Hi Michael,
MT> > 
MT> > 8 July 2019, 15:53:15, by "Michael Tuexen" :
MT> > 
MT> >>> On 8. Jul 2019, at 12:37, Paul  wrote:
MT> >>> 
MT> >>> Hi team,
MT> >>> 
MT> >>> Recently we had an upgrade to 12 Stable. Immediately after, we started
MT> >>> seeing some strange connection establishment timeouts to some fixed number
MT> >>> of external (world) hosts. The issue was persistent and easy to reproduce.
MT> >>> Thanks to the patience and dedication of our system engineer we have
MT> >>> tracked this issue down to a specific commit:
MT> >>> 
MT> >>> https://svnweb.freebsd.org/base?view=revision&revision=338053
MT> >>> 
MT> >>> This patch was also back-ported into 11 Stable:
MT> >>> 
MT> >>> https://svnweb.freebsd.org/base?view=revision&revision=348435
MT> >>> 
MT> >>> Among other things this patch changes the timestamp allocation strategy
MT> >>> by introducing deterministic randomness via a hash function that takes
MT> >>> into account a random key as well as source address, source port, dest
MT> >>> address and dest port. As a result, timestamp offsets of different
MT> >>> tuples (SA,SP,DA,DP) will be wildly different and will jump from small
MT> >>> to large numbers and back, whenever something in the tuple changes.
MT> >> Hi Paul,
MT> >> 
MT> >> this is correct.
MT> >> 
MT> >> Please note that the same happens with the old method, if two hosts with
MT> >> different uptimes are behind a consumer grade NAT.
MT> > 
MT> > If NAT does not replace timestamps then yes, it should be the case.
MT> > 
MT> >>> 
MT> >>> After performing various tests of hosts that produce the above mentioned
MT> >>> issue, we came to the conclusion that there are some interesting
MT> >>> implementations that drop SYN packets with timestamps smaller than the
MT> >>> largest timestamp value from streams of all recent or current connections
MT> >>> from a specific address. This looks like some kind of SYN flood protection.
MT> >> This also breaks multiple hosts with different uptimes behind a consumer
MT> >> level NAT talking to such a server.
MT> >>> 
MT> >>> To ensure that each external host is not going to see wild jumps of
MT> >>> timestamp values, I propose a patch that removes the ports from the
MT> >>> equation altogether when calculating the timestamp offset:
MT> >>> 
MT> >>> Index: sys/netinet/tcp_subr.c
MT> >>> ===================================================================
MT> >>> --- sys/netinet/tcp_subr.c  (revision 348435)
MT> >>> +++ sys/netinet/tcp_subr.c  (working copy)
MT> >>> @@ -2224,7 +2224,22 @@
MT> >>>  uint32_t
MT> >>>  tcp_new_ts_offset(struct in_conninfo *inc)
MT> >>>  {
MT> >>> -	return (tcp_keyed_hash(inc, V_ts_offset_secret));
MT> >>> +	/*
MT> >>> +	 * Some implementations show strange behaviour when wildly random
MT> >>> +	 * timestamps are allocated for different streams. It seems that
MT> >>> +	 * only the SYN packets are affected. Observed implementations drop
MT> >>> +	 * SYN packets with timestamps smaller than the largest timestamp
MT> >>> +	 * value of all recent or current connections from a specific
MT> >>> +	 * address. To mitigate this, ensure that each host always observes
MT> >>> +	 * increasing timestamps, no matter the stream, by dropping the
MT> >>> +	 * ports from the equation.
MT> >>> +	 */
MT> >>> +	struct in_conninfo inc_copy = *inc;
MT> >>> +
MT> >>> +	inc_copy.inc_fport = 0;
MT> >>> +	inc_copy.inc_lport = 0;
MT> >>> +
MT> >>> +	return (tcp_keyed_hash(&inc_copy, V_ts_offset_secret));
MT> >>>  }
MT> >>> 
MT> >>> /*
MT> >>> 
MT> >>> In any case, the solution of the uptime leak, implemented in r338053, is
MT> >>> not going to suffer, because a supposed attacker is currently able to use
MT> >>> any fixed values of SP and DP, albeit not 0, anyway, to remove them from
MT> >>> the equation.
MT> >> Can you describe how a peer can compute the uptime from two observed timestamps?
MT> >> I don't see how you can do that...
MT> > 
MT> > Supposed attacker could run a script that continuously m
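The allocation scheme discussed above can be sketched as follows. This is an illustration only: FreeBSD's actual `tcp_keyed_hash()` is a SipHash over the `in_conninfo` keyed with `V_ts_offset_secret`; here a keyed BLAKE2 stands in for it, and the addresses and ports are made up. The `include_ports=False` branch models the proposed patch, which zeroes the ports so every connection to the same host shares one offset.

```python
import hashlib

SECRET = b"per-boot-random-key"  # stand-in for V_ts_offset_secret


def ts_offset(saddr, sport, daddr, dport, include_ports=True):
    """Keyed hash over the connection tuple, reduced to a 32-bit offset."""
    h = hashlib.blake2b(key=SECRET, digest_size=4)
    h.update(saddr.encode())
    h.update(daddr.encode())
    if include_ports:  # the proposed patch drops the ports from the hash
        h.update(sport.to_bytes(2, "big"))
        h.update(dport.to_bytes(2, "big"))
    return int.from_bytes(h.digest(), "big")


# With ports hashed in, two connections to the same host get unrelated
# offsets; with ports dropped, they share the same offset.
a = ts_offset("192.0.2.1", 1111, "198.51.100.7", 443)
b = ts_offset("192.0.2.1", 2222, "198.51.100.7", 443)
c = ts_offset("192.0.2.1", 1111, "198.51.100.7", 443, include_ports=False)
d = ts_offset("192.0.2.1", 2222, "198.51.100.7", 443, include_ports=False)
assert a != b and c == d
```

This makes the trade-off in the thread concrete: the per-tuple variant gives each stream an unrelated offset (which some middleboxes or servers apparently treat as a SYN anomaly), while the per-host variant keeps timestamps monotonic as seen by any one peer.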

Re: Issues with TCP Timestamps allocation

2019-07-17 Thread Michael Tuexen
> On 17. Jul 2019, at 09:42, Vitalij Satanivskij  wrote:
> 
> 
> 
> Hello.
> 
> Are there any changes regarding this problem?
> 
> 
> I'm using FreeBSD 12 on my desktop and can confirm the problem occurs with
> some hosts.
Can you provide a list of some of these hosts?
I'll put up a change for review later today.

In the meantime you can deal with the buggy hosts by disabling the timestamps
or dropping extensions on SYN retransmits.

Best regards
Michael

Re: How to set up ipfw(8) NAT between an alias and the main IP address, when the alias is in another network?

2019-07-17 Thread Vinícius Zavam via freebsd-net
Am Sa., 6. Juli 2019 um 08:02 Uhr schrieb Yuri :

> My network interface looks like this:
>
> sk0: flags=8843 metric 0 mtu 1500
>  options=80009
>  ether 01:3c:47:8a:17:12
>  inet 192.168.1.2 netmask 0xff00 broadcast 192.168.1.255
>  inet 192.168.100.2 netmask 0x broadcast 192.168.100.2
>  media: Ethernet autoselect (100baseTX )
>  status: active
>  nd6 options=29
>
> The second IP address is an alias that is used for jail.
>
> I would like to set up NAT so that this jail would access the internet
> through the same interface.
>
>
> I tried this script:
>
>
> fw="/sbin/ipfw -q"
>
> $fw nat 1 config redirect_addr 192.168.100.2 192.168.1.2 redirect_addr
> 192.168.1.2 192.168.100.2 if sk0 unreg_only reset
>
> $fw add 1001 nat 1 tcp from 192.168.100.2/32 to any via sk0 keep-state
>
> $fw add 1002 check-state
>
>
> The rule 1001 has keep-state, therefore it should process both outgoing
> TCP and incoming response packets. However, only the outbound packets are
> NATted; the inbound ones are not.
>
> What is wrong, and how can I fix this script?
>
>
> Thank you,
>
> Yuri
>

jail ... ip4=inherit ?
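If NAT is still wanted rather than `ip4=inherit`, one commonly used alternative is to drop the keep-state/check-state pair and use explicit stateless in/out NAT rules, since the dynamic rule is created from the pre-translation tuple and so does not match the returning packets. This is an untested sketch only, reusing the interface and addresses from the question; exact behaviour also depends on `net.inet.ip.fw.one_pass`:

```shell
# Sketch: stateless ipfw in-kernel NAT for the jail alias (untested).
fw="/sbin/ipfw -q"

# One NAT instance translating the jail address to the primary address.
$fw nat 1 config ip 192.168.1.2 unreg_only reset

# Translate outgoing jail traffic; reverse-translate returning traffic.
$fw add 1001 nat 1 ip from 192.168.100.2 to any out via sk0
$fw add 1002 nat 1 ip from any to 192.168.1.2 in via sk0
```

The design point is that each direction is matched on its own post/pre-translation addresses, so no dynamic state is needed.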


-- 
Vinícius Zavam
keybase.io/egypcio
___
freebsd-net@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"


Re: Issues with TCP Timestamps allocation

2019-07-17 Thread Vitalij Satanivskij
Hello again

Michael Tuexen wrote:
MT> > On 17. Jul 2019, at 09:42, Vitalij Satanivskij  wrote:
MT> > 
MT> > 
MT> > 
MT> > Hello.
MT> > 
MT> > Are there any changes regarding this problem?
MT> > 
MT> > 
MT> > I'm using FreeBSD 12 on my desktop and can confirm the problem occurs
MT> > with some hosts.
MT> Can you provide a list of some of these hosts?
MT> I'll put up a change for review later today.


Here are some hosts:

5.9.242.150 https://vitagramma.com
77.120.8.194 https://volia.com
31.41.220.92 https://moemisto.ua
185.5.72.33 https://fotostrana.ru

The problem can be seen by sending curl requests to the hosts serially
(manually, so the delay is from a few msec to a few sec)

Or by using a proxy on the machine with parallel/serial requests (e.g. squid
or a reverse proxy in nginx)

On a system before https://svnweb.freebsd.org/base?view=revision&revision=338053
such behavior is not seen.

MT> 
MT> In the meantime you can deal with the buggy hosts by disabling the timestamps
MT> or dropping extensions on SYN retransmits.

You mean by some code changes?



Re: Issues with TCP Timestamps allocation

2019-07-17 Thread Michael Tuexen
> On 17. Jul 2019, at 12:09, Vitalij Satanivskij  wrote:
> 
> Hello again
> 
> Michael Tuexen wrote:
> MT> > On 17. Jul 2019, at 09:42, Vitalij Satanivskij  wrote:
> MT> > 
> MT> > 
> MT> > 
> MT> > Hello.
> MT> > 
> MT> > Are there any changes regarding this problem?
> MT> > 
> MT> > 
> MT> > I'm using FreeBSD 12 on my desktop and can confirm the problem occurs
> MT> > with some hosts.
> MT> Can you provide a list of some of these hosts?
> MT> I'll put up a change for review later today.
> 
> 
> Here are some hosts:
> 
> 5.9.242.150 https://vitagramma.com
> 77.120.8.194 https://volia.com
> 31.41.220.92 https://moemisto.ua
> 185.5.72.33 https://fotostrana.ru
OK, thanks. That might help to figure out what is broken exactly. I'm not yet
sure if it is a broken endpoint implementation or a middlebox making false
assumptions.
> 
> The problem can be seen by sending curl requests to the hosts serially
> (manually, so the delay is from a few msec to a few sec)
> 
> Or by using a proxy on the machine with parallel/serial requests (e.g. squid
> or a reverse proxy in nginx)
> 
> On a system before
> https://svnweb.freebsd.org/base?view=revision&revision=338053 such behavior
> is not seen.
> 
> MT> 
> MT> In the meantime you can deal with the buggy hosts by disabling the timestamps
> MT> or dropping extensions on SYN retransmits.
> 
> You mean by some code changes?
No.

Two options:

Option 1: Drop the TCP timestamp option on the third retransmission
To enable this, you configure on the client
sudo sysctl -w net.inet.tcp.rexmit_drop_options=1
or put
net.inet.tcp.rexmit_drop_options=1
in /etc/sysctl.conf
and reboot
In case of the broken host, the first SYN retransmission will happen 1 second
after the initial SYN segment, the second retransmission will happen 1.2 seconds
after the first. On the third retransmission, which happens again 1.2 seconds
later, the TCP timestamp option is dropped and the connection setup will
succeed. This gives you a total delay of 3.4 seconds on connection setup
instead of the longer timeout.
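The delay budget stated above can be checked directly. This is just the arithmetic from the paragraph, with the retransmission gaps written out:

```python
# Gaps described above: initial RTO of 1 s, then two retransmissions
# 1.2 s apart; the timestamp option is dropped on the third retransmission.
delays = [1.0, 1.2, 1.2]
total = sum(delays)
assert abs(total - 3.4) < 1e-9  # total connection-setup delay in seconds
```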

Option 2: Disable the TCP timestamps (and window scaling)
To enable this, you configure on the client
sudo sysctl -w net.inet.tcp.rfc1323=0
or put
net.inet.tcp.rfc1323=0
in /etc/sysctl.conf
and reboot.
This disables the timestamp option and window scaling completely. This allows
you to set up the connections without any delay. However, you don't have the
benefits of the extensions.

Both options don't require any code changes.

Best regards
Michael



Re: Issues with TCP Timestamps allocation

2019-07-17 Thread Vitalij Satanivskij
MT> > MT> In the meantime you can deal with the buggy hosts by disabling the timestamps
MT> > MT> or dropping extensions on SYN retransmits.
MT> > 
MT> > You mean by some code changes?
MT> No.
MT> 
MT> Two options:
MT> 
MT> Option 1: Drop the TCP timestamp option on the third retransmission
MT> To enable this, you configure on the client
MT> sudo sysctl -w net.inet.tcp.rexmit_drop_options=1
MT> or put
MT> net.inet.tcp.rexmit_drop_options=1
MT> in /etc/sysctl.conf
MT> and reboot
MT> In case of the broken host, the first SYN retransmission will happen 1 second
MT> after the initial SYN segment, the second retransmission will happen 1.2
MT> seconds after the first. On the third retransmission, which happens again
MT> 1.2 seconds later, the TCP timestamp option is dropped and the connection
MT> setup will succeed. This gives you a total delay of 3.4 seconds on
MT> connection setup instead of the longer timeout.

The first option is not working. I still see the same behaviour.


MT> 
MT> Option 2: Disable the TCP timestamps (and window scaling)
MT> To enable this, you configure on the client
MT> sudo sysctl -w net.inet.tcp.rfc1323=0
MT> or put
MT> net.inet.tcp.rfc1323=0
MT> in /etc/sysctl.conf
MT> and reboot.
MT> This disables the timestamp option and window scaling completely. This
MT> allows you to set up the connections without any delay. However, you don't
MT> have the benefits of the extensions.
MT> 
MT> Both options don't require any code changes.

This option was tested some time ago. Yes, it helps. But the overall
performance of TCP networking... let's say, too bad :(





Re: Issues with TCP Timestamps allocation

2019-07-17 Thread Michael Tuexen
> On 17. Jul 2019, at 13:55, Vitalij Satanivskij  wrote:
> 
> MT> > MT> In the meantime you can deal with the buggy hosts by disabling the timestamps
> MT> > MT> or dropping extensions on SYN retransmits.
> MT> > 
> MT> > You mean by some code changes?
> MT> No.
> MT> 
> MT> Two options:
> MT> 
> MT> Option 1: Drop the TCP timestamp option on the third retransmission
> MT> To enable this, you configure on the client
> MT> sudo sysctl -w net.inet.tcp.rexmit_drop_options=1
> MT> or put
> MT> net.inet.tcp.rexmit_drop_options=1
> MT> in /etc/sysctl.conf
> MT> and reboot
> MT> In case of the broken host, the first SYN retransmission will happen 1
> MT> second after the initial SYN segment, the second retransmission will
> MT> happen 1.2 seconds after the first. On the third retransmission, which
> MT> happens again 1.2 seconds later, the TCP timestamp option is dropped and
> MT> the connection setup will succeed. This gives you a total delay of 3.4
> MT> seconds on connection setup instead of the longer timeout.
> 
> The first option is not working. I still see the same behaviour.
Interesting. It works for me:

tuexen@head:~ % curl https://vitagramma.com > /dev/null
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
100 182650 182650 0  33637  0 --:--:-- --:--:-- --:--:-- 33575
tuexen@head:~ % curl https://vitagramma.com > /dev/null
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
100 182650 182650 0   4834  0 --:--:--  0:00:03 --:--:--  4833
tuexen@head:~ % curl https://vitagramma.com > /dev/null
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
100 182650 182650 0  35813  0 --:--:-- --:--:-- --:--:-- 35813
tuexen@head:~ % time curl https://vitagramma.com > /dev/null
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
100 182650 182650 0  48320  0 --:--:-- --:--:-- --:--:-- 48320
0.012u 0.031s 0:00.39 10.2% 140+245k 0+0io 0pf+0w
tuexen@head:~ % time curl https://vitagramma.com > /dev/null
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
100 182650 182650 0   4592  0 --:--:--  0:00:03 --:--:--  4591
0.031u 0.010s 0:03.99 1.0%  80+140k 0+0io 0pf+0w
tuexen@head:~ % curl https://vitagramma.com > /dev/null
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
100 182650 182650 0  37815  0 --:--:-- --:--:-- --:--:-- 37737
tuexen@head:~ % curl https://vitagramma.com > /dev/null
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
100 182650 182650 0  27261  0 --:--:-- --:--:-- --:--:-- 27220
tuexen@head:~ % curl https://vitagramma.com > /dev/null
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
100 182650 182650 0   4533  0 --:--:--  0:00:04 --:--:--  4533
tuexen@head:~ % curl https://vitagramma.com > /dev/null
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
100 182650 182650 0  48320  0 --:--:-- --:--:-- --:--:-- 48192
tuexen@head:~ % curl https://vitagramma.com > /dev/null
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
100 182650 182650 0   4746  0 --:--:--  0:00:03 --:--:--  4745
tuexen@head:~ % curl https://vitagramma.com > /dev/null
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
100 182650 182650 0   4500  0 --:--:--  0:00:04 --:--:--  4767
tuexen@head:~ % curl https://vitagramma.com > /dev/null
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
100 182650 182650 0   4726  0 --:--:--  0:00:03 --:--:--  4726
tuexen@head:~ % curl https://vitagramma.com > /dev/null
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
100 182650 182650 0 

Re: Issues with TCP Timestamps allocation

2019-07-17 Thread Vitalij Satanivskij
Hmm, looks like it works with some hosts but not with others.

Wed/17.07:/home/satan
hell:-1522/15:28>curl https://volia.com > /dev/null
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
100 415190 415190 0   137k  0 --:--:-- --:--:-- --:--:--  137k
Wed/17.07:/home/satan
hell:-1523/15:28>curl https://volia.com > /dev/null
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
  0 00 00 0  0  0 --:--:--  0:00:53 --:--:-- 0^C
Wed/17.07:/home/satan
hell:-1524/15:29>sysctl net.inet.tcp.rexmit_drop_options
net.inet.tcp.rexmit_drop_options: 1

But 

MT> Interesting. It works for me:
MT> 

Re: Issues with TCP Timestamps allocation

2019-07-17 Thread Michael Tuexen
> On 17. Jul 2019, at 14:32, Vitalij Satanivskij  wrote:
> 
> Hmm, looks like it works with some hosts but not with others
> 
> Wed/17.07:/home/satan
> hell:-1522/15:28>curl https://volia.com > /dev/null
>   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
>                                  Dload  Upload   Total   Spent    Left  Speed
> 100 41519    0 41519    0     0   137k      0 --:--:-- --:--:-- --:--:--  137k
> Wed/17.07:/home/satan
> hell:-1523/15:28>curl https://volia.com > /dev/null
>   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
>                                  Dload  Upload   Total   Spent    Left  Speed
>   0     0    0     0    0     0      0      0 --:--:--  0:00:53 --:--:--     0^C
> Wed/17.07:/home/satan
> hell:-1524/15:29>sysctl net.inet.tcp.rexmit_drop_options
> net.inet.tcp.rexmit_drop_options: 1
OK, I can confirm that for https://volia.com only a timeout helps.

What I have observed so far is that, for the "blocking" to occur, it is
crucial that the server sends the FIN and therefore goes into the TIME-WAIT
state. The timeout seems to be 60 seconds.
The blocking is also not limited to a single server port.

I'm not sure yet whether it is a broken end point or a broken middlebox.

Best regards
Michael
> 
> But 
> 
> MT> [...]

Re: Issues with TCP Timestamps allocation

2019-07-17 Thread Kevin Bowling
Any knowledge of the endpoints, Linux boxes misconfigured with
tcp_tw_recycle?

On Wed, Jul 17, 2019 at 5:42 AM Michael Tuexen  wrote:

> > On 17. Jul 2019, at 14:32, Vitalij Satanivskij  wrote:
> > [...]
> OK, I can confirm that for https://volia.com only a timeout helps.
>
> What I have observed so far is that, for the "blocking" to occur, it is
> crucial that the server sends the FIN and therefore goes into the TIME-WAIT
> state. The timeout seems to be 60 seconds.
> The blocking is also not limited to a single server port.
>
> I'm not sure yet whether it is a broken end point or a broken middlebox.
>
> Best regards
> Michael
> >
> > But
> >
> > MT> [...]

[Differential] D19422: if_vxlan(4) Allow set MTU more than 1500 bytes.

2019-07-17 Thread aleksandr.fedorov_itglobal.com (Aleksandr Fedorov)
aleksandr.fedorov_itglobal.com added a comment.


  ping?

CHANGES SINCE LAST ACTION
  https://reviews.freebsd.org/D19422/new/

REVISION DETAIL
  https://reviews.freebsd.org/D19422

EMAIL PREFERENCES
  https://reviews.freebsd.org/settings/panel/emailpreferences/

To: aleksandr.fedorov_itglobal.com, bryanv, hrs, #network, rgrimes, krion, jhb
Cc: evgueni.gavrilov_itglobal.com, olevole_olevole.ru, ae, freebsd-net-list, 
krzysztof.galazka_intel.com
___
freebsd-net@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"


Re: Issues with TCP Timestamps allocation

2019-07-17 Thread Michael Tuexen
> On 17. Jul 2019, at 18:09, Kevin Bowling  wrote:
> 
> Any knowledge of the endpoints, Linux boxes misconfigured with tcp_tw_recycle?
I contacted some Linux guys and they told me that tcp_tw_recycle is specific
to the 5-tuple. Is that not correct?
The servers may be running Linux; I haven't checked all of them...

For the problem we are seeing here, the port numbers are irrelevant. If the
server sends a FIN (and therefore goes into TIME-WAIT), one can experience the
problem even if the client changes the port number and even if you talk to
other server ports. I tested with the ssh port in addition to the web
traffic.

Best regards
Michael
> 
> On Wed, Jul 17, 2019 at 5:42 AM Michael Tuexen  wrote:
> > On 17. Jul 2019, at 14:32, Vitalij Satanivskij  wrote:
> > [...]

Re: Issues with TCP Timestamps allocation

2019-07-17 Thread Michael Tuexen
> On 17. Jul 2019, at 09:42, Vitalij Satanivskij  wrote:
> 
> 
> 
> Hello. 
> 
> Are there any changes regarding this problem?
Please find a patch in https://reviews.freebsd.org/D20980

If possible, please test and report.

Best regards
Michael
> 
> 
> I'm using FreeBSD 12 on my desktop and can confirm the problem occurs with
> some hosts.
> 
> 
> 
> Michael Tuexen wrote:
> MT> [...]
> MT> >> Hi Paul,
> MT> >> 
> MT> >> this is correct.
> MT> >> 
> MT> >> Please note that the same happens with the old method, if two hosts with
> MT> >> different uptimes are behind a consumer grade NAT.
> MT> > 
> MT> > If NAT does not replace timestamps then yes, it should be the case.
> MT> > 
> MT> >>> 
> MT> >>> After performing various tests of hosts that produce the above mentioned
> MT> >>> issue we came to the conclusion that there are some interesting
> MT> >>> implementations that drop SYN packets with timestamps smaller than the
> MT> >>> largest timestamp value from streams of all recent or current connections
> MT> >>> from a specific address. This looks like some kind of SYN flood protection.
> MT> >> This also breaks multiple hosts with different uptimes behind a consumer
> MT> >> level NAT talking to such a server.
> MT> >>> 
> MT> >>> To ensure that each external host is not going to see wild jumps of
> MT> >>> timestamp values I propose a patch that removes ports from the equation
> MT> >>> altogether, when calculating the timestamp offset:
> MT> >>> 
> MT> >>> Index: sys/netinet/tcp_subr.c
> MT> >>> ===================================================================
> MT> >>> --- sys/netinet/tcp_subr.c	(revision 348435)
> MT> >>> +++ sys/netinet/tcp_subr.c	(working copy)
> MT> >>> @@ -2224,7 +2224,22 @@
> MT> >>>  uint32_t
> MT> >>>  tcp_new_ts_offset(struct in_conninfo *inc)
> MT> >>>  {
> MT> >>> -	return (tcp_keyed_hash(inc, V_ts_offset_secret));
> MT> >>> +	/*
> MT> >>> +	 * Some implementations show a strange behaviour when wildly random
> MT> >>> +	 * timestamps are allocated for different streams. It seems that only
> MT> >>> +	 * the SYN packets are affected. Observed implementations drop SYN
> MT> >>> +	 * packets with timestamps smaller than the largest timestamp value
> MT> >>> +	 * of all recent or current connections from a specific address. To
> MT> >>> +	 * mitigate this we are going to ensure that each host will always
> MT> >>> +	 * observe timestamps as increasing no matter the stream: by
> MT> >>> +	 * dropping the ports from the equation.
> MT> >>> +	 */
> MT> >>> +	struct in_conninfo inc_copy = *inc;
> MT> >>> +
> MT> >>> +	inc_copy.inc_fport = 0;
> MT> >>> +	inc_copy.inc_lport = 0;
> MT> >>> +
> MT> >>> +	return (tcp_keyed_hash(&inc_copy, V_ts_offset_secret));
> MT> >>>  }
> MT> >>> 
> MT> >>> /*
> MT> >>> 
> MT> >>> In any case, the solution of the uptim

[Bug 238796] ipfilter: fix unremovable rules and rules checksum for comparison

2019-07-17 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=238796

Cy Schubert  changed:

   What|Removed |Added

 Attachment #205808|0   |1
is obsolete||

--- Comment #23 from Cy Schubert  ---
Created attachment 205851
  --> https://bugs.freebsd.org/bugzilla/attachment.cgi?id=205851&action=edit
This should fix this PR.

Can you try this patch? Make sure your tree is up to date before applying it.

-- 
You are receiving this mail because:
You are on the CC list for the bug.


[Bug 238796] ipfilter: fix unremovable rules and rules checksum for comparison

2019-07-17 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=238796

--- Comment #24 from Cy Schubert  ---
The problem is that the following fix from 2009, ip_fil.h r1.31 and fil.c r1.53,
was incomplete. A number of issues not related to this PR have already been
fixed. The posted patch directly fixes this PR.

The upstream fix was incomplete:
2580062 from/to targets should be able to use any interface name
