Howto: ipsec tunnel routing both IPv4 and IPv6? Possible?
Hi,

I have been using an ipsec tunnel to route local IPv4 traffic for years now (/etc/rc.conf):

  cloned_interfaces="ipsec0"
  static_routes="tunnel0"
  create_args_ipsec0="reqid 104"
  ifconfig_ipsec0="inet 10.2.2.250 10.1.1.254 tunnel 1.2.3.4 10.20.30.40"
  route_tunnel0="10.1.1.0/24 10.1.1.254"

ifconfig ipsec0 (relevant info only):

  ipsec0: flags=1008051 metric 0 mtu 1400
        tunnel inet 1.2.3.4 --> 10.20.30.40
        inet 10.2.2.250 --> 10.1.1.254 netmask 0xff00
        reqid: 104

pf firewall entries are set to allow ESP over that tunnel.

Now I want to route local IPv6 in addition, *if* that is possible at all. According to the manual for if_ipsec(4), that should be possible, if I understand the combination of "IPv4 and IPv6 traffic" and "over either IPv4 or IPv6" correctly (I am not a native English speaker): https://man.freebsd.org/cgi/man.cgi?query=if_ipsec(4)

  It can tunnel IPv4 and IPv6 traffic over either IPv4 or IPv6 and secure it with ESP.

Sadly, that manual page doesn't provide an IPv6 example ... All of my following attempts failed:

1) adding a second ipsec1 interface connecting the very same IPv4 endpoints:

  cloned_interfaces="ipsec0 ipsec1"
  static_routes="tunnel0 tunnel1"
  create_args_ipsec1="reqid 106"
  ifconfig_ipsec1="inet fd00:b:b:b::250 fd00:a:a:a::254 tunnel 1.2.3.4 10.20.30.40"
  route_tunnel1="fd00:a:a:a::/64 fd00:a:a:a::254"

  Error: route: bad address: fd00:a:a:a::

  ifconfig ipsec1:
  ipsec1: flags=8010 metric 0 mtu 1400
        groups: ipsec
        reqid: 106

  Thus, no tunnel and no route were set.

2) as in 1), besides:

  route_tunnel1="fd00:a:a:a:: prefixlen 64 fd00:a:a:a::254"

  No success, same error regarding the route.

3) as in 1), besides:

  ifconfig_ipsec1="inet fd00:b:b:b::250 fd00:a:a:a::254 tunnel 1.2.3.4 10.20.30.40"

  No success, same error regarding the route.

4) setting the route via the route command:

  /sbin/route add -inet6 default -gateway fd00:a:a:a::254

  Error: add net default: gateway fd00:a:a:a::254 fib 0: Invalid argument

I am running out of ideas, and Google doesn't come up with relevant answers, at least not for me. Any help, hints, or documents are highly appreciated.

Thanks and regards,
Michael
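For readers unfamiliar with how rc(8) consumes these variables, here is a rough sketch of the commands the working IPv4 lines above expand to at boot. This is based on my reading of rc.d/netif and rc.d/routing, not verbatim rc.d output; the values are the ones from the post:

  ifconfig ipsec0 create reqid 104                                   # cloned_interfaces + create_args_ipsec0
  ifconfig ipsec0 inet 10.2.2.250 10.1.1.254 tunnel 1.2.3.4 10.20.30.40   # ifconfig_ipsec0
  route add 10.1.1.0/24 10.1.1.254                                   # static_routes + route_tunnel0

Note that the contents of route_tunnel0 are handed to route(8) word for word, which matters for the IPv6 attempts below.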
Re: Howto: ipsec tunnel routing both IPv4 and IPv6? Possible?
Andrey V. Elsukov wrote:

> ifconfig_ipsec0_ipv6="inet6 fd00:b:b:b::250 fd00:a:a:a::254 prefixlen 128"

Thanks, now I do get the tunnel set up (after adding the tunnel parameters to your hint):

  ifconfig_ipsec0="inet 10.2.2.250 10.1.1.254 tunnel 1.2.3.4 10.20.30.40"
  ifconfig_ipsec0_ipv6="inet6 fd00:b:b:b::250 fd00:a:a:a::254 prefixlen 128 tunnel 1.2.3.4 10.20.30.40"
  route_tunnel0="10.1.1.0/24 10.1.1.254"
  route_tunnel0="fd00:a:a:a::/64 fd00:a:a:a::254"

ipsec0 (stripped to the relevant part):

  ipsec0: flags=1008051 metric 0 mtu 1400
        tunnel inet 1.2.3.4 --> 10.20.30.40
        inet 10.2.2.250 --> 10.1.1.254 netmask 0xff00
        inet6 fd00:b:b:b::250 --> fd00:a:a:a::254 prefixlen 128

netstat -rn (stripped to the relevant part):

  Internet:
  Destination        Gateway            Flags   Netif Expire
  10.1.1.0/24        10.1.1.254         UGS     ipsec0
  10.1.1.254         link#4             UH      ipsec0
  10.2.2.250         link#3             UHS     lo0

  Internet6:
  Destination        Gateway            Flags   Netif Expire
  fd00:a:a:a::254    link#4             UH      ipsec0
  fd00:b:b:b::250    link#3             UHS     lo0

Thus, the IPv6 route is still missing (error: "route: bad address: fd00:a:a:a::").

Thank you very much, any further help regarding IPv6 routing through the tunnel is very much appreciated.

Regards,
Michael
Re: Howto: ipsec tunnel routing both IPv4 and IPv6? Possible?
Marek Zarychta wrote:

> On 15.01.2024 at 15:35, Michael Grimm wrote:
>> route_tunnel0="fd00:a:a:a::/64 fd00:a:a:a::254"
> Please try:
> route_tunnel0="-6 -net fd00:a:a:a::/64 fd00:a:a:a::254"

Bingo! That did the trick:

  Internet6:
  Destination        Gateway            Flags   Netif Expire
  fd00:a:a:a::/64    fd00:a:a:a::254    UGS     ipsec0
  fd00:a:a:a::254    link#4             UH      ipsec0
  fd00:b:b:b::250    link#3             UHS     lo0

Thanks to all who helped, and to me: lessons learned ;-)

Regards,
Michael
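For context, my reading of why this works (treat it as a sketch, not an authoritative rc.d analysis): everything in a route_<name> variable is passed to route(8) verbatim, so the working line above is equivalent to running

  route add -6 -net fd00:a:a:a::/64 fd00:a:a:a::254

by hand. Without the -6 (or -inet6) family hint, route(8) tries to parse the destination as an IPv4 address, which is presumably where the "bad address: fd00:a:a:a::" error in the earlier attempts came from.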
Re: Howto: ipsec tunnel routing both IPv4 and IPv6? Possible?
Me wrote:

> On 15. Jan 2024, at 16:15, Michael Grimm wrote:
> Marek Zarychta wrote:
>> On 15.01.2024 at 15:35, Michael Grimm wrote:
>>> route_tunnel0="fd00:a:a:a::/64 fd00:a:a:a::254"
>> Please try:
>> route_tunnel0="-6 -net fd00:a:a:a::/64 fd00:a:a:a::254"
>
> Bingo! That did the trick:
>
>   Internet6:
>   Destination        Gateway            Flags   Netif Expire
>   fd00:a:a:a::/64    fd00:a:a:a::254    UGS     ipsec0
>   fd00:a:a:a::254    link#4             UH      ipsec0
>   fd00:b:b:b::250    link#3             UHS     lo0

That was a bit premature, because the IPv4 route then got lost: when two identical route_tunnel0= keywords are provided, the latter wins.

FTR: here is the final solution:

/etc/rc.conf:

  cloned_interfaces="ipsec0"
  static_routes="tunnel0 tunnel1"
  create_args_ipsec0="reqid 104"
  ifconfig_ipsec0="inet 10.2.2.250 10.1.1.254 tunnel 1.2.3.4 10.20.30.40"
  ifconfig_ipsec0_ipv6="inet6 fd00:b:b:b::250 fd00:a:a:a::254 prefixlen 128 tunnel 1.2.3.4 10.20.30.40"
  route_tunnel0="10.1.1.0/24 10.1.1.254"
  route_tunnel1="-6 -net fd00:a:a:a::/64 fd00:a:a:a::254"

ifconfig vtnet0:

  vtnet0: flags=1008843 metric 0 mtu 1490
        tunnel inet 1.2.3.4 --> 10.20.30.40
        inet 10.2.2.250 --> 10.1.1.254 netmask 0xff00
        inet6 fd00:b:b:b::250 --> fd00:a:a:a::254 prefixlen 128

netstat -rn:

  Internet:
  Destination        Gateway            Flags   Netif Expire
  10.1.1.0/24        10.1.1.254         UGS     ipsec0
  10.1.1.254         link#4             UH      ipsec0
  10.2.2.250         link#3             UHS     lo0

  Internet6:
  Destination        Gateway            Flags   Netif Expire
  fd00:a:a:a::/64    fd00:a:a:a::254    UGS     ipsec0
  fd00:a:a:a::254    link#4             UH      ipsec0
  fd00:b:b:b::250    link#3             UHS     lo0

> Thanks to all who helped, and to me: lessons learned ;-)

Yeah,
Michael
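A quick, generic way to verify both halves of such a dual-stack tunnel after a reboot (my addition, not from the thread; the peer addresses are the ones used above):

  ping -c 3 10.1.1.254            # IPv4 through the tunnel
  ping6 -c 3 fd00:a:a:a::254      # IPv6 through the tunnel
  netstat -rn -f inet6 | grep fd00:a:a:a    # confirm the IPv6 route points at ipsec0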
[IPsec] Weird performance issue via IPsec/racoon tunnel
Hi,

I run an IPsec/racoon tunnel between two servers (11.1-STABLE #0 r326663). Some days ago I migrated one of my servers from bare metal to a public cloud instance. Now I observe weird performance issues from the new to the old server.

ifconfig (OLD server, bare metal):

  ix0: flags=8843 metric 0 mtu 1500
        options=e407bb

ifconfig (NEW server, cloud instance):

  vtnet0: flags=8843 metric 0 mtu 1500
        options=6c07bb

Immediately after booting NEW (the test file is 10 MB), I observe the following:

  #) scp OLD to NEW via ssh/internet:      16.7 MB/s
  #) scp NEW to OLD via ssh/internet:      17.4 MB/s
  #) scp NEW to OLD via IPsec tunnel:  ->  65.8 KB/s !
  #) scp OLD to NEW via IPsec tunnel:      16.5 MB/s

Now I do an "ifconfig vtnet0 mtu 1500 up" and observe very similar performance. *BUT* if I do "ifconfig vtnet0 mtu 1450 up ; ifconfig vtnet0 mtu 1500 up", I observe:

  #) scp NEW to OLD via IPsec tunnel:      17.1 MB/s !
  #) scp OLD to NEW via IPsec tunnel:      16.9 MB/s

I monitored "tcpdump -i ix0 -vv esp" at the OLD server and get many:

  16:22:24.370486 IP (tos 0x8, ttl 64, id 17394, offset 0, flags [none], proto ESP (50), \
      length 140, bad cksum 0 (->b110)!)
      "OLD" > "NEW": ESP(spi=0x0d83dae4,seq=0x3a8d9a), length 120

At the NEW server I do not observe those checksum errors at all. *BUT* I do see these errors even after regaining full performance by changing the MTU from 1500 to 1450 and back to 1500!

Well, I have to admit that I do not have enough networking knowledge to figure out by myself what to debug or modify next. Any help is highly appreciated.

Thanks in advance,
Michael
Re: [IPsec] Weird performance issue via IPsec/racoon tunnel
Eugene Grosbein wrote:

> 10.12.2017 23:55, Michael Grimm wrote:
> "bad cksum 0" is pretty normal for traffic going out via an interface supporting hardware checksum offload, so the kernel skips computing the checksum before passing packets to the NIC.

Ok, good to know.

> Your problem is more likely due to fragmented ESP packets. It's not uncommon for a cloud IP stack or ISP infrastructure to drop a high percentage of fragmented ESP packets because they are not optimized for such packets, e.g. a router has to process them in software instead of hardware, like non-fragmented packets are processed.

Thank you for this explanation. I did already lower the MTU: if I configure vtnet0 with an MTU of 1490 at boot time, I do not notice a performance loss compared to the default 1500 setting.

>> *BUT* if I do "ifconfig vtnet0 mtu 1450 up ; ifconfig vtnet0 mtu 1500 up", I observe:
>>
>>   #) scp NEW to OLD via IPsec tunnel:      17.1 MB/s !
>>   #) scp OLD to NEW via IPsec tunnel:      16.9 MB/s

*BUT* if I boot with the default 1500 setting, then change the MTU to e.g. 1450 and *immediately* back to 1500 manually, I do not encounter any performance loss at all. Why? Even when booting with 1490 and immediately setting the MTU manually to 1500, I do not see any performance loss. Strange.

> When you lower the MTU of vtnet enough to make the encapsulated packets (payload+overhead) <= 1500 bytes, the resulting ESP packets need not be fragmented and pass just fine.

I will keep the MTU at 1490 and monitor that server for the time being.

> To verify whether that is your case, you should run two tcpdump commands, one at the sending side and another at the receiving side, and compare the outputs to see whether *every* outgoing packet reaches its destination or not.

Hmm, how would one check that? The output is too fast for me ;-) Seriously, how should one check this?

Thanks for your help,
Michael
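Regarding "how should one check this": instead of eyeballing two full dumps, tcpdump can be asked to print only IPv4 fragments, so any output at all on either side already answers the fragmentation question. A sketch (my addition; ix0 as in the posts above):

  # match packets with the MF flag set or a non-zero fragment offset;
  # ip[6:2] is the 16-bit flags+offset field, 0x3fff masks out Reserved and DF
  tcpdump -ni ix0 'ip[6:2] & 0x3fff != 0'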
Re: [IPsec] Weird performance issue via IPsec/racoon tunnel
Eugene Grosbein wrote:

> 11.12.2017 2:54, Michael Grimm wrote:
>> *BUT* if I boot with the default 1500 setting, then change the MTU to e.g. 1450 and *immediately* back to 1500 manually, I do not encounter any performance loss at all. Why? Even when booting with 1490 and immediately setting the MTU manually to 1500, I do not see any performance loss. Strange.
>
> The interface MTU is used to assign the 'mtu' attribute to the corresponding route in the system routing table. Lowering the interface MTU lowers the route mtu, but raising the interface MTU does *not* raise the route mtu; use the "route -n get" command to check it out. So you are really still using the low mtu.

Bingo!

  NEW> ifconfig vtnet0
  vtnet0: flags=8843 metric 0 mtu 1490

  NEW> route -n get freebsd.org
  ...
   recvpipe  sendpipe  ssthresh  rtt,msec    mtu        weight    expire
         0         0         0         0       1490           1         0

  NEW> ifconfig vtnet0 mtu 1500 up
  NEW> ifconfig vtnet0
  vtnet0: flags=8843 metric 0 mtu 1500

  NEW> route -n get spiegel.de
  ...
   recvpipe  sendpipe  ssthresh  rtt,msec    mtu        weight    expire
         0         0         0         0       1490           1         0

I didn't know that. And that explains all my observations.

>> Hmm, how would one check that? The output is too fast for me ;-) Seriously, how should one check this?
>
> With your eyes :-) Use tcpdump's -c flag to limit the number of lines, redirect the output to a file, and carefully compare some packets using the ID that tcpshow shows.

Ok. I will do that at some later time ;-)

I'd like to thank you again for your input, and with kind regards,
Michael
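In other words, after raising the interface MTU back up, the cached route MTU has to be bumped explicitly (or the route deleted and re-added). A sketch using route(8)'s -mtu modifier, assuming the default route is the one that was demoted:

  route change default -mtu 1500
  route -n get freebsd.org      # verify the mtu column now shows 1500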
performance issue within VNET jail
Hi,

[ I did recently migrate my servers from bare metal to cloud instances (OpenStack at OVH) ]
[ FreeBSD 11.1-STABLE #0 r327055 ]

My setup is as follows and hasn't changed for the last couple of years:

  extIF0/pf/NAT <—> epairXa (bridge0) epairXb <-> jail

Downloading a file (by wget) at the host runs at around 30 MB/s, and an example tcpdump at extIF0 looks as follows:

  19:32:10.711769 IP (tos 0x20, ttl 56, id 37539, offset 0, flags [DF], proto TCP (6), length 8680)
      remote.http > myhost.14367: Flags [.], cksum 0x64ed (incorrect -> 0x3223), seq 5753:14381, ack 146, win 235, options [nop,nop,TS val 1007145732 ecr 3995852], length 8628: HTTP
  19:32:10.713851 IP (tos 0x20, ttl 56, id 37545, offset 0, flags [DF], proto TCP (6), length 1490)
      remote.http > myhost.14367: Flags [.], cksum 0x48d7 (incorrect -> 0x8d1e), seq 14381:15819, ack 146, win 235, options [nop,nop,TS val 1007145732 ecr 3995852], length 1438: HTTP
  19:32:10.713899 IP (tos 0x20, ttl 56, id 37546, offset 0, flags [DF], proto TCP (6), length 1490)
      remote.http > myhost.14367: Flags [.], cksum 0x48d7 (incorrect -> 0x6ade), seq 15819:17257, ack 146, win 235, options [nop,nop,TS val 1007145732 ecr 3995852], length 1438: HTTP
  19:32:10.713934 IP (tos 0x20, ttl 56, id 37547, offset 0, flags [DF], proto TCP (6), length 1490)
      remote.http > myhost.14367: Flags [.], cksum 0x48d7 (incorrect -> 0x1173), seq 17257:18695, ack 146, win 235, options [nop,nop,TS val 1007145732 ecr 3995852], length 1438: HTTP
  19:32:10.713962 IP (tos 0x20, ttl 56, id 37548, offset 0, flags [DF], proto TCP (6), length 1490)
      remote.http > myhost.14367: Flags [.], cksum 0x48d7 (incorrect -> 0xcf7a), seq 18695:20133, ack 146, win 235, options [nop,nop,TS val 1007145732 ecr 3995852], length 1438: HTTP

When downloading the very same file within a VIMAGE jail, the performance drops to around 80 KB/s, quite a dramatic loss. An example tcpdump at extIF0 looks as follows:

  19:34:36.284175 IP (tos 0x0, ttl 56, id 28618, offset 0, flags [DF], proto TCP (6), length 2948)
      remote.http > myhost.63382: Flags [.], cksum 0x5df6 (incorrect -> 0x4478), seq 1449:4345, ack 146, win 235, options [nop,nop,TS val 1007182125 ecr 4141429], length 2896: HTTP
  19:34:36.481904 IP (tos 0x0, ttl 56, id 28620, offset 0, flags [DF], proto TCP (6), length 1500)
      remote.http > myhost.63382: Flags [.], cksum 0xd11d (correct), seq 1449:2897, ack 146, win 235, options [nop,nop,TS val 1007182175 ecr 4141429], length 1448: HTTP
  19:34:36.484109 IP (tos 0x0, ttl 56, id 28621, offset 0, flags [DF], proto TCP (6), length 2948)
      remote.http > myhost.63382: Flags [.], cksum 0x5df6 (incorrect -> 0x2e5b), seq 15929:18825, ack 146, win 235, options [nop,nop,TS val 1007182175 ecr 4141629], length 2896: HTTP
  19:34:36.682006 IP (tos 0x0, ttl 56, id 28623, offset 0, flags [DF], proto TCP (6), length 1500)
      remote.http > myhost.63382: Flags [.], cksum 0x4ab6 (correct), seq 2897:4345, ack 146, win 235, options [nop,nop,TS val 1007182225 ecr 4141629], length 1448: HTTP
  19:34:36.684159 IP (tos 0x0, ttl 56, id 28624, offset 0, flags [DF], proto TCP (6), length 2948)
      remote.http > myhost.63382: Flags [.], cksum 0x5df6 (incorrect -> 0xd7db), seq 18825:21721, ack 146, win 235, options [nop,nop,TS val 1007182225 ecr 4141829], length 2896: HTTP

A tcpdump at epairXa looks comparable.

I did reduce all MTU settings at the involved interfaces from their initial settings (1490) down to an experimental setting of 1400, just to be on the safe side, to no avail.

(FYI: I did have to reduce from 1500 to 1490 to please IPsec after the migration from bare metal to cloud infrastructure.)

Then I tested the following settings found on the net, to no avail either:

  sysctl net.inet.tcp.tso=0
  sysctl net.link.bridge.pfil_onlyip=0
  sysctl net.link.bridge.pfil_bridge=0
  sysctl net.link.bridge.pfil_member=0
  sysctl net.add_addr_allfibs=0

I have to admit that I am lost here and cannot think of what is going wrong. The last download I tried at my old servers was some weeks ago. Since then I upgraded FreeBSD 11.1-STABLE and moved my infrastructure from bare metal to cloud, thus I cannot test anymore whether my old servers would have shown this performance issue in the meantime.

Thus any feedback is highly appreciated!

Thanks in advance and regards,
Michael
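One family of knobs not in the list above is the NIC's hardware offload features; with hindsight from later in this thread, LRO turned out to be the culprit. A hedged sketch of how to inspect and toggle them at runtime (interface name as used here; whether the driver accepts each toggle varies):

  ifconfig vtnet0                     # enabled features show up in the 'options=' line
  ifconfig vtnet0 -lro -tso           # try disabling LRO and TSO on the external interface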
Re: performance issue within VNET jail
Kristof Provost wrote:

> On 21 Dec 2017, at 21:24, Michael Grimm wrote:
>> I have to admit that I am lost here and cannot think of what is going wrong. The last download I tried at my old servers was some weeks ago. Since then I upgraded FreeBSD 11.1-STABLE and moved my infrastructure from bare metal to cloud, thus I cannot test anymore whether my old servers would have shown this performance issue in the meantime.
>>
>> Thus any feedback is highly appreciated!
>
> Can you try turning off TSO? (`ifconfig $ifname -tso`)
>
> There have been issues with pf and TSO checksums, which looked a lot like this (i.e. bad TCP performance). Those problems should be fixed, but this is easy to test.

I did try it, but without success. This only worked for the external interface, though. Both epairX interfaces didn't accept that command:

  ifconfig: -tso: Invalid argument

I did mention that I previously tried "sysctl net.inet.tcp.tso=0". That should do the same, right?

Thanks and regards,
Michael
Re: performance issue within VNET jail
Kristof Provost wrote:

> On 21 Dec 2017, at 21:50, Michael Grimm wrote:
>> Kristof Provost wrote:
>>> Can you try turning off TSO? (`ifconfig $ifname -tso`)
>>>
>>> There have been issues with pf and TSO checksums, which looked a lot like this (i.e. bad TCP performance). Those problems should be fixed, but this is easy to test.
>> I did try it, but without success.
>
> Hmm. I’ve got no ideas at the moment. I run a very similar setup (although on CURRENT), and see no performance issues from my jails.
> Can you run a performance test without pf? Perhaps from the local LAN, for example? That should help narrow it down a bit, at least.

Well, I prepared one of my webservers running at hostB/jailX to serve a sample file for local download tests:

  1) hostA       wget from hostB/jailX sample file: about  30 MB/s
  2) hostA/jailY wget from hostB/jailX sample file: about  30 MB/s
  3) hostB       wget from hostB/jailX sample file: about 190 MB/s
  4) hostB/jailY wget from hostB/jailX sample file: about 190 MB/s

Hmm. At least tests 3) and 4) omit the pf firewall. Tests 1) and 2) include passing two firewalls, one at each host. BUT: both hosts are connected via an IPsec tunnel, and that's ESP, not TCP.

Can anyone draw conclusions from this test? I cannot ;-)

Thanks and regards,
Michael
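To take scp/wget and their protocol overhead out of such comparisons, a raw TCP throughput test is often more telling. A sketch using iperf3 from the FreeBSD ports/packages collection (my suggestion, not from the thread; hostB stands for whichever endpoint is being tested):

  # on the receiving side (host or inside the jail):
  iperf3 -s
  # on the sending side:
  iperf3 -c hostB -t 10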
Re: performance issue within VNET jail
> On 21. Dec 2017, at 22:48, Eugene Grosbein wrote:
>
> 22.12.2017 4:42, Michael Grimm wrote:
>
>> Well, I prepared one of my webservers running at hostB/jailX to serve a sample file for local download tests:
>>
>>   1) hostA       wget from hostB/jailX sample file: about  30 MB/s
>>   2) hostA/jailY wget from hostB/jailX sample file: about  30 MB/s
>>   3) hostB       wget from hostB/jailX sample file: about 190 MB/s
>>   4) hostB/jailY wget from hostB/jailX sample file: about 190 MB/s
>>
>> Hmm. At least tests 3) and 4) omit the pf firewall. Tests 1) and 2) include passing two firewalls, one at each host. BUT: both hosts are connected via an IPsec tunnel, and that's ESP, not TCP.
>>
>> Can anyone draw conclusions from this test?
>> I cannot ;-)
>
> Make sure and double check that your ESP packets do not get fragmented.

Hmm, I do not know how to achieve that. Do the following tcpdump excerpts answer your question, or do you want me to look somewhere else?

At hostA, while downloading from hostB/jailX, "tcpdump -i extIF esp -vv" shows:

  22:52:42.341023 IP (tos 0x0, ttl 64, id 40481, offset 0, flags [none], proto ESP (50), length 140)
      hostA > hostB: ESP(spi=0x01d9ec34,seq=0x5fe699), length 120
  22:52:42.341079 IP (tos 0x0, ttl 53, id 64310, offset 1480, flags [none], proto ESP (50), length 100)
      hostB > hostA: ip-proto-50
  22:52:42.341151 IP (tos 0x0, ttl 64, id 40483, offset 0, flags [none], proto ESP (50), length 140)
      hostA > hostB: ESP(spi=0x01d9ec34,seq=0x5fe69a), length 120
  22:52:42.341169 IP (tos 0x0, ttl 53, id 64312, offset 1480, flags [none], proto ESP (50), length 100)
      hostB > hostA: ip-proto-50
  22:52:42.341238 IP (tos 0x0, ttl 53, id 64314, offset 1480, flags [none], proto ESP (50), length 100)
      hostB > hostA: ip-proto-50

At hostB the same dump looks like:

  22:52:42.463511 IP (tos 0x0, ttl 53, id 41153, offset 0, flags [none], proto ESP (50), length 124)
      hostA > hostB: ESP(spi=0x01d9ec34,seq=0x5feaa8), length 104
  22:52:42.463518 IP (tos 0x0, ttl 53, id 41155, offset 0, flags [none], proto ESP (50), length 124)
      hostA > hostB: ESP(spi=0x01d9ec34,seq=0x5feaa9), length 104
  22:52:42.463593 IP (tos 0x0, ttl 53, id 41157, offset 0, flags [none], proto ESP (50), length 124)
      hostA > hostB: ESP(spi=0x01d9ec34,seq=0x5feaaa), length 104
  22:52:42.463601 IP (tos 0x0, ttl 53, id 41159, offset 0, flags [none], proto ESP (50), length 124)
      hostA > hostB: ESP(spi=0x01d9ec34,seq=0x5feaab), length 104
  22:52:42.463673 IP (tos 0x0, ttl 53, id 41161, offset 0, flags [none], proto ESP (50), length 124)
      hostA > hostB: ESP(spi=0x01d9ec34,seq=0x5feaac), length 104

Thanks and regards,
Michael
Re: performance issue within VNET jail
Eugene Grosbein wrote:

> 22.12.2017 4:59, Michael Grimm wrote:
>>> Make sure and double check that your ESP packets do not get fragmented.
>>
>> Hmm, I do not know how to achieve that. Do the following tcpdump excerpts answer your question, or do you want me to look somewhere else?
>>
>> At hostA, while downloading from hostB/jailX, "tcpdump -i extIF esp -vv" shows:
>>
>>   22:52:42.341023 IP (tos 0x0, ttl 64, id 40481, offset 0, flags [none], proto ESP (50), length 140)
>>       hostA > hostB: ESP(spi=0x01d9ec34,seq=0x5fe699), length 120
>>   22:52:42.341079 IP (tos 0x0, ttl 53, id 64310, offset 1480, flags [none], proto ESP (50), length 100)
>>       hostB > hostA: ip-proto-50
>
> It shows non-zero offsets, so your ESP packets *are* fragmented. I guess this is the reason for your problems, as fragmented ESP packets are known to cause trouble for various reasons. The simplest way to avoid such issues is to decrease the MTU of the IPsec tunnel and/or the TCP MSS so that the encapsulated ESP packets do not get fragmented.

Well, you already helped me out with IPsec very recently, and I had already decreased my MTU from 1500 to 1490. That alone increased my tunnel performance dramatically. Thanks, I will decrease the MTU further.

BUT: in this thread I reported that I had already decreased the MTU for testing purposes on all involved interfaces down to 1400, to no avail, and that my performance issue concerns downloads within VNET jails using TCP, not ESP. The very same external interfaces do not show a performance drop when connected via the ESP tunnel, but only when trying to download files from the internet, and only when the download is started within a VNET jail. At the host, downloads are limited only by the bandwidth provided by the hosting company.

BUT: it might well be that I completely misunderstood your reply instead ;-)

Thanks and regards,
Michael
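Regarding the "and/or TCP MSS" part of the suggestion: pf can clamp the MSS of TCP connections it forwards, so that the encapsulated packets stay below the path MTU without touching interface MTUs at all. A hedged pf.conf sketch (the value 1400 is illustrative; the right number depends on the tunnel overhead):

  # pf.conf: clamp the TCP MSS on traffic crossing the external interface
  scrub on $ext_if all max-mss 1400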
Re: performance issue within VNET jail
Kristof Provost wrote:

> I run a very similar setup (although on CURRENT), and see no performance issues from my jails.

In utter despair I upgraded one server to CURRENT (#327076) today, but that hasn't been successful :-(

Ok, right now I know:

(#) there is *no* performance loss (TCP) when:

    (-) fetching files from outside through PF/extIF to the host
    (-) fetching files from the partner server's host via the IPsec tunnel bound to extIF (ESP) to the host
    (-) fetching files from the partner server's host via the IPsec tunnel bound to extIF (ESP) to a jail via the bridge
    (-) fetching files from a partner server's jail via the bridge and then via the IPsec tunnel bound to extIF (ESP) to the host
    (-) fetching files from a partner server's jail via the bridge and then via the IPsec tunnel bound to extIF (ESP) and then via the bridge to a jail

(#) there is a *dramatic* performance loss (TCP) when:

    (-) fetching files from outside through PF/extIF via the bridge to a jail

(#) I tried to tweak the following settings, *without* success:

    (-) sysctl net.inet.tcp.tso=0
    (-) sysctl net.link.bridge.pfil_onlyip=0
    (-) sysctl net.link.bridge.pfil_bridge=0
    (-) sysctl net.link.bridge.pfil_member=0
    (-) reducing the MTU to 1400 (1490 before) on all interfaces: extIF, bridge, epairXs
    (-) deactivating "scrub in all" and "scrub out on $extIF all random-id" in /etc/pf.conf
    (-) setting "set require-order yes" and "set require-order no" in /etc/pf.conf [1]

[1] I do see a lot more out-of-order packets within a jail ("netstat -s -p tcp") after those slow downloads, but not after downloads via the IPsec tunnel from the partner host.

That leads me to the conclusions:

(#) the bridge is not to blame
(#) it's either the PF/NATing or something else, right?

Thanks for your suggestions so far, but I am lost here. Any ideas?

Regards,
Michael
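One direct way to test the "is PF/NAT to blame" conclusion (my suggestion, not from the thread; note that disabling pf also drops the NAT, so the jail needs an address that is reachable without it for this test):

  pfctl -d        # disable pf temporarily
  # ... repeat the wget test inside the jail ...
  pfctl -e        # re-enable pf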
Re: performance issue within VNET jail
Hi —

[ I am including freebsd...@freebsd.org now and removing freebsd-j...@freebsd.org ]
[ Thread starts at https://lists.freebsd.org/pipermail/freebsd-net/2017-December/049470.html ]

Eugene Grosbein wrote:

> Michael Grimm wrote:
>> Kristof Provost wrote:
>>> I run a very similar setup (although on CURRENT), and see no performance issues from my jails.
>>
>> In utter despair I upgraded one server to CURRENT (#327076) today, but that hasn't been successful :-(
>>
>> Ok, right now I know:
>>
>> (#) there is *no* performance loss (TCP) when:
>>
>>     (-) fetching files from outside through PF/extIF to the host
>>     (-) fetching files from the partner server's host via the IPsec tunnel bound to extIF (ESP) to the host
>>     (-) fetching files from the partner server's host via the IPsec tunnel bound to extIF (ESP) to a jail via the bridge
>>     (-) fetching files from a partner server's jail via the bridge and then via the IPsec tunnel bound to extIF (ESP) to the host
>>     (-) fetching files from a partner server's jail via the bridge and then via the IPsec tunnel bound to extIF (ESP) and then via the bridge to a jail
>>
>> (#) there is a *dramatic* performance loss (TCP) when:
>>
>>     (-) fetching files from outside through PF/extIF via the bridge to a jail
>>
>> (#) I tried to tweak the following settings, *without* success:
>>
>>     (-) sysctl net.inet.tcp.tso=0
>>     (-) sysctl net.link.bridge.pfil_onlyip=0
>>     (-) sysctl net.link.bridge.pfil_bridge=0
>>     (-) sysctl net.link.bridge.pfil_member=0
>>     (-) reducing the MTU to 1400 (1490 before) on all interfaces: extIF, bridge, epairXs
>>     (-) deactivating "scrub in all" and "scrub out on $extIF all random-id" in /etc/pf.conf
>>     (-) setting "set require-order yes" and "set require-order no" in /etc/pf.conf [1]
>>
>> [1] I do see a lot more out-of-order packets within a jail ("netstat -s -p tcp") after those slow downloads, but not after downloads via the IPsec tunnel from the partner host.
>>
>> That leads me to the conclusions:
>>
>> (#) the bridge is not to blame
>> (#) it's either the PF/NATing or something else, right?
>>
>> Thanks for your suggestions so far, but I am lost here. Any ideas?
>
> It seems to me some kind of bug in the PF. I personally never tried it; I use ipfw and it works just fine.

Before testing IPFW (which I have never used before), I'd like to ask the experts on freebsd...@freebsd.org about possible tests/tweaks regarding PF.

Thanks to all involved so far and regards,
Michael
Re: [SOLVED] performance issue within VNET jail
Bjoern A. Zeeb wrote:

> On 22 Dec 2017, at 20:30, Michael Grimm wrote:
>> Hi —
>>
>> [ I am including freebsd...@freebsd.org now and removing freebsd-j...@freebsd.org ]
>> [ Thread starts at https://lists.freebsd.org/pipermail/freebsd-net/2017-December/049470.html ]
>>>>
>>>> (#) there is a *dramatic* performance loss (TCP) when:
>>>>
>>>>     (-) fetching files from outside through PF/extIF via the bridge to a jail
> …
>>>>
>>>> Thanks for your suggestions so far, but I am lost here. Any ideas?
>>>
>>> It seems to me some kind of bug in the PF. I personally never tried it; I use ipfw and it works just fine.
>>
>> Before testing IPFW (which I have never used before), I'd like to ask the experts on freebsd...@freebsd.org about possible tests/tweaks regarding PF.
>
> OK, too complicated setups; I am not getting it fully. ;-)
> Can you please just describe the one case that doesn’t work well in all detail and ignore all the others for a moment?
>
> (a) what’s the external host interface?

vtnet

> (b) pf runs on the base system?

yes

> (c) you are bridging into a VNET-jail? How exactly? Are you bridging to epairs?

yes, I am bridging epairs

> (d) where exactly are you NATing?

I am NATing IPv4 and IPv6 at the host's PF firewall

> (e) why are you bridging and NATing? That makes little sense to me. Couldn’t you NAT and forward, or just bridge?

hmm, that is something I have developed myself over the years. I do "consider" my jails as jails with their own network stack, like isolated "VMs".

> (f) what’s inside the VNET jail? Another pf or anything?

no further firewall; my jails are merely service jails (dns, mail, web, …)

> (g) out of curiosity, does dmesg on the base system indicate anything?

No.

> To understand your performance problem better:
>
> (1) you are doing a fetch of a rather large file to test from within the VNET jail? Or what are you fetching? Are you using fetch?

yes, I do something like the following within the jail:

  wget https://download.freebsd.org/ftp/releases/ISO-IMAGES/11.1/FreeBSD-11.1-RELEASE-amd64-bootonly.iso -O /dev/null

> (2) if you fetch from within the same VNET jail, does that perform?
> (3) if you fetch something to the VNET jail from the base system, just going through your internal setup but not leaving the machine, does that still perform?
> (4) if you fetch something to the VNET jail from the same LAN (if possible to test), does that perform?
> (5) if you fetch something to the VNET jail from a close-by location, does that make a difference compared to something on the other side of the planet?

I will skip these questions for the time being, because I solved my issue 15 minutes before your mail ;-) And I feel sorry for all your now "wasted" efforts in trying to help me.

As I am using a vtnet interface in a cloud environment (Public Cloud by OVH), I read the vtnet(4) man page and stumbled upon "LOADER TUNABLES" like:

  hw.vtnet.lro_disable
  hw.vtnet.X.lro_disable
        This tunable disables LRO. The default value is 0.

Well, without knowing and understanding the implications of those loader tunables, I disabled them step by step, and bingo: setting …

  hw.vtnet.lro_disable="1"

… in /boot/loader.conf brings performance back from KB/s to MB/s.

I really do not understand what I have done, why it is working, and whether it will have negative implications for my servers. Perhaps one of you experts could help me understand it. Because I am leaving in a few hours for Xmas vacations, I won't be able to come back to this issue for some days.

I'd like to thank all of you for your patience and help, and: Merry Christmas and kind regards,
Michael
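For anyone hitting the same symptom: besides the boot-time tunable, LRO can usually also be toggled per interface at runtime, which makes for a much quicker test. A sketch (whether vtnet(4) accepts the runtime toggle may depend on the FreeBSD version):

  # boot-time, in /boot/loader.conf (requires a reboot):
  hw.vtnet.lro_disable="1"

  # or at runtime, per interface:
  ifconfig vtnet0 -lro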
Re: [SOLVED] performance issue within VNET jail
Hi,

let me come back to this issue I reported at the end of last year:

  https://lists.freebsd.org/pipermail/freebsd-net/2017-December/049470.html

My setup:

  vtnet/pf-NAT <—> epairXa (bridge0) epairXb <-> vnet jail

My observations regarding a sample download like "wget https://download.freebsd.org/ftp/releases/ISO-IMAGES/11.1/FreeBSD-11.1-RELEASE-amd64-bootonly.iso -O /dev/null":

  (-) expected performance at the host of about 30 MB/s
  (-) dramatic loss of performance inside a vnet jail, down to about 80 KB/s

My solution: adding 'hw.vtnet.lro_disable="1"' to /boot/loader.conf

In the meantime I found a comparable reference regarding Linux:

  "Previously, network drivers that had Large Receive Offload (LRO) enabled by default caused the system to run slow, lose frames, and eventually prevent communication, when using software bridging. With this update, LRO is automatically disabled by the kernel on systems with a bridged configuration, thus preventing this bug."

  https://bugzilla.redhat.com/show_bug.cgi?id=772317

I do not have the knowledge to judge whether LRO should be disabled in FreeBSD as well when software bridging is involved; I just want to let you know. (And that's the reason I have included the author of the vtnet driver in CC.)

With kind regards,
Michael
Re: Performance issues with VNET/bridge/VLAN
On 2019-02-22 11:31, Patrick M. Hausen wrote:

> [x-posted to freebsd-j...@freebsd.org]
> The machine is an iocage jail host, all jails with VNET. The problem is: network performance in the jails (not on the host!) is abysmal with the second setup. Not consistently so, everything *seems* to work, but e.g. a customer complained that checking out a project from github happened at 15k/s … that’s when we started to investigate.
> [...]
> *Any* idea what might be going on here? We use VNET all the same on all the hosts and it is still labelled „experimental", yes. But all the parts that make up the different setups, bridge(4) and vlan(4), have been in FreeBSD for ages. I’m just combining features orthogonally like every good sysadmin ;-)
> If someone is willing to do some investigation, I think I can provide a test system and remote access …

This sounds familiar to me; please have a look at the following two threads:

  https://lists.freebsd.org/pipermail/freebsd-jail/2019-February/003684.html
  https://lists.freebsd.org/pipermail/freebsd-net/2017-December/049470.html

If your hosts run on cloud infrastructure, odds are that the mentioned settings will work in your case too.

Regards,
Michael
Re: Performance issues with VNET/bridge/VLAN
Hi,

On 22. Feb 2019, at 19:48, Patrick M. Hausen wrote:

> epair(4) interfaces added to the bridge

These are my number one suspects when it comes to performance loss within a VNET jail compared to the host system.

> But I’ll fiddle with LRO nonetheless and report if that changes anything.

I'm interested to learn whether bare metal behaves comparably to cloud infrastructure or not.

Regards,
Michael
ipsec tunnel and vnet jails: routing, howto?
Hi,

I am currently stuck, somehow, and I need your input. Thus, let me explain what I want to achieve.

I have two servers connected via an ipsec tunnel ...

  [A] dead:beef:1234:abcd::1 <—> dead:feed:abcd:1234::1 [B]

… which is sending all traffic destined for dead:beef:1234:abcd::/64 and dead:feed:abcd:1234::/64 through the tunnel, and vice versa. That ran perfectly well during the last years, until I decided to give VNET jails a try.

Previously, some of my old-fashioned jails had an IPv6 address attached like dead:beef:1234:abcd:1:2::3, and I could reach that address from the remote server without any routing, redirecting, or the like being necessary. Now, after having moved those jails to VNET jails (with those addresses bound to their epairXXb interfaces), I cannot reach those addresses within those jails any longer.

From my point of view and understanding, this must have to do with a lack of proper routing, but I am not sure if that is correct, thus my questions to the experts:

1) Is my assumption correct that my tunnel is "ending" right after having passed my firewall at each server, *before* its ESP traffic is decrypted towards its final destination (yes, I do have pf rules that allow ESP traffic to pass my outer, internet-facing interface)?

2) If that is true, racoon has to decide where to deliver those packets in the end?

3) If that is true, I have a routing issue that *cannot* be solved by pf firewall rules, right?

4) If that is true, what do I have to look for? What am I missing? How can I route incoming and finally decrypted traffic to its final destination within a VNET jail?

5) Do I need to look for a completely different approach?

Every hint is highly welcome.

Thanks in advance and with kind regards,
Michael
Re: ipsec tunnel and vnet jails: routing, howto?
Julian Elischer wrote:

> On 27/12/2015 4:24 AM, Michael Grimm wrote:
>> I am currently stuck, somehow, and I need your input. Thus, let me explain what I want to achieve.
>>
>> I have two servers connected via an ipsec tunnel ...
>>
>>   [A] dead:beef:1234:abcd::1 <—> dead:feed:abcd:1234::1 [B]
>>
>> … which is sending all traffic destined for dead:beef:1234:abcd::/64 and dead:feed:abcd:1234::/64 through the tunnel, and vice versa. That ran perfectly well during the last years, until I decided to give VNET jails a try. Previously, some of my old-fashioned jails had an IPv6 address attached like dead:beef:1234:abcd:1:2::3, and I could reach that address from the remote server without any routing, redirecting, or the like being necessary. Now, after having moved those jails to VNET jails (with those addresses bound to their epairXXb interfaces), I cannot reach those addresses within those jails any longer.
>>
>> From my point of view and understanding, this must have to do with a lack of proper routing, but I am not sure if that is correct, thus my questions to the experts:
>>
>> 1) Is my assumption correct that my tunnel is "ending" right after having passed my firewall at each server, *before* its ESP traffic is decrypted towards its final destination (yes, I do have pf rules that allow ESP traffic to pass my outer, internet-facing interface)?
>>
>> 2) If that is true, racoon has to decide where to deliver those packets in the end?
>>
>> 3) If that is true, I have a routing issue that *cannot* be solved by pf firewall rules, right?
>>
>> 4) If that is true, what do I have to look for? What am I missing? How can I route incoming and finally decrypted traffic to its final destination within a VNET jail?
>>
>> 5) Do I need to look for a completely different approach? Every hint is highly welcome.
>
> Basically you have to treat the jails as if they are totally separate machines that are reached through the vpn endpoints instead of being the endpoints themselves. This will require a different setup. For example, your tunnel will need to be exactly that, a tunnel, and not just an encapsulation. And you will need full routing information for the other end at each end.

Thanks for your input. In the meantime I got it running, somehow. The "somehow" refers to: I am not sure if that's the way it's supposed to be done. What I did (I only show the part for host [A]; the other host is configured accordingly):

1. ipsec tunnel between [A] dead:beef:1234:abcd::1 <—> dead:feed:abcd:1234::1 [B]

/path-to-racoon/setkey.conf:

  spdadd dead:beef:1234:abcd::/56 dead:feed:abcd:1234:1:2::3 any -P out ipsec esp/tunnel/dead:beef:1234:abcd::1-dead:feed:abcd:1234::1/require;
  spdadd dead:feed:abcd:1234::/56 dead:beef:1234:abcd:1:2::3 any -P in  ipsec esp/tunnel/dead:feed:abcd:1234::1-dead:beef:1234:abcd::1/require;

2. routing at [A]:

/etc/rc.conf:

  ipv6_static_routes="jail1"
  # that's the route from host system [A] into jail1 with the IPv6 address fd00::::::1
  ipv6_route_jail1="-host dead:beef:1234:abcd:1:2::3 -host fd00::::::1"

/etc/jail.conf:

  #
  # host dependent global settings
  #
  $ip6prefix             = "dead:beef:1234:abcd";
  $ip6prefix_remote_host = "dead:feed:abcd:1234";

  #
  # global jail settings
  #
  host.hostname   = "${name}";
  path            = "/usr/home/jails/${name}";
  mount.fstab     = "/etc/fstab.${name}";
  exec.consolelog = "/var/log/jail_${name}_console.log";
  vnet            = "new";
  vnet.interface  = "epair${jailID}b";
  exec.clean;
  mount.devfs;
  persist;

  #
  # network settings to apply/destroy during start/stop of every jail
  #
  exec.prestart  = "sleep 2";
  exec.prestart += "ifconfig epair${jailID} create up";
  exec.prestart += "ifconfig bridge0 addm epair${jailID}a";
  exec.start     = "/sbin/ifconfig lo0 127.0.0.1 up";
  exec.start    += "/sbin/ifconfig epair${jailID}b inet ${ip4_addr}";
  exec.start    += "/sbin/ifconfig epair${jailID}b inet6 ${ip6_addr}";
  exec.start    += "/sbin/route add default -gateway 10.x.x.254";
  exec.sta
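For anyone reproducing a setup like this, the kernel's view of what setkey/racoon actually installed can be dumped at any time; a generic verification step (my addition, not part of the original post):

  setkey -DP      # dump the security policy database (SPD), including each entry's in/out direction
  setkey -D       # dump the negotiated security associations (SAD)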
How to define outgoing IP address? Needed to route local traffic through IPSEC tunnel.
Hi —

Is there a way to set the default outgoing IPv6 address of a network interface? To my understanding, the IPv6 address bound to the interface by ifconfig_IFNAME_ipv6 is used, right?

I need to route all my traffic to a remote server via an IPsec tunnel (racoon) that has a setkey.conf as follows:

  spdadd fd00:1234:1234:1234::/64 fd00:abcd:abcd:abcd::/64 any -P out ipsec esp/tunnel/2001:dead:beaf:::a-2001:dead:beaf:::a/require;
  spdadd fd00:abcd:abcd:abcd::/64 fd00:1234:1234:1234::/64 any -P in  ipsec esp/tunnel/2001:dead:beaf:::a-2001:dead:beaf:::a/require;

I can use that tunnel from my jails because they have addresses from the fd00:1234:1234:1234::/64 or fd00:abcd:abcd:abcd::/64 address space bound to their epairXb interfaces. But my hosts have addresses from 2001:dead:beaf:::/56 or 2001:dead:beaf:::/56, respectively, and here my tunnel won't work.

I did try to set a local address via ifconfig_IFNAME_ipv6, though. But then the host works while the jails fail to route through the tunnel.

I did try to add to my setkey.conf:

  spdadd 2001:dead:beaf:::/56 fd00:abcd:abcd:abcd::/64 any -P out ipsec esp/tunnel/2001:dead:beaf:::a-2001:dead:beaf:::a/require;
  spdadd 2001:dead:beaf:::/56 fd00:1234:1234:1234::/64 any -P in  ipsec esp/tunnel/2001:dead:beaf:::a-2001:dead:beaf:::a/require;

But that doesn't work either.

Every bit of help is highly welcome; thanks in advance.

Regards,
Michael
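One knob that may be relevant to the source-address question (an assumption on my part, not confirmed in this thread): FreeBSD's ifconfig(8) lets you mark an inet6 address as the preferred candidate source address for outgoing packets. A sketch, with IFNAME and the address as placeholders:

  # prefer this ULA as the source address for outgoing IPv6 traffic on the interface
  ifconfig IFNAME inet6 fd00:1234:1234:1234::1 prefixlen 64 prefer_source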
IPSec tunnel, VNET jail and routing issue
Hi --

I am referring to the following (simplified) setup:

  [hostA / ix0 / 2001:dead::1 / 1.2.3.4]  <= IPsec tunnel =>  [hostB / ix0 / 2001:beef::10 / 10.20.30.40]
                    ||                                                          ||
  [jail1 / bridge0 / fd00:a::1 / 10.1.1.1]                    [jail1 / bridge0 / fd00:b::2 / 10.2.2.2]

All my jails are VNET jails that use the bridge0 (epair) device. Thus, all IPv4 and IPv6 addresses of my local networks on A and B are bound to the bridge0 interface! But the IPsec tunnel (via racoon) is anchored at the public IPv4 addresses on ix0 at both hosts.

Task: route all local traffic from hostA to hostB via the tunnel.

Working: IPv6 traffic is running fine, meaning that I can reach every jail from every host. That has been working for years.

Issue: I recently wanted to extend my setup to the local IPv4 addresses of my jails, and failed miserably.

Configuration (shown for hostA only):

setkey.conf:

  # out: hostA -> hostB
  spdadd fd00:a::/64     fd00:b::/64     any -P out ipsec esp/tunnel/1.2.3.4-10.20.30.40/require;
  spdadd fd00:a::/64     2001:beef::/56  any -P out ipsec esp/tunnel/1.2.3.4-10.20.30.40/require;
  spdadd 2001:dead::/56  fd00:b::/64     any -P out ipsec esp/tunnel/1.2.3.4-10.20.30.40/require;

  # in: hostB -> hostA
  spdadd fd00:b::/64     fd00:a::/64     any -P in ipsec esp/tunnel/10.20.30.40-1.2.3.4/require;
  spdadd fd00:b::/64     2001:dead::/56  any -P in ipsec esp/tunnel/10.20.30.40-1.2.3.4/require;
  spdadd 2001:beef::/56  fd00:a::/64     any -P in ipsec esp/tunnel/10.20.30.40-1.2.3.4/require;

  # out: hostA -> hostB
  spdadd 10.1.1.0/24     10.2.2.0/24     any -P out ipsec esp/tunnel/1.2.3.4-10.20.30.40/require;
  spdadd 10.1.1.0/24     10.20.30.40     any -P out ipsec esp/tunnel/1.2.3.4-10.20.30.40/require;
  spdadd 1.2.3.4         10.2.2.0/24     any -P out ipsec esp/tunnel/1.2.3.4-10.20.30.40/require;

  # in: hostB -> hostA
  spdadd 10.2.2.0/24     10.1.1.0/24     any -P in ipsec esp/tunnel/10.20.30.40-1.2.3.4/require;
  spdadd 10.2.2.0/24     1.2.3.4         any -P in ipsec esp/tunnel/10.20.30.40-1.2.3.4/require;
  spdadd 10.20.30.40     10.1.1.0/24     any -P in ipsec esp/tunnel/10.20.30.40-1.2.3.4/require;

There is no tunnel-specific routing defined; everything should be done by these spdadd entries.

Achieved so far:

#) I can reach each jail at the other site from the host.
#) Allowing arpproxy_all="YES" satisfies ARP (MACs from the opposite VNET jails become assigned). I do not know if that is needed, but now pings from jails to the opposite jails at least start to send ICMP packets.

Unsolved issue: I cannot reach the opposite jails from another host's jail; e.g., "ping 10.2.2.2" in jail1@hostA will not work.

Observations so far:

#) tcpdump shows, for "ping 10.2.2.2" in jail1@hostA, ICMP traffic at bridge0 on hostA:

     IP 10.1.1.1 > 10.2.2.2: ICMP echo request, id 20099, seq 0, length 64

   and at bridge0 on hostB:

     IP 10.1.1.1 > 10.2.2.2: ICMP echo request, id 15233, seq 6, length 64
     IP 10.2.2.2 > 10.1.1.1: ICMP echo reply, id 15233, seq 6, length 64

   Hmm: hostA doesn't get an echo reply, although hostB did send one.

#) tcpdump shows, for "ping 10.2.2.2" at hostA, *no* ICMP traffic at hostA@bridge0 or hostA@ix0, but ICMP traffic at hostB@bridge0:

     IP 1.2.3.4 > 10.2.2.2: ICMP echo request, id 60543, seq 0, length 64
     IP 10.2.2.2 > 1.2.3.4: ICMP echo reply, id 60543, seq 0, length 64

   Hmm: it's working.

#) It looks to me as if the tunnel does not recognise the "spdadd 10.1.1.0/24 10.2.2.0/24" and vice versa settings because those IPs are bound to the bridge.
#) Whenever an IP bound to ix0 is involved (host to jail), the corresponding spdadd entries are recognised.
#) Adding static routes like "route add 10.2.2.0/24 1.2.3.4" and the like does not solve my issue.

Questions:

#) Is this an issue with IPsec/racoon?
#) Is this a routing issue?
#) Why does the IPv6 address space work (with an identical configuration regarding jails, firewalling, routing, et al.)?
#) Any other ideas?

Sorry for this lengthy post; any feedback is highly welcome,
Michael
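When an spdadd entry seems to be ignored like this, it is worth dumping the kernel's actual policy database and checking each entry's direction field, since a single "in" vs "out" mixup silently breaks exactly one direction. A generic debugging sketch (my addition; as it turns out in the follow-up post, the eventual fix was precisely such a typo):

  setkey -DP              # dump installed policies; verify the in/out direction of every entry
  tcpdump -ni ix0 esp     # confirm whether ESP packets actually leave/arrive on the outer interface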
[SOLVED] IPSec tunnel, VNET jail and routing issue
Michael Grimm wrote:

Never mind, I solved my issue. It has been a minor typo with major consequences.

> Configuration (shown for hostA only):
>
> setkey.conf:
>   # out: hostA -> hostB
>   spdadd 10.1.1.0/24 10.2.2.0/24 any -P out ipsec esp/tunnel/1.2.3.4-10.20.30.40/require;

Contrary to this example line above, my real setkey.conf had an "in" instead of an "out" :-(

> Achieved so far:
>
> #) Allowing arpproxy_all="YES" satisfies ARP (MACs from the opposite VNET jails become assigned). I do not know if that is needed, but now pings from jails to the opposite jails at least start to send ICMP packets.

Now I have to state: yes, ARP proxying is mandatory in my setup.

Hmm, I need to learn more about ARP, because now I observe a lot of lines like …

  mike kernel: arp: proxy: ignoring request from 10.1.1.1 via epair1a

… and I do not know whether I have to be concerned about those. Do I?

Sorry for the noise!

Regards,
Michael
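For reference (my note, based on the stock rc scripts rather than this thread): arpproxy_all="YES" in rc.conf simply flips the corresponding sysctl at boot, so the behaviour can also be toggled at runtime for testing:

  sysctl net.link.ether.inet.proxyall=1    # what arpproxy_all="YES" sets at boot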
12.2-STABLE: Commit 367740 breaks IMAP/SMTP server authentication
Hi,

I am running 12.2-STABLE and VNET jails, one of which hosts a recent Dovecot IMAP and a recent Postfix SMTP server. Authentication is forced via TLS/SSL for both services (ports 587 and 993). The setup is as follows:

  extIF0/pf/NAT <—> epairXa (bridge0) epairXb <-> jail

A recent upgrade broke mailing for IMAP clients running on macOS 10.14.6 (Mojave) and for AVM's push service (Fritzbox), but *not* for IMAP clients running on macOS 10.15.7 (Catalina). Strange.

Findings on macOS 10.14.6 (exemplified for IMAP):

1) mac$ nc -4vw 1 mail.xyz.zzz 993

  found 0 associations
  found 1 connections:
       1: flags=82
          outif en0
          src 1.2.3.4 port 49583
          dst 11.22.33.44 port 993
          rank info not available
          TCP aux info available

  Connection to mail.xyz.zzz port 993 [tcp/imaps] succeeded!

2) mac$ openssl s_client -crlf -connect mail.xyz.zzz:993 -debug

  CONNECTED(0005)
  write to 0x7fa32ef01ae0 [0x7fa33080a803] (200 bytes => 200 (0xC8))
  0000 - 16 03 01 00 c3 01 00 00-bf 03 03 32 f7 fe fa b4   ...........2....
  0010 - e8 9a 60 38 ef 34 99 70-84 ce dc 1a 08 b8 76 90   ..`8.4.p......v.
  0020 - 19 8c 81 f4 a6 37 19 37-09 70 6f 00 00 60 c0 30   .....7.7.po..`.0
  0030 - c0 2c c0 28 c0 24 c0 14-c0 0a 00 9f 00 6b 00 39   .,.(.$.......k.9
  0040 - cc a9 cc a8 cc aa ff 85-00 c4 00 88 00 81 00 9d   ................
  0050 - 00 3d 00 35 00 c0 00 84-c0 2f c0 2b c0 27 c0 23   .=.5...../.+.'.#
  0060 - c0 13 c0 09 00 9e 00 67-00 33 00 be 00 45 00 9c   .......g.3...E..
  0070 - 00 3c 00 2f 00 ba 00 41-c0 11 c0 07 00 05 00 04   .<./...A........
  0080 - c0 12 c0 08 00 16 00 0a-00 15 00 09 00 ff 01 00   ................
  0090 - 00 36 00 0b 00 02 01 00-00 0a 00 08 00 06 00 1d   .6..............
  00a0 - 00 17 00 18 00 23 00 00-00 0d 00 1c 00 1a 06 01   .....#..........
  00b0 - 06 03 ef ef 05 01 05 03-04 01 04 03 ee ee ed ed   ................
  00c0 - 03 01 03 03 02 01 02 03                           ........

  hanging at that stage forever
  (and the client complains about its inability to authenticate and reports a timeout after 60 seconds)

I identified commit 367740 as being responsible for that:

  mike> svn up -r 367740
  Updating '.':
  U    sys/netinet/ip_fastfwd.c
  U    sys/netinet/ip_input.c
  U    sys/netinet/ip_var.h
  U    .
  Updated to revision 367740.

Any ideas, especially why clients on different OS versions behave differently?

FYI: I have no access to AVM's push service, and only very limited access to the macOS 10.14.6 computer.

Thanks in advance and with kind regards,
Michael

P.S. How may I update a local svn copy and simultaneously omit commit 367740 from being applied, or how may I revert commit 367740 only?
Re: 12.2-STABLE: Commit 367740 breaks IMAP/SMTP server authentication
Ronald Klop wrote:

> On Sun, 22 Nov 2020 14:37:33 +0100, Michael Grimm wrote:
>> P.S. How may I update a local svn copy and simultaneously omit commit 367740 from being applied, or how may I revert commit 367740 only?
>
> From the top of my head you can do something like this, assuming your svn checkout is in /usr/src:
>
>   cd /usr/src
>   svn up
>   svn diff -c -367740 | patch
>
> This will get the reverse of commit 367740 (because of the -) and patch the code with it.

Thanks, someone else pointed me to:

  svn merge -c -367740 .

Worked as expected.

Well, now I am able to omit this commit, but I would love to know what is going on, and why this commit may break the authentication/certificate exchange/whatever of IMAP and SMTP/submission clients running in a VNET jail ...

With kind regards,
Michael
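One practical caveat (my addition): a reverse-merge only patches the working copy, so it has to be reapplied after every subsequent update until the underlying problem is fixed upstream:

  cd /usr/src
  svn up && svn merge -c -367740 .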
Re: 12.2-STABLE: Commit 367740 breaks IMAP/SMTP server authentication
Hi -

Michael Grimm wrote:

> Well, now I am able to omit this commit, but I would love to know what is going on, and why this commit may break the authentication/certificate exchange/whatever of IMAP and SMTP/submission clients running in a VNET jail ...

It just came to my mind that I had a strange issue with my setup almost three years ago:

  https://lists.freebsd.org/pipermail/freebsd-net/2018-January/049528.html

/boot/loader.conf:

  # needs to be turned off (LRO) in order to restore TCP performance within VNET jails:
  hw.vtnet.lro_disable="1"
  hw.vtnet.tso_disable="1"

That is FYI only; I have no clue whether it's related in any way.

Regards,
Michael
[SOLVED] 12.2-STABLE: Commit 367740 breaks IMAP/SMTP server authentication
Hi,

I finally managed to solve this issue: the MTU of all bridged network interfaces had to be reduced from 1500 down to 1490. (The external interface was at 1490 already.)

I still don't understand why the patches of commit 367740 could cause this; I lack the networking knowledge to judge it. Anyway, I just wanted to let you know.

Regards,
Michael

> On 22. Nov 2020, at 14:37, Michael Grimm wrote:
>
> Hi,
>
> I am running 12.2-STABLE and VNET jails, one of which hosts a recent Dovecot IMAP and a recent Postfix SMTP server. Authentication is forced via TLS/SSL for both services (ports 587 and 993). The setup is as follows:
>
>   extIF0/pf/NAT <—> epairXa (bridge0) epairXb <-> jail
>
> A recent upgrade broke mailing for IMAP clients running on macOS 10.14.6 (Mojave) and for AVM's push service (Fritzbox), but *not* for IMAP clients running on macOS 10.15.7 (Catalina). Strange.
>
> [...]
>
> I identified commit 367740 as being responsible for that.
>
> [... full findings and the openssl s_client debug output in the original post above ...]
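For completeness, a sketch of what "reduce the MTU of all bridged interfaces" looks like when done by hand, using the interface naming from earlier in this thread (the epair member names are illustrative; <jail> is a placeholder):

  ifconfig bridge0 mtu 1490
  ifconfig epair0a mtu 1490
  jexec <jail> ifconfig epair0b mtu 1490    # the jail-side half of the epair

Making this persistent means carrying the mtu keyword into the corresponding ifconfig lines in rc.conf or jail.conf, so both halves of every epair and the bridge itself agree on the value.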