On Wed, 27 Nov 2024 17:23:53 +0100, Pascal Stumpf wrote:
> On Sat, 23 Nov 2024 15:25:15 +0100, Pascal Stumpf wrote:
> > On Sat, 23 Nov 2024 13:42:45 -0000 (UTC), Stuart Henderson wrote:
> > > On 2024-11-23, Pascal Stumpf <pas...@stumpf.co> wrote:
> > > > Scenario: 7.6-stable running on a gateway, connected to the internet via
> > > > pppoe0 over vlan7, several downstream /24 network segments.  iked(8) is
> > > > serving several clients, running mostly Mac OS, with policies like this:
> > > >
> > > > ikev2 "foo" esp \
> > > >         from 192.168.100.1 to dynamic \
> > > >         from 192.168.5.0/24 to dynamic \
> > > >         from 192.168.50.0/24 to dynamic \
> > > >         [...]
> > > >         peer any \
> > > >         srcid bar dstid foo \
> > > >         config address 192.168.100.0/24
> > > >
> > > > where we have vlanN interfaces carrying the 192.168.5.1/24 etc. etc.
> > > > atop an igc(4) interface and 192.168.100.1/24 is on lo1 for debugging
> > > > purposes.
> > > >
> > > > There is a relayd running on the gateway itself, with relays listening
> > > > on both the pppoe0 address and 192.168.5.1 (vlan5).
> > > >
> > > > Mac OS sets the MTU on its ipsec0 interface to 1280, so the scrub rules
> > > > in pf.conf look like this:
> > > >
> > > > [...]
> > > > match in on pppoe0 scrub (max-mss 1440, no-df, random-id, reassemble tcp)
> > > > match out on pppoe0 scrub (max-mss 1440)
> > > > match on enc0 all scrub (max-mss 1280)
> > > > [...]
> > > 
> > > MSS should be capped at (MTU - IP header size - TCP header size).
> > > 
> > > IPv4 headers are 20 bytes, TCP between 20 and 60 bytes (20 is quite
> > > common, but 32 is also quite common if TCP timestamps are used).
> > > 
> > > https://lostintransit.se/2024/09/05/mss-mss-clamping-pmtud-and-mtu/
> > 
> > Tested with (max-mss 1228).  Still, the same fragmentation.
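> > (1228 here is 1280, the Mac's ipsec0 MTU, minus 20 bytes of IPv4 header
> > and 32 bytes of TCP header with timestamps.)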
> > 
> > > > This works fine for all downstream hosts, with tcpdump showing
> > > > consistent packet sizes of 1356 on pppoe0.  But max-mss seems to be
> > > > ignored for all connections to the gateway host itself, including the
> > > > ones to relayd at 192.168.5.1, resulting in heavy fragmentation:
> > > 
> > > Do you have a "set skip" line?
> > 
> > set skip on enc?  No.
> > 
> > And as I said, the setup works just fine for any host on any of the /24
> > segments behind the gateway.
> > 
> > So I really think this is about 192.168.5.1 being a locally assigned 
> > address.
> 
> Some more observations:
> 
> On a very similar setup on my home router, I have:
> 
> iked.conf:
> ikev2 "roadwarrior" esp \
>         from 192.168.2.0/24 to dynamic \
>         from 192.168.100.0/24 to dynamic \
>         peer any \
>       [...]
> 
> with 192.168.2.1/24 assigned to a vport(4) and the same scrub rule in
> pf.conf:
> 
> match on enc0 all scrub (max-mss 1228)
> 
> Connections to the machines on the 192.168.2.0/24 net are fine; however,
> there is heavy fragmentation for TCP connections to 192.168.2.1 over the
> tunnel.
> 
> Playing around with the max-mss value, I can see that it is applied to
> inbound packets, but the replies are still huge.  It looks like something
> relies on them being segmented later (but they aren't).
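> One way to see this is to watch the SYNs on the tunnel interface and
> compare the announced MSS with the size of the segments that actually
> come back, e.g. with something like:
> 
>     tcpdump -nvi enc0 'tcp[13] & 2 != 0'
> 
> which prints the TCP options (including mss) of every SYN crossing enc0.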

And, sure enough, the problem goes away by setting net.inet.tcp.tso=0.

There is a bug here in the decision of when to rely on a driver's TSO.
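
As a workaround until that is fixed, something along these lines works
here (the second command just makes the setting persistent across
reboots):

    # turn off TCP segmentation offload at runtime
    sysctl net.inet.tcp.tso=0
    # keep it off after a reboot
    echo 'net.inet.tcp.tso=0' >> /etc/sysctl.conf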
