On 2024-11-23, Pascal Stumpf <pas...@stumpf.co> wrote:
> Scenario: 7.6-stable running on a gateway, connected to the internet via
> pppoe0 over vlan7, several downstream /24 network segments.  iked(8) is
> serving several clients, running mostly Mac OS, with policies like this:
>
> ikev2 "foo" esp \
>       from 192.168.100.1 to dynamic \
>       from 192.168.5.0/24 to dynamic \
>       from 192.168.50.0/24 to dynamic \
>       [...]
>       peer any \
>       srcid bar dstid foo \
>       config address 192.168.100.0/24
>
> where we have vlanN interfaces carrying the 192.168.5.1/24 etc. etc. atop
> an igc(4) interface and 192.168.100.1/24 is on lo1 for debugging purposes.
>
> There is a relayd running on the gateway itself, with relays listening
> on both the pppoe0 address and 192.168.5.1 (vlan5).
>
> Mac OS sets the MTU on its ipsec0 interface to 1280, so the scrub rules
> in pf.conf look like this:
>
> [...]
> match in on pppoe0 scrub (max-mss 1440, no-df, random-id, reassemble tcp)
> match out on pppoe0 scrub (max-mss 1440)
> match on enc0 all scrub (max-mss 1280)
> [...]

MSS should be capped at (MTU - IP header size - TCP header size).

An IPv4 header is 20 bytes; a TCP header is between 20 and 60 bytes (20
is the most common, but 32 is also quite common when TCP timestamps are
used).

https://lostintransit.se/2024/09/05/mss-mss-clamping-pmtud-and-mtu/
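
Applied to the enc0 rule above: a 1280-byte tunnel MTU leaves room for
an MSS of at most 1240 (1280 - 20 - 20), and less once TCP options are
in play.  Something like the following sketch (assuming IPv4 without
TCP timestamps) would be closer:

```
match on enc0 all scrub (max-mss 1240)
```

With max-mss 1280, a clamped segment plus its 40 bytes of headers still
exceeds the tunnel MTU, so fragmentation can occur even where the clamp
does apply.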

> This works fine for all downstream hosts, with tcpdump showing
> consistent packet sizes of 1356 on pppoe0.  But max-mss seems to be
> ignored for all connections to the gateway host itself, including the
> ones to relayd at 192.168.5.1, resulting in heavy fragmentation:

Do you have a "set skip" line?  Traffic on a skipped interface bypasses
pf entirely, including the match ... scrub rules, so e.g. "set skip on
enc0" would explain max-mss being ignored for connections terminating
on the gateway itself.
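
A quick way to check which interfaces are being skipped, using
pfctl(8):

```
# pfctl -v -s Interfaces
```

Interfaces excluded with "set skip" are flagged "(skip)" in the output.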


-- 
Please keep replies on the mailing list.