On Thu, 2024-01-18 at 12:31 +0100, Ralph Aichinger wrote:
[...]
> So it seems clamping the mss on the NAT/PPPoE machine running Debian no
> longer works. For this I use/used the following rules:
>
> iifname "ppp0" tcp flags syn tcp option maxseg size set rt mtu;
> oifname "ppp0" tcp flags syn tcp option maxseg size set rt mtu;
>
> Setting a specific mtu as a constant instead of "rt mtu" does not help
> either.
I have the same options in the forward chain, except that I haven't
qualified them with an interface name. It didn't occur to me that I would
need to, as there are only two networks: my LAN and 'the internet'.

In case it helps, my complete forward chain is below. From the comments
with links to Stack Exchange it's obvious I hit the MTU size problem and
had to fix it...

    chain forward {
        type filter hook forward priority 0; policy drop;

        # Count packets
        iifname $DEV_PRIVATE counter name cnt_forward_out
        iifname $DEV_WORLD counter name cnt_forward_in

        # Allow traffic from established and related packets, drop invalid
        ct state vmap { established : accept, related : accept, invalid : drop }

        # Fix some sites not working correctly by hacking MTU size
        # See https://unix.stackexchange.com/questions/658952/router-with-nftables-doesnt-work-well
        # Also https://unix.stackexchange.com/questions/672742/why-mss-clamping-in-iptables-nft-seems-to-take-no-effect-in-nftables
        tcp flags syn tcp option maxseg size set rt mtu

        # Connections from the internal net to the internet or to other
        # internal nets are allowed
        iifname $DEV_PRIVATE accept

        # The rest is dropped by the above policy
    }

-- 
Tixy
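P.S. If clamping ever looks like it isn't working, one way to check is to
attach a counter to the rule so 'nft list ruleset' shows whether forwarded
SYNs are actually hitting it. A minimal sketch, assuming a throwaway table
used just for testing (the 'mssfix' table name is made up; in practice you
could simply add 'counter' to the existing rule in the forward chain above):

    table inet mssfix {
        chain forward {
            # Separate base chain on the forward hook; policy accept so it
            # only counts and mangles, and never drops anything itself
            type filter hook forward priority 0; policy accept;

            # Count matching SYNs, then clamp the MSS to the path MTU
            tcp flags syn counter tcp option maxseg size set rt mtu
        }
    }

If the counter stays at zero while new connections are being forwarded, the
packets aren't reaching the rule at all, which points at hook placement or
routing rather than at the clamping itself.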