Oliver Jowett wrote:
> Hi
> 
> In the course of commenting on #4522 I decreased the MTU on my DSL
> connection while keeping mtu_fix disabled. Since I have a non-broken ISP
> this shouldn't, in theory, be an issue.
> 
> However I now have 6 or so iptables rules applying TCPMSS PMTU clamping
> to random broken netblocks - including, notably, some of the servers
> needed for MSN Messenger authentication, my local bank, and my local
> news site. And that's just the ones I ran into over the course of the
> last day.
> 
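As a sketch, per-netblock exception rules of the kind described above look roughly like this (the netblock, MSS value, and chain are hypothetical placeholders, not the actual rules in use):

```shell
# Hypothetical per-destination clamp for one broken netblock.
# 203.0.113.0/24 is a documentation prefix, not a real broken host.
# Rewrites the MSS option in forwarded SYN packets headed there.
iptables -t mangle -A FORWARD -d 203.0.113.0/24 \
    -p tcp --tcp-flags SYN,RST SYN \
    -j TCPMSS --set-mss 1438
```

One such rule per broken netblock is what accumulates as new breakage is discovered.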
> I know the RC2 announcement said that there was a "significant
> throughput decrease for TCP". Can anyone tell me where that decrease is
> coming from, exactly? Because I'm confused...

I remember reading some discussion about this somewhere, but I can't find
the links right now.

I had previously assumed that this mostly affected ISPs using PPPoE
that break PMTU discovery on the DSLAM side.
Now that I've read this, I'm considering switching the default back to 1,
though probably only on the WAN side.

> As I understand it, when everything is working correctly, this is what
> happens if I have an internal machine with MTU=1500 (ethernet) connected
> via a DSL router (OpenWrt) with a PPPoA connection with MTU=1478:
> 
> (1) outbound SYN on the internal machine with MSS=1460 (1500-40)
> (2) SYNACK, ACK
> (3) An inbound TCP message with DF set and size=1500 arrives at the ISP
> end of the PPPoA connection.
> (4) ISP end generates ICMP unreachable, fragmentation needed, MTU=1478
> (5) the remote host receives the ICMP, lowers its path MTU for us to
> 1478 (a segment payload of 1478-40=1438), and resends a smaller
> 1478-byte TCP packet
> (6) 1478-byte TCP message passes through the PPPoA connection and the
> TCP connection continues.
> 
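The MSS arithmetic in the walkthrough above can be sketched as follows; the 40 bytes are the 20-byte IPv4 header plus the 20-byte TCP header, assuming no IP or TCP options:

```shell
# MSS = MTU - 40 (20 bytes IPv4 header + 20 bytes TCP header).
mtu_lan=1500
mtu_pppoa=1478
mss_lan=$((mtu_lan - 40))      # 1460, advertised in the outbound SYN
mss_pppoa=$((mtu_pppoa - 40))  # 1438, what the PPPoA link can carry
echo "LAN MSS: $mss_lan, PPPoA MSS: $mss_pppoa"
```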
> If I enable TCPMSS with clamp-to-pmtu then instead we get:
> 
> (1) outbound SYN on an internal machine with MSS=1460 (1500-40)
> (2) the SYN is rewritten to have MSS=1438
> (3) SYNACK, ACK
> (4) An inbound TCP message with DF set and total size = 1478 arrives at
> the ISP side of the PPPoA connection and passes through successfully
> (5) the TCP connection continues.
> 
> So either way we end up with MSS=1438, don't we? So I can't see how TCP
> would end up giving lower throughput. We *know* that in the cases where
> we do PMTU clamping, anything with a larger MSS is going to end up
> trying to fragment and decreasing the MSS anyway, right?
> 
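The clamp-to-pmtu variant discussed above amounts to a single rule on the WAN-bound forwarding path; as a sketch (the interface name here is a placeholder):

```shell
# Rewrite the MSS in forwarded SYNs to fit the path MTU of the
# outgoing interface. "pppoa-wan" is a placeholder interface name.
iptables -t mangle -A FORWARD -o pppoa-wan \
    -p tcp --tcp-flags SYN,RST SYN \
    -j TCPMSS --clamp-mss-to-pmtu
```

This rewrites every forwarded SYN unconditionally, which is the per-packet cost the paragraph below weighs against maintaining a growing list of exception rules.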
> There is presumably some cost to actually having the TCPMSS rule in the
> firewall chain since it has to dig around in the TCP headers of SYNs.
> However I am wondering if this is a measurable cost. And if you need a
> large number of exceptions for broken remote hosts anyway, doesn't the
> cost of processing the exception rules eventually exceed the cost of
> doing it unconditionally?
> 
> Can anyone clear this up for me? What am I missing?
Makes sense to me; the information I got was probably a bit off.
I think I'll just switch the default to 1 for the final 8.09 release, since
that's a safe choice.
Thanks for digging into this and reporting back.

- Felix
_______________________________________________
openwrt-devel mailing list
openwrt-devel@lists.openwrt.org
http://lists.openwrt.org/cgi-bin/mailman/listinfo/openwrt-devel