Lachlan Andrew wrote:
Thanks Stephen.
A related problem (largely due to the published algorithm itself) is
that Illinois is very aggressive when it over-estimates the maximum
RTT.
At high load (say 200Mbps and 200ms RTT), a backlog of packets builds
up just after a loss, inflating the maximum-RTT estimate. The current
queueing delay then looks small relative to that inflated maximum, so
Illinois treats *all* losses as due to corruption rather than
congestion, and backs off by only 1/8 instead of 1/2.
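To make that concrete, here is a rough sketch of the kind of
delay-based backoff selection Illinois uses. The illinois_beta name,
the 10%/80% thresholds and the floating-point arithmetic are
illustrative only, not the module's exact fixed-point code:

/*
 * Rough sketch of the Illinois backoff choice, in floating point for
 * clarity.  da is the current average queueing delay
 * (avg_rtt - base_rtt), dm is the maximum observed queueing delay
 * (max_rtt - base_rtt), in the same units.
 */
static double illinois_beta(double da, double dm)
{
	double lo = dm / 10;		/* below ~10% of max delay */
	double hi = 8 * dm / 10;	/* above ~80% of max delay */

	if (da <= lo)
		return 0.125;	/* losses look like corruption: back off 1/8 */
	if (da >= hi)
		return 0.5;	/* losses look like congestion: back off 1/2 */

	/* linear interpolation between the two extremes */
	return 0.125 + (0.5 - 0.125) * (da - lo) / (hi - lo);
}

Once the post-loss backlog has inflated max_rtt, dm grows while da
stays modest, so this keeps returning 1/8 even when the losses really
are congestion losses.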
I can't think how to fix this except by better RTT estimation, or
changes to Illinois itself. Currently, I ignore RTT measurements when
sacked_out != 0 and have a heuristic "RTT aging" mechanism, but
that's pretty ugly.
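By "RTT aging" I mean something along these lines, purely as an
illustration of the idea. The age_max_rtt name, the 7/8 weight and the
once-per-RTT cadence are arbitrary, not the actual patch:

/*
 * Illustration of aging the max-RTT estimate: pull it back toward the
 * latest smoothed RTT so a transient spike (e.g. the post-loss
 * backlog) does not dominate the estimate indefinitely.  Call this
 * roughly once per RTT.
 */
static void age_max_rtt(unsigned int *max_rtt, unsigned int srtt)
{
	if (*max_rtt > srtt)
		*max_rtt = srtt + ((*max_rtt - srtt) * 7) / 8;
}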
Cheers,
Lachlan
Ageing the RTT estimates needs to be done anyway.
Maybe something can be reused from H-TCP; the two algorithms are closely related.