The original message didn't show up on the list. I'm assuming it's because the filters didn't like the attached PostScript. I posted PDFs of the figures on the web:
http://www.psc.edu/~jheffner/tmp/a.pdf
http://www.psc.edu/~jheffner/tmp/b.pdf
http://www.psc.edu/~jheffner/tmp/c.pdf

  -John

---------- Forwarded message ----------
Date: Mon, 16 Oct 2006 15:55:53 -0400 (EDT)
From: John Heffner <[EMAIL PROTECTED]>
To: David Miller <[EMAIL PROTECTED]>
Cc: netdev <netdev@vger.kernel.org>
Subject: [PATCH] Bound TSO defer time

This patch limits the amount of time you will defer sending a TSO segment
to less than two clock ticks, or the time between two acks, whichever is
longer.

On slow links, deferring causes significant bursts.  See the attached
plots, which show RTT through a 1 Mbps link with a 100 ms RTT and ~100 ms
queue for (a) non-TSO, (b) current TSO, and (c) patched TSO.  This
burstiness causes significant jitter, tends to overflow queues early (bad
for short queues), and makes delay-based congestion control more
difficult.  I believe deferring by a couple of clock ticks will have a
relatively small impact on performance.

Signed-off-by: John Heffner <[EMAIL PROTECTED]>

diff --git a/include/linux/tcp.h b/include/linux/tcp.h
index 0e058a2..27ae4b2 100644
--- a/include/linux/tcp.h
+++ b/include/linux/tcp.h
@@ -341,7 +341,9 @@ #endif
 	int			linger2;
 
 	unsigned long last_synq_overflow;
-
+
+	__u32	tso_deferred;
+
 	/* Receiver side RTT estimation */
 	struct {
 		__u32	rtt;
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 9a253fa..3ea8973 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -1087,11 +1087,15 @@ static int tcp_tso_should_defer(struct s
 	u32 send_win, cong_win, limit, in_flight;
 
 	if (TCP_SKB_CB(skb)->flags & TCPCB_FLAG_FIN)
-		return 0;
+		goto send_now;
 
 	if (icsk->icsk_ca_state != TCP_CA_Open)
-		return 0;
+		goto send_now;
+
+	/* Defer for less than two clock ticks. */
+	if (!tp->tso_deferred && ((jiffies<<1)>>1) - (tp->tso_deferred>>1) > 1)
+		goto send_now;
 
 	in_flight = tcp_packets_in_flight(tp);
 
 	BUG_ON(tcp_skb_pcount(skb) <= 1 ||
@@ -1106,8 +1110,8 @@ static int tcp_tso_should_defer(struct s
 
 	/* If a full-sized TSO skb can be sent, do it. */
 	if (limit >= 65536)
-		return 0;
-
+		goto send_now;
+
 	if (sysctl_tcp_tso_win_divisor) {
 		u32 chunk = min(tp->snd_wnd, tp->snd_cwnd * tp->mss_cache);
 
@@ -1116,7 +1120,7 @@ static int tcp_tso_should_defer(struct s
 		 */
 		chunk /= sysctl_tcp_tso_win_divisor;
 		if (limit >= chunk)
-			return 0;
+			goto send_now;
 	} else {
 		/* Different approach, try not to defer past a single
 		 * ACK.  Receiver should ACK every other full sized
@@ -1124,11 +1128,17 @@ static int tcp_tso_should_defer(struct s
 		 * then send now.
 		 */
 		if (limit > tcp_max_burst(tp) * tp->mss_cache)
-			return 0;
+			goto send_now;
 	}
-
+
 	/* Ok, it looks like it is advisable to defer.  */
+	tp->tso_deferred = 1 | (jiffies<<1);
+
 	return 1;
+
+send_now:
+	tp->tso_deferred = 0;
+	return 0;
 }
 
 /* Create a new MTU probe if we are ready.