On Thu, May 17, 2018 at 9:41 AM, Neal Cardwell <ncardw...@google.com> wrote:
>
> On Thu, May 17, 2018 at 11:40 AM Eric Dumazet <eric.duma...@gmail.com>
> wrote:
>
> > On 05/17/2018 08:14 AM, Neal Cardwell wrote:
> > > Is there a particular motivation for the cap of 127? IMHO 127 ACKs is quite
> > > a few to compress. Experience seems to show that it works well to have one
> > > GRO ACK for ~64KBytes that triggers a single TSO skb of ~64KBytes. It might
> > > be nice to try to match those dynamics in this SACK compression case, so it
> > > might be nice to cap the number of compressed ACKs at something like 44?
> > > (0xffff / 1448 - 1). That way for high-speed paths we could try to keep
> > > the ACK clock going with ACKs for ~64KBytes that trigger a single TSO skb
> > > of ~64KBytes, no matter whether we are sending SACKs or cumulative ACKs.
> >
> > 127 was chosen because the field is u8, and since skb allocation for the ACK
> > can fail, we could have cases where the field goes above 127.
> >
> > Ultimately, I believe a followup patch would add a sysctl, so that we can
> > fine-tune this, and eventually disable ACK compression if this sysctl is
> > set to 0.
>
> OK, a sysctl sounds good. I would still vote for a default of 44. :-)
>
> > >> +       if (hrtimer_is_queued(&tp->compressed_ack_timer))
> > >> +               return;
> > >> +
> > >> +       /* compress ack timer : 5 % of srtt, but no more than 2.5 ms */
> > >> +
> > >> +       delay = min_t(unsigned long, 2500 * NSEC_PER_USEC,
> > >> +                     tp->rcv_rtt_est.rtt_us * (NSEC_PER_USEC >> 3)/20);
> > >
> > > Any particular motivation for the 2.5ms here? It might be nice to match the
> > > existing TSO autosizing dynamics and use 1ms here instead of having a
> > > separate new constant of 2.5ms. Smaller time scales here should lead to
> > > less burstiness and queue pressure from data packets in the network, and we
> > > know from experience that the CPU overhead of 1ms chunks is acceptable.
> >
> > This came from my tests on wifi really :)
> >
> > I also had the idea to make this threshold adjustable for wifi, like we
> > did for sk_pacing_shift.
> >
> > (On wifi, we might want to increase the max delay between ACKs.)
> >
> > So maybe use 1ms delay when sk_pacing_shift == 10, but increase it if
> > sk_pacing_shift has been lowered.
>
> Sounds good to me.
>
> Thanks for implementing this! Overall this patch seems nice to me.
>
> Acked-by: Neal Cardwell <ncardw...@google.com>
>
> BTW, I guess we should spread the word to maintainers of other major TCP
> stacks that they need to be prepared for what may be a much higher degree
> of compression/aggregation in the SACK stream. Linux stacks going back many
> years should be fine with this, but I'm not sure about the other major OSes
> (they may only allow sending one MSS per ACK-with-SACKs received).

The patch looks really good, but Neal's comment just reminded me of a potential legacy issue.
I recall that at least the Apple and Windows TCP stacks still need 3+ DUPACKs
(which is not the same as a single SACK covering 3+ packets) to trigger fast
recovery. Will we have an issue interacting with these stacks?

>
> neal
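
P.S. For anyone puzzling over the units in the quoted delay computation: if I'm
reading tcp_input.c right, rcv_rtt_est.rtt_us is kept left-shifted by 3 (8x
fixed point), so multiplying by (NSEC_PER_USEC >> 3) and dividing by 20 works
out to 5% of the RTT in nanoseconds. A standalone sketch of just the arithmetic
(plain C with made-up names, not the kernel code itself):

#include <stdio.h>

#define NSEC_PER_USEC	1000UL
#define MAX_DELAY_NS	(2500 * NSEC_PER_USEC)	/* the 2.5 ms cap */

/* rtt_shifted is the receive RTT in usec, stored left-shifted by 3,
 * the way rcv_rtt_est.rtt_us appears to be maintained.
 */
static unsigned long compressed_ack_delay_ns(unsigned long rtt_shifted)
{
	/* (1000 >> 3) / 20 == 125 / 20 == 6.25 ns per shifted usec,
	 * i.e. 50 ns per real usec == 5% of the RTT, in nanoseconds.
	 */
	unsigned long delay = rtt_shifted * (NSEC_PER_USEC >> 3) / 20;

	return delay < MAX_DELAY_NS ? delay : MAX_DELAY_NS;
}

int main(void)
{
	/* a 10 ms RTT, stored shifted: expect 5%, i.e. 500000 ns */
	printf("delay = %lu ns\n", compressed_ack_delay_ns(10000UL << 3));
	return 0;
}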
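
And one possible shape for Eric's sk_pacing_shift idea, purely hypothetical
and with names invented here (reusing NSEC_PER_USEC from the sketch above):

/* Hypothetical helper: 1 ms max delay at the default sk_pacing_shift of 10,
 * doubled for each step a driver (e.g. mac80211, for wifi aggregation) has
 * lowered the shift.
 */
static unsigned long max_compressed_ack_delay_ns(int pacing_shift)
{
	unsigned long delay = 1000 * NSEC_PER_USEC;	/* 1 ms */

	if (pacing_shift < 10)
		delay <<= 10 - pacing_shift;
	return delay;
}

Under that reading, a driver that has lowered the shift to 8 would allow up to
4 ms between SACKs.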