> On Nov 9, 2022, at 11:18 AM, Zhenlei Huang <zlei.hu...@gmail.com> wrote:
> 
> 
>> On Oct 22, 2022, at 6:14 PM, Hans Petter Selasky <h...@selasky.org> wrote:
>> 
>> Hi,
>> 
>> Some thoughts about this topic.
> 
> Sorry for the late response.
> 
>> 
>> Delaying ACKs means a loss of performance for Gigabit TCP connections 
>> in data centers. There it is important to ACK the data as quickly as possible, 
>> to avoid running out of TCP window space. Think about TCP connections at 
>> 30 GBit/s and above!
> 
> In data centers, the bandwidth is much higher and the latency is extremely low 
> compared to the WAN (sub-millisecond).
> The required TCP window is the bandwidth multiplied by the RTT (the 
> bandwidth-delay product). For a 30 GBit/s network it is about 750 KiB. I think 
> that is trivial for a datacenter server.
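A quick sanity check of that bandwidth-delay product (a sketch; the 200 µs RTT figure is my assumption for an intra-datacenter round trip, not from the thread):

```python
# Bandwidth-delay product: how much data must be "in flight"
# to keep the path fully utilized.
link_bps = 30e9   # 30 GBit/s link, in bits per second
rtt_s = 200e-6    # assumed intra-datacenter RTT: 200 microseconds

bdp_bytes = link_bps / 8 * rtt_s
print(f"BDP: {bdp_bytes / 1024:.0f} KiB")  # ~732 KiB, i.e. roughly 750 KB
```

So a few hundred kilobytes of socket buffer per connection, which is indeed trivial for a datacenter server.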
> 
> Section 4.2.3.2 of RFC 1122 states:
> > in a stream of full-sized segments there SHOULD be an ACK for at least 
> > every second segment 
> Even if the receiver ACKs only every tenth segment, the impact of delayed ACKs 
> on the TCP window is not significant (at most ten segments remain 
> unacknowledged in the TCP send window).
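Putting a number on that worst case (a sketch; the 1448-byte MSS is my assumption for a 1500-byte MTU with TCP timestamps, and the ~750 KB window is the BDP estimated above):

```python
mss = 1448        # assumed payload per full-sized segment (1500 MTU, timestamps on)
window = 750_000  # ~750 KB send window, from the BDP estimate above
unacked = 10 * mss  # at most ten full segments awaiting a delayed ACK

print(f"{unacked} bytes unACKed = {unacked / window:.1%} of the window")
```

About 14 KB held back out of a 750 KB window, i.e. roughly 2%, which supports the point that delayed ACKs barely dent the send window here.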
> 
> Anyway, for datacenter usage the bandwidth is symmetric, and the reverse path 
> (the TX path of the receiver) is sufficient to carry the ACKs.
> Servers can even ACK every segment (no delayed ACKs).
> 
>> 
>> I think the implementation should be exactly like it is.
>> 
>> There is a software LRO in FreeBSD to coalesce the ACKs before they hit the 
>> network stack, so there are no real problems there.
> 
> I'm OK with the current implementation.
> 
> I think the upper layers (or the application) have the (business) information 
> to indicate whether delayed ACKs should be employed.
> After googling I found there is a draft [1].
> 
> [1] Sender Control of Delayed Acknowledgments in TCP: 
> https://www.ietf.org/archive/id/draft-gomez-tcpm-delack-suppr-reqs-01.xml
Found the HTML / PDF / TXT versions of the draft:
https://datatracker.ietf.org/doc/draft-gomez-tcpm-ack-pull/

> 
>> 
>> --HPS
>> 
>> 
> 
> Best regards,
> Zhenlei
