But TCP timestamps are affected by packet loss. You will sometimes get an 
accurate RTT reading, and you will sometimes get multiples of the RTT due to 
packet loss and retransmissions. I would hate to see a line classified as 
bloated when the real problem is simple packet loss. Head-of-line blocking, 
cumulative ACKs, yada, yada, yada.

You really need to use a packet-oriented protocol (ICMP/UDP) to get a true 
measure of RTT at the application layer. If you can instrument TCP in the 
kernel to make the instantaneous RTT available to the application, that might 
work; I am not sure how you would roll that out in a timely manner, though. I 
think I actually wrote some code to do this on BSD many years ago, and it gave 
pretty good results. I was building a terminal server (remember those?) and 
needed echo times of ~50 ms ± 20 ms.
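
A packet-oriented probe like that could be sketched roughly as follows. This 
is a minimal illustration, not anyone's actual tool: the function name and the 
assumption of a UDP echo service (anything like RFC 862 echo would do) are 
mine. The point is that a lost probe shows up as a timeout, never as an 
inflated sample, which avoids the TCP-retransmission ambiguity above:

```python
import socket
import struct
import time

def measure_rtt_udp(host, port, timeout=1.0):
    """Send one timestamped UDP probe and measure the echo round-trip time.

    Returns the RTT in seconds, or None if the probe (or its echo) was
    lost. Loss becomes a timeout rather than a multiple-of-RTT sample,
    unlike a measurement riding on TCP retransmissions.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        # Pack the send time into the payload; the echo server returns
        # it unchanged, so no clock sync with the peer is needed.
        sock.sendto(struct.pack("!d", time.monotonic()), (host, port))
        data, _ = sock.recvfrom(64)
        return time.monotonic() - struct.unpack("!d", data[:8])[0]
    except socket.timeout:
        return None
    finally:
        sock.close()
```

Each call gives one independent sample; run it periodically and keep the 
distribution, since a single reading says little about queueing behavior.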


Bvs

From: bloat-boun...@lists.bufferbloat.net 
[mailto:bloat-boun...@lists.bufferbloat.net] On Behalf Of Eggert, Lars
Sent: Friday, May 15, 2015 4:18 AM
To: Aaron Wood
Cc: c...@lists.bufferbloat.net; Klatsky, Carl; 
cerowrt-devel@lists.bufferbloat.net; bloat
Subject: Re: [Bloat] [Cerowrt-devel] heisenbug: dslreports 16 flow test vs 
cablemodems

On 2015-5-15, at 06:44, Aaron Wood <wood...@gmail.com> wrote:
ICMP prioritization over TCP?

Probably.

Ping in parallel to TCP is a hacky way to measure latencies, not only because 
of prioritization but also because you don't measure TCP send/receive buffer 
latencies (and they can be large; auto-tuning is not so great).

You really need to embed timestamps in the TCP bytestream and echo them back. 
See the recent netperf patch I sent.
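
[The echo-in-the-bytestream idea can be sketched roughly like this. This is a 
minimal illustration, not the actual netperf patch; the helper names are made 
up. Because the timestamp travels inside the TCP stream itself, the sample 
includes send/receive buffer delays and retransmission stalls, exactly what a 
parallel ICMP ping misses:]

```python
import socket
import struct
import time

PROBE = struct.Struct("!d")  # one 8-byte timestamp record in the stream

def send_probe(sock):
    """Sender: write a monotonic timestamp into the TCP bytestream."""
    sock.sendall(PROBE.pack(time.monotonic()))

def recv_exact(sock, n):
    """Read exactly n bytes; TCP preserves no message boundaries."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed")
        buf += chunk
    return buf

def echo_probe(sock):
    """Peer: read one timestamp record and echo it back unchanged."""
    sock.sendall(recv_exact(sock, PROBE.size))

def read_rtt(sock):
    """Sender: read the echoed timestamp and compute the in-stream RTT,
    which includes socket-buffer and retransmission delays."""
    (sent,) = PROBE.unpack(recv_exact(sock, PROBE.size))
    return time.monotonic() - sent
```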

Lars
_______________________________________________
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel
