y'all are mostly missing the point of my question, which was: why did it have such a dramatic effect on reducing the induced downstream bloat? I have a new theory, but I will get to that after a couple more cups of coffee and a spreadsheet.
But on the threads here: I used to use "owamp" to do very accurate one-way delay measurements in each direction, but it is a hassle to configure and mostly requires GPS-synced NTP clocks. It was seriously overengineered (authentication, etc.), and I found I had to extract the raw stats to get the info I needed.

Definitely agree that pulling the TCP timestamp data out of the rrul test would be good, and that measuring statistics at a much finer grain within that test would be good (the 200ms sampling interval is way too coarse, but you start heisenbugging it at 20ms). The current rrul latency stats are very misleading when fq is present (for example, I mostly just monitor queue length with watch tc), and... patches always appreciated.

Wireshark could do a better job in its graphing tools; it makes me crazy to have to compare two graphed wireshark traces in gimp.

Web10g is now up to kernel 3.17, and more stats collection has been continually entering the kernel (TCP_INFO is gaining more fields, and some more SNMP MIBs are accessible). I am very big on moving my testbeds to 4.1 due to all the improvements in the FIB handling....

I had generally hoped to start leveraging the quic and/or webrtc codebases to be able to make more progress in userspace. QUIC has selectable reno or cubic congestion control, for example - but the existing libraries and code are not thread capable....

A lot of things are now at the "mere matter of coding" point; there just aren't enough coders to go around who aren't busy working on the next pets.com.

_______________________________________________
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel