Pretty good list, thanks for putting this together.
The only thing I'd add, though I can't formulate it very elegantly, is this
personal insight, one I'd want to see researched, because it can be a LOT more
useful in the end-to-end control loop than mechanisms like ECN, L4S, RED, ...
Fact: detecting congestion by letting a queue build up gives a very lagging
indicator of incipient congestion in the forwarding system. The delay that the
queue buildup adds to all paths slows down the control loop's ability to
respond by slowing the sources, and it's that control-loop delay that creates
both instability and continued congestion growth.
Observation: current forwarders forget each packet as soon as it has been
transmitted. This discards all the information about incipient congestion and
about "fairness" among multiple sources. Yet there is no need to forget recent
history at all once the packets have been transmitted.
An idea I keep proposing is to remember the last K seconds of packets: their
flow IDs (source and destination), their arrival and departure times, and
their channel occupancy on the outbound shared link, and then to use this
information to reflect incipient congestion back to the flows that need
controlling, for use in their control loops.
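To make that concrete, here is a minimal C sketch of the bookkeeping I have in
mind; the structure names, field sizes, and window length are my own
illustrative assumptions, not anyone's existing implementation.

    /* Hypothetical per-packet record, retained after the packet has been
     * transmitted rather than forgotten. */
    #include <stdint.h>

    #define HISTORY_SLOTS 65536          /* sized to cover ~K seconds at line rate */

    struct pkt_record {
        uint64_t flow_id;                /* hash of (src, dst, ports, protocol) */
        uint64_t arrival_ns;             /* when the packet entered the queue */
        uint64_t departure_ns;           /* when it left on the shared outbound link */
        uint32_t wire_bytes;             /* its occupancy on that link, in bytes */
    };

    /* Ring buffer holding the last K seconds of forwarded packets; old entries
     * are simply overwritten. */
    struct pkt_history {
        struct pkt_record ring[HISTORY_SLOTS];
        uint32_t head;                   /* next slot to overwrite */
        uint64_t window_ns;              /* K seconds, in nanoseconds */
    };

    static void history_record(struct pkt_history *h, uint64_t flow_id,
                               uint64_t arrival_ns, uint64_t departure_ns,
                               uint32_t wire_bytes)
    {
        struct pkt_record *r = &h->ring[h->head];
        r->flow_id = flow_id;
        r->arrival_ns = arrival_ns;
        r->departure_ns = departure_ns;
        r->wire_bytes = wire_bytes;
        h->head = (h->head + 1) % HISTORY_SLOTS;
    }

The memory cost is modest: a few tens of bytes per packet for the last K
seconds, which is exactly the information a forwarder throws away today.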
So far, no one has taken me up on doing the research to try this in the field.
Note: the signalling can be simple (setting ECN marks on all flows that
transit the queue when the queue is empty but transient overload seems likely,
even though there is no backlog yet); the key point is that we already assume
recent packet history is predictive of future overflow.
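As a sketch of that marking rule, building on the hypothetical history record
above (the threshold and scan are again my own illustrative assumptions):

    /* Hypothetical marking rule: estimate how much of the outbound link was
     * occupied over the last window and, if it was close to saturated, mark
     * ECN on transiting flows even though the queue is currently empty. */
    #include <stdbool.h>
    #include <stdint.h>

    static bool should_mark_ecn(const struct pkt_history *h, uint64_t now_ns,
                                uint64_t link_rate_bps)
    {
        uint64_t bytes_in_window = 0;

        for (uint32_t i = 0; i < HISTORY_SLOTS; i++) {
            const struct pkt_record *r = &h->ring[i];
            if (r->departure_ns != 0 && now_ns - r->departure_ns <= h->window_ns)
                bytes_in_window += r->wire_bytes;
        }

        /* Bytes the link could have carried over that same window. */
        uint64_t capacity_bytes =
            (link_rate_bps / 8) * h->window_ns / 1000000000ULL;

        /* Treat >90% recent occupancy as predictive of imminent queue buildup
         * (the threshold is arbitrary here). */
        return bytes_in_window * 10 > capacity_bytes * 9;
    }

A real implementation would keep running sums instead of scanning, and would
apply the marks per flow in proportion to each flow's share of that occupancy,
which is where the "fairness" information comes in.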
This can be implemented locally on any routing path that tends to be a
bottleneck link, such as the uplink of a home network. It should work with TCP
as-is if the signalling causes window reduction (at first, just signal by
dropping packets early; if TCP handles ECN aggressively, with a single ECN
mark causing window reduction, it will help there too).
The insight is that from an "information and control theory" perspective, the
packets that have already been forwarded are incredibly valuable for congestion
prediction.
Please, if anyone actually works on this and publishes, give me credit for
suggesting it, if only because I've been suggesting it for about 15 years now
and being ignored.
It would be a mitzvah.
On Thursday, September 23, 2021 1:46pm, "Bob McMahon"
<bob.mcma...@broadcom.com> said:
Hi All,
I do appreciate this thread as well. As a test & measurement guy, here are my
conclusions about network performance. Thanks in advance for any comments.
Congestion can be mitigated in the following ways:
o) Size queues properly to minimize/negate bloat (easier said than done with
tech like WiFi); see the queue-sizing sketch after this list
o) Use faster links on the service side, if possible, so that a queue's
service rate exceeds its arrival rate; then there is no congestion even during
bursts
o) Drop entries during oversubscribed states (queue processing can't "speed
up" like water flow through a constricted pipe; it must drop)
o) Identify the aggressor flows causing congestion, if possible
o) Forwarding planes can signal back to the sources "earlier" to minimize
queue build-ups, per a "control loop request" asking sources to pace their
writes
o) Transport layers can use techniques a la BBR
o) Use "home gateways" that support tech like FQ_CODEL
Latency can be mitigated in the following ways:
o) Mitigate or eliminate congestion, particularly around queueing delays
o) End-host apps can use TCP_NOTSENT_LOWAT along with write()/select() to
reduce host sends of "better never than late" messages (see the sketch after
this list)
o) Move servers closer to the clients, per the fundamental limit of the speed
of light (i.e. the propagation delay of energy over the waveguides), a la CDNs
(unless you're an HFT: then separate servers across geography and make sure to
have exclusive usage rights over the lowest-latency links)
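For the TCP_NOTSENT_LOWAT item above, a hedged, Linux-flavoured sketch of the
write()/select() pattern (error handling trimmed, function name and threshold
are mine): the kernel reports the socket writable only when little unsent data
remains, so the app can skip queueing stale updates.

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Send only the freshest message; skip it if the socket won't drain soon. */
    static ssize_t send_latest(int fd, const void *msg, size_t len)
    {
        int lowat = 16 * 1024;   /* report writable only when <16 KB is still unsent */
        setsockopt(fd, IPPROTO_TCP, TCP_NOTSENT_LOWAT, &lowat, sizeof(lowat));

        fd_set wfds;
        FD_ZERO(&wfds);
        FD_SET(fd, &wfds);
        struct timeval tv = { .tv_sec = 0, .tv_usec = 100000 };   /* 100 ms budget */

        if (select(fd + 1, NULL, &wfds, NULL, &tv) <= 0)
            return 0;            /* "better never than late": drop the stale update */

        return write(fd, msg, len);
    }

Without the low-water mark, select() reports writable whenever any socket
buffer space exists, so the app happily queues data that will sit behind
seconds of already-buffered bytes.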
Transport control loop(s)
o) Transport-layer control loops are nonlinear systems, so network tooling
will struggle to emulate the "end user experience"
o) 1/2 RTT does not equal the OWD used to compute the bandwidth-delay product;
the imbalance and its effects need to be measured (see the sketch after this
list)
o) Forwarding planes signalling congestion back to sources wasn't designed
into TCP originally, but the industry trend seems to be moving towards this,
per things like L4S
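On the RTT-versus-OWD item above, a tiny worked example with made-up numbers
showing why halving the RTT is not the one-way delay, even though the full RTT
is what the bandwidth-delay product needs:

    #include <stdio.h>

    int main(void)
    {
        double fwd_owd_s = 0.030;   /* example: 30 ms toward the receiver */
        double ret_owd_s = 0.010;   /* example: 10 ms back to the sender */
        double rate_bps  = 100e6;   /* example: 100 Mbit/s bottleneck */

        double rtt_s     = fwd_owd_s + ret_owd_s;        /* 40 ms RTT */
        double bdp_bytes = (rate_bps / 8.0) * rtt_s;     /* 500000 bytes */

        printf("RTT = %.0f ms, so RTT/2 = %.0f ms, but forward OWD = %.0f ms\n",
               rtt_s * 1e3, rtt_s / 2.0 * 1e3, fwd_owd_s * 1e3);
        printf("BDP = %.0f bytes needed in flight to fill the pipe\n", bdp_bytes);
        return 0;
    }

Tooling that assumes a symmetric path will misjudge both the one-way delays
and anything derived from them.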
Photons, radio & antenna design
o) Find experts who have experience & knowledge; e.g., many here do
o) Photons don't really have mass or size, at least per my limited
understanding of particle physics and QED which, I must admit, came from
reading things on the internet
Bob
On Mon, Sep 20, 2021 at 7:40 PM Vint Cerf <v...@google.com> wrote:
see https://mediatrust.com/
v
On Mon, Sep 20, 2021 at 10:28 AM Steve Crocker <st...@shinkuro.com> wrote:
Related but slightly different: Attached is a slide some of my colleagues put
together a decade ago showing the number of DNS lookups involved in displaying
CNN's front page.
Steve
On Mon, Sep 20, 2021 at 8:18 AM Valdis Klētnieks <valdis.kletni...@vt.edu> wrote:
On Sun, 19 Sep 2021 18:21:56 -0700, Dave Taht said:
> what actually happens during a web page load,
I'm pretty sure that nobody actually understands that anymore, at anything
more than a handwaving level.
I have a nice Chrome extension called IPvFoo that actually tracks the IP
addresses contacted during the load of the displayed page. I'll let you make
a guess as to how many unique IP addresses were contacted during a load
of https://www.cnn.com
...
...
...
145, at least half of which appeared to be analytics. And that's only the
hosts that were contacted by my laptop for HTTP, and doesn't count DNS, or
load-balancing front ends, or all the back-end boxes. As I commented over on
NANOG, we've gotten to a point similar to that of AT&T long distance, where 60%
of the effort of connecting a long distance phone call was the cost of
accounting and billing for the call.
_______________________________________________
Starlink mailing list
starl...@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/starlink
--
Please send any postal/overnight deliveries to:
Vint Cerf
1435 Woodhurst Blvd
McLean, VA 22102
703-448-0965
until further notice
_______________________________________________
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel