Hi David,

you probably noticed that the cited paper is about LTE/5G, where the base 
station operates the scheduler that arbitrates both up- and downstream 
transmissions. According to the paper, that ends up causing bursts on the 
upstream (I wonder how L4S, with its increased burst sensitivity, is going 
to deal with that*)...
I found the paper really nice: short and concise, with a simple overview of 
the grant-request system used....
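
For my own intuition I sketched the request/grant behavior in a few lines of 
Python (a toy with invented numbers, not the paper's model): even a perfectly 
smooth 1-packet-per-millisecond source leaves the handset in bursts, one 
burst per grant:

# Toy model of how a request/grant uplink turns smooth traffic into
# bursts (all numbers invented for illustration).
GRANT_PERIOD_MS = 10        # assume the scheduler grants the handset airtime every 10 ms
ARRIVALS_MS = range(50)     # the app hands over one packet per millisecond

pending = []
for t in ARRIVALS_MS:
    pending.append(t)                       # packet waits for the next grant
    if (t + 1) % GRANT_PERIOD_MS == 0:      # grant arrives
        print(f"t={t + 1:3d} ms: burst of {len(pending)} packets sent back-to-back")
        pending.clear()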



*) My bet is that they are going to claim LTE/5G will have to change their 
ways....


> On Dec 1, 2021, at 21:26, David P. Reed <dpr...@deepplum.com> wrote:
> 
> What's the difference between uplink and downlink?  In DOCSIS the rate 
> asymmetry was the issue. But in WiFi, the air interface is completely 
> symmetric (802.11ax, though, maybe not, because of central polling).
>  
> In any CSMA link (WiFi), there is no "up" or "down". There is only sender and 
> receiver, and each station and the AP are always doing both.
>  
> The problem with shared media links is that the "waiting queue" is 
> distributed, so to manage queue depth, ALL of the potential senders must 
> respond aggressively to excess packets.
>  
> This is why a lot (maybe all) of the silicon vendors are making really bad 
> choices w.r.t. bufferbloat by adding buffering in the transmitter chip 
> itself, and not discarding or marking when queues build up. It's the same 
> thing that constantly leads hardware guys to think that more memory for 
> buffers improves throughput - and to advertise only throughput.
>  
> To say it again: More memory *doesn't* improve throughput when the queue 
> depths exceed one packet on average, and it degrades "goodput" at higher 
> levels by causing the ultimate sender to "give up" due to long latency. (At 
> the extreme, users will just click again on a slow URL, causing all the 
> throughput to be "badput", because they force the system to transmit it 
> again, while leaving packets clogging the queues.)
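
        Little's law makes this concrete: the standing queue adds 
delay = queued bytes / link rate, while throughput stays pinned at the link 
rate no matter how much memory sits behind the queue. A quick Python sketch 
(link rate and packet size are made-up round numbers):

# Bigger buffers add delay, not throughput (numbers invented).
LINK_RATE_BPS = 100e6            # 100 Mbit/s bottleneck
PACKET_BITS = 1500 * 8

for depth in (1, 10, 100, 1000):
    delay_ms = depth * PACKET_BITS / LINK_RATE_BPS * 1e3
    print(f"{depth:5d} packets queued -> +{delay_ms:7.2f} ms delay, "
          "throughput still 100 Mbit/s")
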
>  
> So, if you want good performance on a shared radio medium, you need to squish 
> each flow's queue depth down from sender to receiver to "average < 1 in 
> queue", and also drop packets when there are too many simultaneous flows 
> competing for airtime. And if your source process can't schedule itself 
> frequently enough, don't expect the network to replace buffering at the TCP 
> source and destination - it is not intended to be a storage system.
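
        Agreed, and that is exactly the job of an AQM at the bottleneck 
queue. A minimal RED-flavored sketch in Python (names and thresholds are 
invented; this is not CoDel or PIE): mark or drop probabilistically once the 
average depth exceeds ~1 packet, so senders back off before the buffer fills:

import random
from collections import deque

TARGET_DEPTH = 1.0    # aim for ~1 packet in queue on average
ALPHA = 0.1           # EWMA smoothing for the average depth

queue = deque()
avg_depth = 0.0

def enqueue(pkt):
    """Drop (or ECN-mark) with rising probability once the smoothed
    queue depth exceeds the target, instead of buffering blindly."""
    global avg_depth
    avg_depth = (1 - ALPHA) * avg_depth + ALPHA * len(queue)
    excess = avg_depth - TARGET_DEPTH
    if excess > 0 and random.random() < excess / (excess + 1):
        return False          # signal congestion instead of queueing
    queue.append(pkt)
    return True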

        With a variable-rate link like LTE or WiFi, though, some buffering 
above 1-in-queue seems unavoidable: even if steady-state traffic at X Mbps 
converged to 1-in-queue, if the rate then drops to, say, X/10 Mbps, all the 
packets in flight will hit the buffers at the upstream end of the bottleneck 
link, no? If rate changes happen rarely, I guess the "average" will still be 
meaningful, but what if rate changes happen often?
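
        Back-of-envelope, in Python (rate and RTT are invented round 
numbers): one BDP in flight at X = 100 Mbit/s and 100 ms RTT is roughly 833 
full-size packets; when the link collapses to X/10, merely draining those 
already-sent packets takes about a second:

# What happens to in-flight data when the link rate collapses
# (rate and RTT are invented round numbers).
RATE_BPS = 100e6          # steady-state rate X
RTT_S = 0.100             # round-trip time

in_flight_bits = RATE_BPS * RTT_S            # ~1 BDP already in flight
drain_s = in_flight_bits / (RATE_BPS / 10)   # link suddenly at X/10

print(f"{in_flight_bits / (1500 * 8):.0f} packets land in the buffer; "
      f"draining them at X/10 adds ~{drain_s * 1e3:.0f} ms of delay")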

Regards
        Sebastian



>  
> On Tuesday, November 30, 2021 7:13pm, "Dave Taht" <dave.t...@gmail.com> said:
> 
> > Money quote: "Figure 2a is a good argument to focus latency
> > research work on downlink bufferbloat."
> > 
> > It peaked at 1.6s in their test:
> > https://hal.archives-ouvertes.fr/hal-03420681/document
> > 
> > --
> > I tried to build a better future, a few times:
> > https://wayforward.archive.org/?site=https%3A%2F%2Fwww.icei.org
> > 
> > Dave Täht CEO, TekLibre, LLC
