Beautifully said, David Lang. I completely agree.

At the same time, I do think if you give people tools where latency is rarely 
an issue (say a 10x improvement, so perception of 1/10 the latency), developers 
will be less efficient UNTIL that inefficiency begins to yield poor UX. For 
example, if I know I can rely on latency being 10ms and users don't care until 
total lag exceeds 500ms, I might design something that uses a lot of 
back-and-forth between client and server. As long as there are fewer than 50 
iterations (500 / 10), users will be happy. But if I need to do 100 iterations 
to get the result, then I'll do some bundling of the operations to keep the 
total observable lag at or below that 500ms. 
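
Just to make that budgeting arithmetic concrete, here's a minimal Python sketch 
of the decision (the 10ms round trip and 500ms tolerance are the hypothetical 
figures from above, not measurements):

# Back-of-the-envelope latency budgeting for a chatty client-server design.
# Figures are the hypothetical ones from the example above.
PER_ROUND_TRIP_MS = 10    # assumed network latency per client-server exchange
USER_TOLERANCE_MS = 500   # assumed lag beyond which users start to notice

def max_round_trips(per_rt_ms=PER_ROUND_TRIP_MS, tolerance_ms=USER_TOLERANCE_MS):
    """How many sequential round trips fit inside the user's lag budget."""
    return int(tolerance_ms // per_rt_ms)

def needs_bundling(required_round_trips):
    """True if operations must be batched to stay within the budget."""
    return required_round_trips > max_round_trips()

print(max_round_trips())     # 50 round trips fit in a 500ms budget
print(needs_bundling(100))   # True: 100 round trips would blow past 500ms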

I remember programming computer games in the 1980s, when the typical amount of 
RAM users had increased. Before that, I had to contort my code to get it to run 
in 32kB. After the increase, I could stretch out and use 48kB and stop wasting 
time shoehorning my code or loading in segments from floppy disk to fit the 
limited RAM. To your point: yes, this made things faster for me as a developer, 
just as
the latency improvements ease the burden on the client-server application 
developer who needs to ensure a maximum lag below 500ms.

In terms of user experience (UX), I think of there as being "good enough" 
plateaus based on different use cases. For example, when web browsing, even 
1,000ms of latency is barely noticeable. So any web browser application that 
comes in under 1,000ms will be "good enough." For VoIP, the "good enough" 
figure is probably more like 100ms. For video conferencing, maybe it's 80ms 
(the ability to see the person's facial expression likely increases the 
expectation of reactions and reduces the tolerance for lag). For some forms of 
cloud gaming, the "good enough" figure may be as low as 5ms. 
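
If it helps to see those plateaus side by side, here's a tiny Python sketch 
using the rough figures above (my illustrative numbers, not standards):

# Rough "good enough" latency plateaus from the discussion above
# (illustrative figures, not standards or measurements).
GOOD_ENOUGH_LATENCY_MS = {
    "web browsing": 1000,
    "VoIP": 100,
    "video conferencing": 80,
    "cloud gaming": 5,
}

def supported_applications(measured_latency_ms):
    """List the use cases a link with this latency is 'good enough' for."""
    return [app for app, limit in GOOD_ENOUGH_LATENCY_MS.items()
            if measured_latency_ms <= limit]

print(supported_applications(600))  # a ~600ms link: only ['web browsing']
print(supported_applications(30))   # ['web browsing', 'VoIP', 'video conferencing']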

That's not to say that 20ms isn't better for VoIP than 100ms, or that 500ms 
isn't better than 1,000ms for web browsing, just that the value of each further 
incremental reduction in latency drops significantly once you get to that 
good-enough point. However, those further improvements may open entirely new 
applications, such as enabling VoIP where before maybe it was only "good 
enough" for web browsing (think geosynchronous satellites).

In other words, rather than just chasing ever lower latency, what matters is 
providing SUFFICIENTLY LOW latency for users to perform their 
intended applications. Getting even lower is still great for opening things up 
to new applications we never considered before, just like faster CPUs, more 
RAM, better graphics, etc. have always done since the first computer. But if 
we're talking about measuring what people need today, this can be done fairly 
easily based on intended applications. 

Bandwidth scales a little differently. There's still a "good enough" level for 
each use case: enough to load a typical web page in about 5s (as web pages 
become ever more complex and dynamic, the bandwidth needed to hit that target 
keeps rising), 1Mbps for VoIP, 7Mbps UL/DL for video conferencing, 20Mbps DL 
for 4K streaming, etc. In addition, bandwidth scales roughly linearly with the 
number of concurrent users. If 1 
user needs 15Mbps to stream 4K, 3 users in the household will need about 45Mbps 
to all stream 4K at the same time, a very real-world scenario at 7pm in a home. 
This differs from how multiple users affect latency. I don't know exactly how 
latency scales with the number of users, but I do know that if it's 20ms with 1 
user, it's NOT 40ms with 2 users, 60ms with 3, and so on. With the bufferbloat 
improvements created and put forward by members of this group, I don't think 
latency increases much with multiple concurrent streams.
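
As a rough sketch of that household sizing (using the per-stream figures above, 
with linear scaling applied to bandwidth only, since latency doesn't add up 
this way):

# Illustrative household downlink sizing: per-stream figures from above,
# scaled linearly by the number of concurrent users of each application.
# (Latency is deliberately NOT modeled this way; with good queue management
# it barely grows with concurrent streams.)
PER_STREAM_DL_MBPS = {
    "VoIP": 1,
    "video conferencing": 7,
    "4K streaming": 15,   # roughly 15-20 Mbps per 4K stream
}

def household_downlink_mbps(concurrent_use):
    """Sum per-stream needs times the number of simultaneous users of each."""
    return sum(PER_STREAM_DL_MBPS[app] * n for app, n in concurrent_use.items())

# The 7pm scenario: three people all streaming 4K at once.
print(household_downlink_mbps({"4K streaming": 3}))                           # 45
print(household_downlink_mbps({"4K streaming": 2, "video conferencing": 1}))  # 37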

So, taken all together, there can be fairly straightforward descriptions of 
latency and bandwidth based on expected usage. These are not mysterious 
attributes; they can be calculated fairly easily per user based on expected use 
cases.

Cheers,
Colin

-----Original Message-----
From: David Lang <da...@lang.hm> 
Sent: Friday, March 15, 2024 7:08 PM
To: Spencer Sevilla <spencer.builds.netwo...@gmail.com>
Cc: Colin_Higbie <chigb...@higbie.name>; Dave Taht via Starlink 
<starlink@lists.bufferbloat.net>
Subject: Re: [Starlink] Itʼs the Latency, FCC

one person's 'wasteful resolution' is another person's 'large enhancement'

going from 1080p to 4k video is not being wasteful, it's opting to use the 
bandwidth in a different way.

saying that it's wasteful for someone to choose to do something is saying that 
you know better what their priorities should be.

I agree that increasing resources allow programmers to be lazier and write apps 
that are bigger, but they are also writing them in less time.

What right do you have to say that the programmer's time is less important than 
the ram/bandwidth used?

I agree that it would be nice to have more people write better code, but 
everything, including this, has trade-offs.

David Lang

