On Sat, 16 Mar 2024, Colin_Higbie wrote:

At the same time, I do think if you give people tools where latency is rarely an issue (say a 10x improvement, so perception of 1/10 the latency), developers will be less efficient UNTIL that inefficiency begins to yield poor UX. For example, if I know I can rely on latency being 10ms and users don't care until total lag exceeds 500ms, I might design something that uses a lot of back-and-forth between client and server. As long as there are fewer than 50 iterations (500 / 10), users will be happy. But if I need to do 100 iterations to get the result, then I'll do some bundling of the operations to keep the total observable lag at or below that 500ms.
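To make that budget arithmetic concrete, here is a minimal Python sketch of it (the function names, the bundling factor, and the ceiling-division helper are my own illustrative framing, not anything from the post):

def max_round_trips(rtt_ms: float, budget_ms: float) -> int:
    # How many serial client/server round trips fit inside the latency budget.
    return int(budget_ms // rtt_ms)

def round_trips_needed(operations: int, ops_per_request: int) -> int:
    # Round trips required once operations are bundled into batches.
    return -(-operations // ops_per_request)  # ceiling division

budget_ms = 500   # total lag users will tolerate
rtt_ms = 10       # per-round-trip latency
print(max_round_trips(rtt_ms, budget_ms))    # 50 round trips fit in the budget
print(round_trips_needed(100, 1))            # 100 unbundled round trips -> over budget
print(round_trips_needed(100, 2))            # bundle 2 ops per request -> 50, back in budget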

I don't think developers think about latency at all (as a general rule)

they develop and test over their local LAN, and assume it will 'just work' over the Internet.

In terms of user experience (UX), I think of there as being "good enough" plateaus based on different use-cases. For example, when web browsing, even 1,000ms of latency is barely noticeable. So any web browser application that comes in under 1,000ms will be "good enough." For VoIP, the "good enough" figure is probably more like 100ms. For video conferencing, maybe it's 80ms (the ability to see the person's facial expression likely increases the expectation of reactions and reduces the tolerance for lag). For some forms of cloud gaming, the "good enough" figure may be as low as 5ms.

1 second for the page to load is acceptable (not nice), but a one-second delay in reacting to a click is unacceptable.

As I understand it, below 100ms is considered an 'instantaneous response' for most people.

That's not to say that 20ms isn't better for VoIP than 100 or 500ms isn't better than 1,000 for web browsing, just that the value for each further incremental reduction in latency drops significantly once you get to that good-enough point. However, those further improvements may open entirely new applications, such as enabling VoIP where before maybe it was only "good enough" for web browsing (think geosynchronous satellites).
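To make those plateaus concrete, a small Python sketch using the thresholds quoted above (the dictionary framing, the helper function, and the roughly 600ms geosynchronous round-trip figure are my illustrative assumptions):

GOOD_ENOUGH_MS = {
    "web browsing": 1000,
    "VoIP": 100,
    "video conferencing": 80,
    "cloud gaming": 5,
}

def usable_applications(observed_latency_ms: float) -> list[str]:
    # Which use cases stay on their "good enough" plateau at this latency.
    return [app for app, budget in GOOD_ENOUGH_MS.items()
            if observed_latency_ms <= budget]

# A roughly 600ms geosynchronous-satellite round trip only clears the web budget;
# each further reduction newly enables another class of application.
print(usable_applications(600))   # ['web browsing']
print(usable_applications(90))    # ['web browsing', 'VoIP']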

the problem is that latency stacks: you click on the web page, you do a DNS lookup for the page, then an HTTP request for the page contents, which triggers an HTTP request for a CSS file, and possibly multiple DNS/HTTP requests for libraries

so 100ms of latency on the network can result in multi-second page load times for the user (even if all of the content ends up already being cached)
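A toy Python model of that stacking, which also shows why testing on a local LAN (as above) hides the problem; the request chain, the one-round-trip-per-step simplification, and the RTT figures are my assumptions:

SERIAL_STEPS = [
    "DNS lookup for the page",
    "TCP + TLS handshake",          # in reality more than one round trip
    "HTTP request for the HTML",
    "HTTP request for the CSS",
    "DNS lookup for a library CDN",
    "HTTP request for the library",
]

def page_load_ms(rtt_ms: float, steps=SERIAL_STEPS) -> float:
    # Total time if each step costs one network round trip, done serially.
    return rtt_ms * len(steps)

print(page_load_ms(0.5))   # ~3ms on a local LAN: invisible to the developer
print(page_load_ms(100))   # 600ms over a 100ms link, before any data transfer;
                           # more dependencies and multi-round-trip handshakes
                           # push this into multiple seconds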

<snip a bunch of good discussion>

So all taken together, there can be fairly straightforward descriptions of latency and bandwidth based on expected usage. These are not mysterious attributes; they can be easily calculated per user based on expected use cases.
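A minimal sketch of that per-user calculation in Python: the user's latency requirement is simply the strictest "good enough" budget among their expected use cases (budgets reuse the figures quoted earlier; the function and data shapes are mine). Bandwidth works the other way around, aggregating across concurrent uses rather than taking a minimum.

GOOD_ENOUGH_MS = {
    "web browsing": 1000,
    "VoIP": 100,
    "video conferencing": 80,
    "cloud gaming": 5,
}

def required_latency_ms(expected_use_cases: list[str]) -> float:
    # The network must deliver the tightest budget among the expected use cases.
    return min(GOOD_ENOUGH_MS[use] for use in expected_use_cases)

print(required_latency_ms(["web browsing", "VoIP"]))          # 100
print(required_latency_ms(["web browsing", "cloud gaming"]))  # 5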

however, the lag between new uses showing up and changes to the network driven by those new uses is multiple years long, so the network operators and engineers need to be proactive, not reactive.

don't wait until the users are complaining before upgrading bandwidth/latency

David Lang
_______________________________________________
Starlink mailing list
Starlink@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/starlink
