David,

Just on your point that you "don't think developers think about latency at 
all": what developers (en masse, and as managed by their employers) care about 
is the user experience. If they don't think latency is an important part of the 
UX, then indeed they won't think about it. However, where latency is vital to the 
UX, such as in gaming or voice and video calling, it will be a focus. 

Standard QA will include use cases that they believe reflect the majority of 
their users. We have done testing with artificially high latencies to simulate 
geosynchronous satellite users, back when they represented a notable portion of 
our userbase. They no longer do (thanks to services like Starlink, the recent 
proliferation of FTTH, and even the continued spread of slower cable and DSL 
into more rural areas), so we no longer include those high 
latencies in our testing. This does indeed mean that our services will probably 
become less tolerant of higher latencies (and if we still have any 
geosynchronous satellite customers, they may resent the resulting degradation 
in service). Some might call this lazy on our part, but it's simply doing what's 
cost-effective for most of our users. 
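
For anyone curious what that kind of test looks like, here's a minimal sketch 
in Python. The fetch_profile() call, the ~600ms RTT figure, and the 2s budget 
are all hypothetical stand-ins for illustration, not our actual harness or 
thresholds:

    import asyncio
    import time

    GEO_RTT = 0.6  # ~600 ms round trip, roughly a geosynchronous satellite hop

    async def fetch_profile() -> dict:
        # Hypothetical stand-in for a real client/server round trip.
        return {"user": "example"}

    async def with_simulated_rtt(coro, rtt: float = GEO_RTT):
        # Inject an artificial round-trip delay on top of the real call.
        await asyncio.sleep(rtt)
        return await coro

    async def test_profile_load_under_geo_latency():
        start = time.monotonic()
        await with_simulated_rtt(fetch_profile())
        elapsed = time.monotonic() - start
        # The 2 s budget is an assumed UX threshold, purely for illustration.
        assert elapsed < 2.0, f"took {elapsed:.2f}s under simulated GEO latency"

    if __name__ == "__main__":
        asyncio.run(test_profile_load_under_geo_latency())

In practice this sort of delay is often injected at the network layer (e.g., a 
netem-style delay on a test link) rather than in application code, but the idea 
is the same.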

I'm estimating, but I'd guess about 3 sigma (roughly 99.7%) of our users see 
typical unloaded latency of under 120ms. You or others on this list probably 
know better than I what fraction of our users will suffer severe enough 
bufferbloat to push a perceptible percentage of their transactions beyond 200ms. 

Fortunately, in our case, even high latency shouldn't be too terrible, but as 
you rightly point out, if there are many iterations, 1s minimum latency could 
yield a several-second lag, which would be poor UX for almost any application. 
Since we're no longer testing for that, on the premise that 1s minimum latency 
is no longer a common real-world scenario, it's possible those painful lags 
could creep into our system without our knowledge.
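
To put rough numbers on that (the round-trip counts below are illustrative 
assumptions, not measurements from our system):

    def observed_lag(serial_round_trips: int, rtt_s: float) -> float:
        # Total user-visible lag when round trips happen back-to-back.
        return serial_round_trips * rtt_s

    print(observed_lag(50, 0.010))  # 0.5 s -- the 500 ms / 10 ms budget from my earlier example, quoted below
    print(observed_lag(5, 0.120))   # 0.6 s -- a chatty flow at 120 ms typical latency
    print(observed_lag(5, 1.0))     # 5.0 s -- the same flow at 1 s minimum latency

The same number of back-and-forth iterations goes from tolerable to painful 
purely on the minimum latency.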

This is rational, and it's what we should expect and want application and 
solution developers to do. We would not want developers to spend time, and 
thereby increase costs, focusing on areas that are not particularly important 
to their users and customers. 

Cheers,
Colin


-----Original Message-----
From: David Lang <da...@lang.hm> 
Sent: Saturday, March 16, 2024 7:06 PM
To: Colin_Higbie <chigb...@higbie.name>
Cc: Dave Taht via Starlink <starlink@lists.bufferbloat.net>
Subject: RE: [Starlink] It’s the Latency, FCC

On Sat, 16 Mar 2024, Colin_Higbie wrote:

> At the same time, I do think if you give people tools where latency is 
> rarely an issue (say a 10x improvement, so perception of 1/10 the 
> latency), developers will be less efficient UNTIL that inefficiency 
> begins to yield poor UX. For example, if I know I can rely on latency 
> being 10ms and users don't care until total lag exceeds 500ms, I might 
> design something that uses a lot of back-and-forth between client and 
> server. As long as there are fewer than
> 50 iterations (500 / 10), users will be happy. But if I need to do 100 
> iterations to get the result, then I'll do some bundling of the 
> operations to keep the total observable lag at or below that 500ms.

I don't think developers think about latency at all (as a general rule)
