On 6/1/21 15:28, Livingood, Jason via NANOG wrote:
I have seen a lot of questions about what is needed for
video/eLearning/telehealth. IMO the beauty of those apps is that they
use adaptive bitrate protocols and can work in a wide range of last-mile
environments – even quite acceptably via a mobile network while you are
in transit. In my experience, most of the problems people encounter are
due to home LAN (especially WiFi) issues, with working latency (a.k.a.
latency under load) as an underlying factor.
Some recent papers from NetForecast on video conferencing
(https://www.netforecast.com/wp-content/uploads/NFR5137-Videoconferencing_Internet_Requirements.pdf
and eLearning
(https://www.netforecast.com/wp-content/uploads/NFR5141-eLearning-Bandwidth-Requirements.Final_.pdf
were based on actual observed usage rather than theoretical models. What
caught my eye was Figure 8 of the 1st paper, which lays out the
rationale for a network “latency budget”. In essence,
after 580 ms of delay someone will notice audio delay and feel the
session is bad. A conference platform’s clients & servers may use up
300 ms of their own in processing, leaving about 280 ms for the
network. If your working latency starts to exceed that on the LAN (not
uncommon), then user QoE degrades.
I'm based in Johannesburg.
The nearest Zoom cloud we register to for sessions is in Europe, roughly
150 ms - 170 ms away, depending on how high the sun is in the sky.
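Putting rough numbers on that: taking the NetForecast budget from the
quoted message and subtracting the Johannesburg-to-Europe path doesn't
leave much headroom for the access network and home LAN. A back-of-envelope
sketch (the 160 ms figure is just my midpoint of the 150-170 ms range):

```python
# Back-of-envelope latency budget, per the NetForecast figures quoted above.
TOTAL_BUDGET_MS = 580    # delay beyond which users notice audio lag
PLATFORM_MS = 300        # conferencing clients + servers' own processing
NETWORK_BUDGET_MS = TOTAL_BUDGET_MS - PLATFORM_MS   # what's left for the network

path_to_cloud_ms = 160   # assumed midpoint of the 150-170 ms Jo'burg-Europe path
lan_headroom_ms = NETWORK_BUDGET_MS - path_to_cloud_ms

print(f"Network budget:           {NETWORK_BUDGET_MS} ms")   # 280 ms
print(f"Left for access + LAN:    {lan_headroom_ms} ms")     # 120 ms
```

So before a single queue builds in the home, well over half of the network
budget is already spent on geography.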
So it's one thing for network operators to build a service with a decent
latency budget. But the services that run over that network also need to
do their part in cutting that ultimate latency down. Sometimes they are
proactive about it. Other times they get leaned on by the network
operators. In the middle is when both sides magically converge.
Mark.