On Wed, 1 May 2024, Eugene Y Chang wrote:
Thanks, David,
On Apr 30, 2024, at 6:12 PM, David Lang <da...@lang.hm> wrote:
On Tue, 30 Apr 2024, Eugene Y Chang wrote:
I’m not completely up to speed on the gory details. Please humor me. I am
pretty good at the technical marketing magic.
What is the minimum configuration of an ISP infrastructure where we can show an
A/B (before and after) test?
It can be a simplified scenario. The simpler, the better. We can talk through
the issues of how minimal is adequate. Of course, an ISP engineer will argue
against simplicity.
I did not see a very big improvement on a 4/0.5 Mbps DSL link, but there was
improvement.
Would a user feel the improvement during a 10-minute session:
shopping on Amazon?
using Salesforce?
working with a shared Google doc?
When it's only a single user, they are unlikely to notice any difference.
But if you have one person on Zoom, a second downloading something, and a third
on Amazon, it doesn't take much to notice a difference.
If you put OpenWrt on the customer router and configure cake with the targeted
bandwidth at ~80% of line speed, you will usually see a drastic improvement for
just about any connection.
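For concreteness, on OpenWrt this is usually done through the sqm-scripts /
luci-app-sqm package rather than by hand. A minimal /etc/config/sqm might look
something like the sketch below; the interface name and the rates (in kbit/s)
are made-up examples for a hypothetical 20/5 Mbit line shaped to roughly 80%.

    config queue 'wan'
            option enabled '1'
            option interface 'eth1'
            option download '16000'
            option upload '4000'
            option qdisc 'cake'
            option script 'piece_of_cake.qos'

Restarting sqm (/etc/init.d/sqm restart) applies it; the point is that it's a
handful of lines on the CPE, not a firmware change.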
Are you saying some of the benefits can be realized with just upgrading the
subscriber’s router? This makes adoption harder because the subscriber will
lose the ISP’s support for any connectivity issues. If a demo impresses the
subscribers, the ISP still needs to embrace this change; otherwise the ISP
will wash their hands of any subscriber problems.
Yes, just upgrading the subscriber's device with cake and configuring it
appropriately largely solves the problem (at the cost of sacrificing some
bandwidth, because cake isn't working directly on the data flowing from the ISP
to the client; it has to work indirectly by getting the Internet server to slow
down instead, and that's a laggy, imperfect work-around). If the ISP's router
does active queue management with fq_codel, then you don't have to do this.
This is how we know this works; many of us have been doing this for years (see
the bufferbloat mailing list and its archives).
If you can put fq_codel on both ends of the link, you can usually skip capping
the bandwidth.
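For reference, the no-shaping case really is minimal: on a Linux-based router
it is just the stock qdisc on whichever interface faces the bottleneck,
something like the line below (eth1 is only an example name).

    tc qdisc replace dev eth1 root fq_codel

fq_codel's defaults are deliberately sane, which is why no bandwidth number is
needed as long as the queue really sits at the bottleneck: the ISP side for
downstream, the subscriber side for upstream.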
This is good if it means the benefits can be achieved with just the CPE. It
also limits the changes to the subscribers who care.
fq_codel on the ISP's router for downlink, and on the subscriber's router for
uplink. Putting cake on the router on the subscriber's end and tuning it
appropriately can achieve most of the benefit, but is more work to configure.
Unfortunately, it's not possible to just add this to the ISP's existing
hardware without having the source for the firmware there (and if they have
their queues in ASICs, it's impossible to change them).
Is this just an alternative to having the change at the CPE?
Yes, this is harder for routers in the network.
Simple fq_codel on both ends of the bottleneck connection works quite well
without any configuration. Cake adds some additional fairness capabilities and
has a mode to work around the router on the other end of the bottleneck not
doing active queue management.
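To make that concrete: the work-around mode is what you get when you shape the
download direction yourself, a bit below line rate, and tell cake it is sitting
on the ingress side. This is roughly what the sqm-scripts set up under the
hood; the interface names and the 16Mbit figure below are placeholders, not a
recommendation.

    # redirect inbound WAN traffic through an ifb device so a shaper can act on it
    ip link add name ifb4eth1 type ifb
    ip link set ifb4eth1 up
    tc qdisc add dev eth1 handle ffff: ingress
    tc filter add dev eth1 parent ffff: protocol all matchall \
            action mirred egress redirect dev ifb4eth1
    # shape somewhat below the measured downstream rate; the 'ingress' keyword
    # tells cake to account for packets before it drops them
    tc qdisc add dev ifb4eth1 root cake bandwidth 16Mbit ingress

Plain fq_codel, by contrast, has no shaper of its own, which is why it only
helps where it already owns the bottleneck queue.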
If you can point at the dramatic decrease in latency, with no bandwidth losses,
that Starlink has achieved on existing hardware, that may help.
This is good to know for the engineers. It adds confusion for the
subscribers.
There are a number of ISPs around the world that have implemented active queue
management and report very good results from doing so.
Can we get these ISPs to publicly report how they have achieved great latency
reduction?
We can help them get credit for caring about their subscribers. It would/could
be a (short-term) competitive advantage.
Of course, their competitors will (might) adopt these changes and eliminate the
advantage, BUT the subscribers will retain the glow of the initial marketing
for a much longer time.
Several of them have done so; I think someone else posted a report from one in
this thread.
But showing that their existing hardware can do it when their upstream vendor
doesn't support it is going to be hard.
Is the upstream vendor a network provider or a computing center?
Getting good latency from the subscriber, through the access network, to the
edge computing centers and CDNs would be great. The CDNs would harvest the
benefits. The other computing configurations would have to make the change to
be competitive.
I'm talking about the manufacturer of the routers that the ISPs deploy at the
last hop before getting to the subscriber, and the router on the subscriber end
of the link (although most of those are running some variation of OpenWrt, so
turning it on would not be significant work for the manufacturer).
We would have done our part in pushing the next round of adoption.
Many of us have been pushing this for well over a decade. Getting Starlink's
attention to address their bufferbloat issues is a major success.
David Lang
Gene
David Lang
We will want to show the human-visible impact and not debate good or
not-so-good measurements. If we get the business and community subscribers on
our side, we win.
Note:
Stage 1 is to show we have a pure software fix (that can work on their
hardware). The fix is “so dramatic” that subscribers can experience it without
debating measurements.
Stage 2 discusses why the ISP should demand that their equipment vendors add
this software. (The software could already be available, but the ISP doesn’t
think it is worth the trouble to enable it.) Nothing will happen unless we stay
engaged. We need to keep the subscribers engaged, too.
Should we have a conference call to discuss this?
Gene
----------------------------------------------
Eugene Chang
IEEE Life Senior Member
On Apr 30, 2024, at 3:52 PM, Jim Forster <j...@connectivitycap.com> wrote:
Gene, David,
Agreed that the technical problem is largely solved with cake & codel.
Also that demos are good. How to do one for this problem?
— Jim
The bandwidth mantra has been used for so long that a technical discussion
cannot unseat the mantra.
Some technical parties use the mantra to sell more, faster, ineffective
service. Gullible customers accept that they would be happy if they could
afford even more speed.
Shouldn’t we create a demo to show the solution?
To show is more effective than to debate. It is impossible to explain to some
people.
Has anyone tried to create a demo (to unseat the bandwidth mantra)?
Is an effective demo too complicated to create?
I’d be glad to participate in defining a demo and publicity campaign.
Gene
On Apr 30, 2024, at 2:36 PM, David Lang <da...@lang.hm> wrote:
On Tue, 30 Apr 2024, Eugene Y Chang via Starlink wrote:
I am always surprised how complicated these discussions become. (Surprised
mostly because I forgot the kind of issues this community cares about.) The
discussion doesn’t shed light on the following scenarios.
While watching streaming content, activating the controls needed to switch
content sometimes (often?) has long pauses. I attribute that to bufferbloat and
high latency.
With a happy household user watching streaming media, a second user could have
a terrible shopping experience with Amazon. The interactive response could be
(is often) horrible. (Personally, I would be doing email and working on a
shared doc. The Amazon analogy probably applies to more people.)
How can we deliver graceful performance to both persons in a household?
Is seeking graceful performance too complicated to improve?
(I said “graceful” to allow technical flexibility.)
It's largely a solved problem from a technical point of view; fq_codel and cake
solve this.
The solution is just not deployed widely; instead, people argue that more
bandwidth is needed.
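One small illustration of the deployment gap, for anyone on a Linux machine:
the kernel's default qdisc is a one-line check, and distributions that ship
systemd's stock sysctl settings have already switched it to fq_codel, while the
routers at the actual bottlenecks mostly have not.

    sysctl net.core.default_qdisc
    # on many current distributions this prints: net.core.default_qdisc = fq_codel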
_______________________________________________
Starlink mailing list
Starlink@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/starlink