> On 26 Jan 2022, at 5:24 pm, Eliot Lear <l...@lear.ch> wrote:
> 
> [copy architecture-discuss]
> 
> Geoff,
> 
> This is a pretty good characterization.  In fact, it's exactly where we went 
> in the NSRG nearly 20 years ago, just after MO first kicked out 8+8.  For 
> people's reference, we looked at naming at different levels, including at L3, 
> in DNS, URNs (which were relatively new things then), HIP, etc.  There were 
> then several efforts in both the IRTF and IETF to deal with portability of 
> connectivity in transport.  I think QUIC gets a lot of that right.  QUIC – at 
> least at the moment – has some limitations for local use (either you have 
> certs or some sort of prearranged bidirectional authentication).  I think 
> it's a fair engineering assumption that those will be ironed out.
> 
> With all of that having been said, I go back to Dirk's note: what properties 
> do we need at L3?
> 
>       • If QoS is still a thing, then admission control has to be part of the 
> story.
>       • There is a tussle between endpoint privacy and the endpoint itself 
> being a threat.  In the latter case, the endpoint has to be identified.  But 
> to whom?
> As you describe, a lot of routing has moved up a layer.  Sort of.  But not 
> all.  CDNs need to be well aware of topology, and that comes from L3, perhaps 
> with additional OOB info.
> 
> So... what's missing from L3 at this point that we need, and is it even a 
> good question to ask?  I ask that because I have seen recently a retrenching 
> AWAY from IPv6.  If that is happening, what makes us think that any new L3 
> beyond IPv6 would ever get adopted?  OR... what is missing from IPv6 that 
> would cause people to move?
> 


Hi Eliot,

Thanks for your thoughtful response.

In my mind “What is the role of IP addresses in today’s Internet?” and “What 
properties / semantic role do we need from, or implicitly assume of, IP 
addresses?” are quite distinct questions.

Some brief background (which I am sure you are aware of, but to help me 
organise my thoughts, and possibly for the benefit of others on this list):…

It’s pretty clear that we commenced down the road of re-casting the role of IP 
addresses at about the same time as we started down the track of consumer 
deployment via dial-up ISPs. We introduced the notion of time-shared addresses, 
where the IP address you got lasted only for the life of the connection, and 
we subsequently introduced the notion of port-based address sharing via NATs. I 
don't think issues of impending scarcity were the predominant driving factors 
here - I think it was a more mundane issue of cost transfer to the consumer, 
i.e. from the ISP’s perspective we headed down the shared address road because 
it was cheaper for the industry at the time to do so.
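To make the port-based sharing idea concrete, here is a minimal sketch of the translation table a NAT keeps: many private (address, port) endpoints are multiplexed onto one public address by rewriting the source port. All names here are illustrative, not taken from any real implementation.

```python
# Minimal sketch of port-based address sharing (NAPT): many private
# (addr, port) endpoints share one public address via a translation table.
# Illustrative only - not any real NAT implementation.

class Nat:
    def __init__(self, public_addr):
        self.public_addr = public_addr
        self.next_port = 40000          # next free public-side port
        self.out = {}                   # (priv_addr, priv_port) -> pub_port
        self.back = {}                  # pub_port -> (priv_addr, priv_port)

    def translate_out(self, priv_addr, priv_port):
        """Rewrite an outbound flow's source to (public_addr, pub_port)."""
        key = (priv_addr, priv_port)
        if key not in self.out:
            self.out[key] = self.next_port
            self.back[self.next_port] = key
            self.next_port += 1
        return self.public_addr, self.out[key]

    def translate_in(self, pub_port):
        """Map an inbound packet's destination port back to the private host."""
        return self.back.get(pub_port)

nat = Nat("192.0.2.1")
a = nat.translate_out("10.0.0.5", 5000)  # first flow gets public port 40000
b = nat.translate_out("10.0.0.6", 5000)  # same private port, distinct public port
```

The cost-transfer point follows directly: the table lives in the ISP's (or customer's) NAT box, so one public address serves many subscribers, and the uniqueness requirement is pushed down to locally scoped (address, port) pairs.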

At around the same time (late ’80s / early ’90s) we were heading down the “running out 
of addresses” path, which ultimately led to the design of IPv6.

It is useful here to contrast the differences in these two approaches - the 
latter was an IETF-led command-and-control effort that attempted to anticipate 
future industry needs and produce a technology that would meet these needs. On 
the other hand, the industry was being driven by two imperatives: unchecked 
demand that completely swamped any efforts to satiate it (we were building as 
fast as we could, not as fast as consumers wanted), and the imperative to strip 
cost.

The recasting of the role of addresses was not a particularly deliberative 
process then, or now. It was an industry response to the prevailing conditions, 
and in a deregulated market-based activity that’s the only driving factor out 
there. 

If we are going to repeat the IPv6 exercise and attempt to second-guess the 
future market for the means of delivery of digital artifacts to consumers, then 
I guess we need to understand what is driving this industry today and in the 
near future. It appears to be true that CDNs feature big time right now. If you 
include video streaming data then I have heard figures of between 70% to as 
high as 90% of all delivered data being video streaming (No, I have not seen 
good public data to confirm these mutterings - I wish I did!). It also appears 
that enterprise network demand is morphing into the same CDNs as cloud 
services. The shared public network and its infrastructure is being 
marginalised (and privatised).

You raise the ghost of QoS in your message. I am reminded of Andrew Odlyzko’s 
(http://www.dtc.umn.edu/~odlyzko/) arguments at the time: the cheapest way to 
provision QoS for your customers is to buy more transmission capacity. True 
then. True today. CDNs exploit today’s environment of abundance of computing 
and storage to eliminate distance in communications. By bringing content and 
service under the noses of consumers we eliminate the cost and performance 
issues of accessing remote services. The service outcomes are faster, cheaper 
and generally more resilient in the CDN world. The “public” world is now in the 
throes of the death of transit and the public network has shrunk to the last 
mile access network. Why? Ultimately the answer is "It's cheaper this way!”. 
The industry is not going to head back to the rationing world of QoS anytime 
soon, if ever, in my opinion.

So why do we even need unique addressing any more? Surely all I need to do is 
to distinguish myself from the other clients in the service cone of my local 
CDN. Why do I need to use an address that distinguishes me from the other 
billions of client endpoints? Is it for the few residual applications that have 
not yet been sucked into the CDN world? The issue here is that uniqueness 
costs. And why should I spend a disproportionate amount of resources to support 
a function used by a residual trace amount of traffic? Sooner or later I will 
cut out that cost and not do it any more. As the CDNs continue to exploit the 
abundance of computing and storage the current shift of more points of presence 
positioned ever closer to the end clients will continue.

There is also a second factor, that of sunk cost. Nobody wants to pay to 
upgrade existing infrastructure. Nobody. So those who want to change things 
build around, over and tunnel through existing infrastructure. In a deregulated 
world where piecemeal uncoordinated actions predominate, the level of 
coordination and orchestration required to uplift common shared infrastructure 
is simply impossible. We say to ourselves that we outgrew flag days on the 
Internet many decades ago, and that's true, but at times we appear not to 
understand precisely what that implies about today. We have built an 
application-based superstructure of encapsulated tunnels in the network (QUIC) 
that neatly circumvents the entire question of infrastructure renewal. Whereas 
IPv6 has been head-butting against the sunk cost of existing infrastructure for 
more than two decades, the dramatic rise of QUIC, BBR, SVCB and HTTPS, and 
similar application-level technologies attests to the application world’s 
extreme distaste for engaging with the existing infrastructure.

For me the question is not about IPv6 any more - that is largely a question 
whose answer has little left to offer the industry. The question is 
more about the CDN world and the way applications create their own environment 
in a manner that is as disengaged and isolated from the underlying 
infrastructure as possible. (and if you think about it that is a pretty exact 
replay of the way IP’s stateless packet-based hop-by-hop forwarding treated the 
circuit-switched telephone infrastructure a few decades ago! What goes around, 
comes around!).

Then, as now, the issue was NOT “What’s the answer?” but “What’s the right 
question to ask?” - at least for me.


regards,

   Geoff






_______________________________________________
Int-area mailing list
Int-area@ietf.org
https://www.ietf.org/mailman/listinfo/int-area
