Michael Sturtz wrote on 25/10/2019 16:21:
Nick, I agree!  The problem is that, from an operational support and
protocol level, we created this monster by selling the idea of "end-to-
end connectivity" and "every end site will get a /64" — an idea that has
been sold even to end users.

The problem was more a cultural thing in the IETF: people wanted devices to have autonomy and be able to select their own addressing mechanism, rather than being subject to the whims of the network operator. Hence the intent behind the /64 boundary, and SLAAC, and the difficulties the IETF has with DHCP.
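To make that device-autonomy point concrete: with classic SLAAC (RFC 4862 using the RFC 4291 modified EUI-64 scheme), a host derives its own interface identifier from its MAC address and simply appends it to whatever /64 prefix the router advertises — no operator-run assignment involved. A minimal sketch (the function name and example values are illustrative, not from the thread):

```python
import ipaddress

def slaac_eui64(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Form a SLAAC address from an advertised /64 and a MAC (modified EUI-64)."""
    net = ipaddress.IPv6Network(prefix)
    assert net.prefixlen == 64  # SLAAC's EUI-64 scheme assumes a /64 on-link prefix
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                                # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]   # insert ff:fe mid-MAC
    iid = int.from_bytes(bytes(eui64), "big")
    return net[iid]                                  # prefix + interface ID

addr = slaac_eui64("2001:db8:1:2::/64", "00:11:22:33:44:55")
print(addr)  # 2001:db8:1:2:211:22ff:fe33:4455
```

The host picks the low 64 bits entirely on its own — which is exactly the autonomy the IETF culture wanted, and also why the /64 boundary is baked so deep into the protocol.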

The problem was (and is) that this vision didn't align well with reality, particularly enterprise but also content hosting and other deployment scenarios.

I understand that the ISPs really don’t want
customers to be able to serve content from consumer connections.
This is likely why they are randomly changing the /64 allocated to
end sites, especially on consumer lines.

The reason for this relates to address aggregation at the ISP rather than wanting to prevent consumers from serving content. Honestly, most ISPs don't care whether people put content services on their home connections, because usually this doesn't create much extra cost (DOCSIS and cell-phone systems excluded). What does cost a lot, though, is massive prefix disaggregation: you end up dealing with provisioning hell and gigantic IGP tables in order to give people static assignments, even though most people don't really need them. This is why it's mostly a commercial problem rather than a protocol issue.
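The aggregation cost is easy to see with the stdlib `ipaddress` module: a block of /64s delegated out of one contiguous pool collapses back into a single covering route, while a handful of static assignments scattered across pools each stay as their own IGP entry. A small sketch (the prefixes are documentation addresses, chosen for illustration):

```python
import ipaddress

# 256 consecutive /64s handed out from one pool: the IGP carries ONE route.
pool = [ipaddress.IPv6Network(f"2001:db8:0:{i:x}::/64") for i in range(256)]
print(list(ipaddress.collapse_addresses(pool)))          # [2001:db8::/56]

# Static /64s kept by customers across different pools: no aggregation possible,
# so every one of them is a separate more-specific route in the IGP.
static = [ipaddress.IPv6Network(p) for p in
          ("2001:db8:0:5::/64", "2001:db8:1:9::/64", "2001:db8:2:3::/64")]
print(len(list(ipaddress.collapse_addresses(static))))   # 3
```

Multiply the second case by a whole customer base and the disaggregation cost Nick describes becomes obvious, independent of any protocol-level concern.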

[…] event occurs.  I have personal experience with multiple devices that
use SLAAC breaking connectivity for some indeterminate period of time
when a network renumbering event occurs.  Yes, this could be due to
poorly implemented end devices etc., but the end result is that people
just disable IPv6 because of the headaches it causes.
Yep.

Nick
