Brian Dickson <brian.peter.dick...@gmail.com> writes:

> Rob Seastrom wrote:
>
>> "Ricky Beam" <jfbeam at gmail.com> writes:
>>
>>> On Fri, 29 Nov 2013 08:39:59 -0500, Rob Seastrom <rs at seastrom.com> wrote:
>>>
>>>> So there really is no excuse on AT&T's part for the /60s on uverse 6rd...
>>>
>>> ...
>>>
>>> Handing out /56's like Pez is just wasting address space -- someone
>>> *is* paying for that space.  Yes, it's waste; giving everyone 256
>>> networks when they're only ever likely to use one or two (or maybe
>>> four), is intentionally wasting space you could've assigned to
>>> someone else.  (or *sold* to someone else :-))  IPv6 may be huge to
>>> the power of huge, but it's still finite.  People like you are
>>> repeating the same mistakes from the early days of IPv4...
>>
>> There's finite, and then there's finite.  Please complete the
>> following math assignment so as to calibrate your perceptions before
>> leveling further allegations of profligate waste.
>>
>> Suppose that every mobile phone on the face of the planet was an "end
>> site" in the classic sense and got a /48 (because miraculously, the
>> mobile providers aren't being stingy).
>>
>> Now give such a phone to every human on the face of the earth.
>>
>> Unfortunately for our conservation efforts, every person with a cell
>> phone is actually the cousin of either Avi Freedman or Vijay Gill,
>> and consequently actually has FIVE cell phones on active plans at
>> any given time.
>>
>> Assume 2:1 overprovisioning of address space because per Cameron
>> Byrne's comments on ARIN 2013-2, the cellular equipment providers
>> can't seem to figure out how to have N+1 or N+2 redundancy rather
>> than 2N redundancy on Home Agent hardware.
>>
>> What percentage of the total available IPv6 space have we burned
>> through in this scenario?  Show your work.
>>
>> -r
>
> Here's the problem with the math, presuming everyone gets roughly the
> same answer:
>
> The efficiency (number of prefixes vs total space) is only achieved if
> there is a "flat" network, which carries every IPv6 prefix (i.e. that
> there is no aggregation being done).
>
> This means 1:1 router slots (for routes) vs prefixes, globally, or even
> internally on ISP networks.
>
> If any ISP has > 1M customers, oops.  So, we need to aggregate.
>
> Basically, the problem space (waste) boils down to the question, "How
> many levels of aggregation are needed?"
>
> If you have variable POP sizes, region sizes, and assign/aggregate
> towards customers topologically, the result is:
> - the need to maintain power-of-2 address block sizes (for
>   aggregation), plus
> - the need to aggregate at each level (to keep #prefixes sane), plus
> - asymmetric sizes which don't often end up being just short of the
>   next power-of-2
> - equals (necessarily) low utilization rates
> - i.e. much larger prefixes than would be suggested by "flat"
>   allocation from a single pool.
> Here's a worked example, for a hypothetical big consumer ISP:
> - 22 POPs with "core" devices
> - each POP has anywhere from 2 to 20 "border" devices (feeding access
>   devices)
> - each "border" has 5 to 100 "access" devices
> - each access device has up to 5000 customers
>
> Rounding up each, using max(count-per-level) as the basis, we get:
> 5000 -> 8192 (2^13)
> 100  -> 128  (2^7)
> 20   -> 32   (2^5)
> 22   -> 32   (2^5)
> 5+5+7+13 = 30 bits of aggregation
> 2^30 of /48 = /18
>
> This leaves room for 2^10 such ISPs (a mere 1024), from the current /8.
> A thousand ISPs seems like a lot, but consider this: the ISP we did
> this for might only have 3M customers.  Scale this up (horizontally or
> vertically or both), and it is dangerously close to capacity already.
>
> The answer above (worked math) will be unique per ISP.  It will also
> drive consumption at the apex, i.e. the size of allocations to ISPs.
>
> And the root of the problem was brought into existence by the
> insistence that every network (LAN) must be a /64.
>
> That's my 2 cents/observation.
>
> Brian
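(Quick sanity check on the arithmetic in that worked example -- a minimal
Python sketch using the hypothetical per-level counts above, not numbers
from any real network:

    import math

    # Hypothetical per-level maximums from the worked example above.
    levels = {
        "customers per access device": 5000,  # rounds up to 2^13
        "access devices per border":    100,  # rounds up to 2^7
        "borders per POP":               20,  # rounds up to 2^5
        "POPs":                          22,  # rounds up to 2^5
    }

    # Round each level up to a power of two and add the bits.
    bits = sum(math.ceil(math.log2(n)) for n in levels.values())
    isp_prefix = 48 - bits                    # a /48 per end site

    print("aggregation bits:", bits)
    print("aggregate per ISP: /%d" % isp_prefix)
    print("such ISPs in a /8: 2^%d = %d" % (isp_prefix - 8,
                                            2 ** (isp_prefix - 8)))

It prints 30 bits of aggregation, a /18 per ISP, and 2^10 = 1024 such
/18s in a /8, matching the figures quoted above.)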
At a glance, I think there's an implicit assumption in your calculation
that each ISP has to be able to hold the whole world (unlikely), and/or
that there is no such thing as mobile IP or any other kind of tunneling
technology going on within the mobile network (also wrong, from
everything I understand).

Also, I'm not sure where "from the current /8" comes from, as there's a
/3 in play (1/8 of the total space, maybe that was it?) and each RIR is
getting space in chunks of /12...

Re-working your conclusion statement without redoing the math, "This
leaves room for 2^15 such ISPs (a mere 32768), from the current /3."

Oddly enough, I'm OK with that. :)

-r
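P.S. A rough back-of-the-envelope sketch of the "math assignment" above,
in Python.  The 8-billion world population is an assumed round number;
the five phones per person and the 2:1 overprovisioning come from the
assignment as posed:

    # Back-of-the-envelope: a /48 per phone, five phones per person,
    # 2:1 overprovisioning on Home Agent hardware.
    people       = 8_000_000_000        # assumed round world population
    phones       = 5 * people
    sites_needed = 2 * phones           # /48s consumed after overprovisioning

    total_48s    = 2 ** 48              # /48s in the entire 128-bit space
    unicast_48s  = 2 ** (48 - 3)        # /48s in 2000::/3 (current global unicast)

    print("of all IPv6 space: %.4f%%" % (100.0 * sites_needed / total_48s))
    print("of 2000::/3:       %.4f%%" % (100.0 * sites_needed / unicast_48s))

    # Re-worked conclusion: /18-sized ISP aggregates carved out of the /3.
    print("/18s in a /3: 2^%d = %d" % (18 - 3, 2 ** (18 - 3)))

That comes out to roughly 0.03% of the total space (about 0.23% of
2000::/3), and 32768 /18-sized ISP aggregates in the /3 -- the 2^15
figure above.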