> 
> Let's say I'm a national ISP, using 2001:db8::/32. I divide it like so:
> 
> - I reserve 1 bit for future allocation schemes, leaving me a /33;
> - 2 bits for network type (infrastructure, residential, business, LTE): /35
> - 3 bits for geographic region, state, whatever: /38
> - 5 bits for PoP, or city: /43
> 
> This leaves me 5 bits for end sites: no joy.

Here’s the problem… You started at the wrong end and worked in the wrong 
direction in your planning.

Let’s say you’re a national ISP. Let’s say you want to support 4 levels of aggregation:

- Let’s say that at the lowest level (POP/City) you serve 50,000 end-sites in your largest POP/City (16 bits).
- Let’s say you plan to max out at 32 POPs/Cities per region, your number from above (5 bits).
- Let’s say you plan to divide the country into 8 regions (3 bits).
- Let’s say for some reason you want to break your aggregation along the lines of service class (infrastructure, residential, business, LTE) as your top-level division (rarely a good idea, but I’ll go with it for now), and that you have 4 service classes (2 bits).
- Further, let’s say you decide to set aside half your address space for “future allocation schemes” (1 bit).

Each POP needs a /32.
We can combine the Region/POP numbers into an 8-bit field, so you need a /24 to cover all of your Regions and POPs.
You need 3 additional bits for your higher-level subdivisions (2 bits of service class plus the 1 reserved bit), which takes you to a /21. Let’s round to a nibble boundary and give you a /20.
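
For what it’s worth, here is the same bottom-up arithmetic as a small Python sketch (the bit counts are just the assumptions above, not fixed rules):

    END_SITE_PREFIX = 48   # one /48 per end-site
    END_SITE_BITS = 16     # 50,000 end-sites per POP fits in 16 bits
    POP_BITS = 5           # 32 POPs/Cities per region
    REGION_BITS = 3        # 8 regions
    CLASS_BITS = 2         # 4 service classes
    RESERVE_BITS = 1       # half the space held back for future schemes

    pop_prefix = END_SITE_PREFIX - END_SITE_BITS                   # /32 per POP
    region_pop_prefix = pop_prefix - (POP_BITS + REGION_BITS)      # /24 for all Regions/POPs
    isp_prefix = region_pop_prefix - (CLASS_BITS + RESERVE_BITS)   # /21
    isp_prefix -= isp_prefix % 4                                   # round to a nibble boundary: /20

    print(pop_prefix, region_pop_prefix, isp_prefix)               # 32 24 20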

With that /20, you can support up to 67 million end-sites in your original plan, while still leaving 3/4 of your address space fallow.

(That’s at /48 per end-site, by the way).
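
A quick sanity check on those capacity numbers, under the same assumptions:

    total_48s = 2 ** (48 - 20)             # /48 end-sites available in a /20: 268,435,456
    planned_sites = 2 ** (16 + 5 + 3 + 2)  # end-sites actually numbered by the plan: 67,108,864
    print(planned_sites)                   # roughly 67 million
    print(1 - planned_sites / total_48s)   # 0.75, i.e. 3/4 of the /20 left fallow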

Now, let’s consider: 7 billion people, each of whom represents 32 different end-sites, for a total of 224 billion end-sites worldwide.
224,000,000,000 / 67,000,000 = 3,344 (rounded up) total ISPs requiring /20s to serve every possible end-site on the planet.
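
The same back-of-the-envelope division, using the rounded 67 million figure from above:

    import math

    end_sites = 7_000_000_000 * 32                   # 224,000,000,000 end-sites worldwide
    isps_needed = math.ceil(end_sites / 67_000_000)  # /20-sized ISPs required
    print(end_sites, isps_needed)                    # 224000000000 3344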


There are 1,048,576 /20s in total, so after allocating a /20 to every ISP in the world, we still have 1,045,232 remaining.

Let’s assume that every end-site goes with dual-address multi-homing (an IPv6 prefix from each of two providers), which doubles the number of /20s consumed.

We are now left with 1,041,888 /20s. You still haven’t put a dent in it.

Even if we divide that by 8 and consider only the current /3 being allocated as global unicast (2000::/3), you still have 130,236 free /20s left.
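
And the /20 bookkeeping from the last few steps, with the final divide-by-8 mirroring the rough cut above:

    total_20s = 2 ** 20                        # /20s in all of IPv6: 1,048,576
    after_one_each = total_20s - 3_344         # one /20 per ISP: 1,045,232 left
    after_multihoming = total_20s - 2 * 3_344  # a second /20 per ISP: 1,041,888 left
    within_current_slash3 = after_multihoming // 8  # only 1/8 of the space (the current global unicast /3): 130,236
    print(after_one_each, after_multihoming, within_current_slash3)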

> Granted, this is just a silly example, and I don't have to divide my
> address space like this. In fact, I really can't, unless I want to have
> more than 32 customers per city. But I don't think it's a very
> far-fetched example.

It’s not… It’s a great example of how not to plan your address space in IPv6.

However, if we repeat the same exercise in the correct direction, not only does each of your end-sites get a /48, but you also get the /20 you need to deploy your network properly. You get lots of space left over, and we still don’t make a dent in the IPv6 free pool. Everyone wins.

> Perhaps I'm missing something obvious here, but it seems to me that it
> would've been nice to have these kinds of possibilities, and more. It
> seems counterintuitive, especially given the "IPv6 way of thinking"
> which is normally encouraged: "stop counting beans, this isn't IPv4".

The problem is that you not only stopped counting beans, you stopped counting bean piles, and you lost track of just how big the pile you are making smaller piles from really is.

I hope that this will show you a better way.

Owen
