On 06/01/11 16:01, John Levine wrote:
>> Still, the idea that "nobody will scan a /64" reminds me of the days
>> when 640K ought to be enough for anybody, ...
>
> We really need to wrap our heads around the orders of magnitude
> involved here. If you could scan an address every nanosecond, which I
> think is a reasonable upper bound what with the speed of light and
> all, it would still take 500 years to scan a /64. Enumerating all the
> addresses will never be practical. But there's plenty of damage one
> can do with a much less than thorough enumeration.
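A quick back-of-envelope check of that figure in Python (my arithmetic, not John's; one probe per nanosecond over a full /64):

    # Sanity check: one address probed per nanosecond across a /64.
    addresses = 2 ** 64
    probes_per_second = 10 ** 9               # one probe per nanosecond
    seconds = addresses / probes_per_second   # ~1.8e10 seconds
    years = seconds / (365.25 * 24 * 3600)
    print(round(years))                       # ~585 -- same order as "500 years"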
I'm probably ruining an interview question from $COMPANYTHATDIDN'THIREME, but think of a 64-bit counter: *even if* you could iterate through 32 bits every second[1], it would still take ~136 years to count all the way through 64 bits. I don't know about you, but that doesn't worry me; at that point it's a straight bandwidth DoS.

What makes much more sense is mapping the first /112 or so of a subnet and the last /112 or so, which will catch most statically addressed hosts and routers. Then, if you really want, just iterate through the 2^46 valid assigned MACs[2], or much less if you make some assumptions about which OUIs are likely to exist on the subnet[3] (rough sketch in the P.S. below).

Julien

1: i.e., think of a ~4.3 GHz CPU that can do "i++ and jump to 0" in a single instruction
2: One bit lost for the group (broadcast/multicast) bit, one for the universal/local bit
3: Skipping all unassigned OUIs is the obvious step, but there's also a huge number that will only match systems you'll never care about; 2^36 is probably not far off.
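P.S. For the curious, footnote 3's idea amounts to generating the modified-EUI-64 interface IDs for every MAC under a handful of likely OUIs and probing only those. A minimal Python sketch, assuming SLAAC-style addressing; the prefix (documentation space) and OUI here are purely illustrative:

    import ipaddress

    def eui64_guesses(prefix, oui, low24_range):
        """Yield IPv6 addresses in `prefix` whose interface IDs are the
        modified EUI-64 encoding of MACs under one 24-bit OUI."""
        net = ipaddress.IPv6Network(prefix)
        for nic in low24_range:                  # low 24 bits of the MAC
            mac = (oui << 24) | nic
            b = mac.to_bytes(6, "big")
            # Modified EUI-64: flip the universal/local bit, splice in ff:fe.
            iid = bytes([b[0] ^ 0x02]) + b[1:3] + b"\xff\xfe" + b[3:6]
            yield net[int.from_bytes(iid, "big")]

    # Illustrative only: an arbitrary OUI on the documentation prefix.
    for addr in eui64_guesses("2001:db8::/64", 0x001422, range(4)):
        print(addr)

Each OUI costs you 2^24 guesses, so a dozen plausible vendors is ~2^28 addresses, which is entirely scannable.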