[ I'm also posting a separate copy to dns-operati...@dns-oarc.net ]

In light of the observations in:

  https://tools.ietf.org/html/draft-york-dnsop-deploying-dnssec-crypto-algs-05#section-2.3.1

I thought it would be useful to take another look at current practice.
To that end, I gathered responses to NSEC3PARAM queries from the 5.147
million DNSSEC-signed domains I'm tracking as part of the ongoing DANE
SMTP survey.  This covers all the DNSSEC-signed domains from zones for
which I have full zone data (.com, .net, .org, .se, .nu and a number of
the more recent gTLDs for which CZDS provides access).  Coverage is
incomplete mainly for various ccTLDs (.de, .nl, .ru, .uk, ...), where
I've managed to collect around 60% of the domains from other sources.
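
(For those curious about the mechanics, each lookup amounts to roughly
the following.  This is just an illustrative sketch using Python with
dnspython 2.x, not the actual survey tooling, and the error handling is
simplified.)

  import dns.exception
  import dns.resolver

  def nsec3param_iterations(domain):
      # Returns a list of iteration counts (one per NSEC3PARAM record),
      # [] on NODATA (the zone presumably uses NSEC rather than NSEC3),
      # or None when the lookup fails (ServFail, timeout, ...).
      try:
          answer = dns.resolver.resolve(domain, "NSEC3PARAM")
      except dns.resolver.NoAnswer:
          return []
      except (dns.resolver.NoNameservers, dns.exception.Timeout):
          return None
      return [rr.iterations for rr in answer]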

The overall sample characteristics are:

  5126588    successful NSEC3PARAM lookups
    20820    failed lookups (ServFail or timeout)
  3598119    NSEC3PARAM RRsets with 1 record
        6    NSEC3PARAM RRsets with 2 records (two salts)
  1523390    NODATA (presumably domain uses NSEC rather than NSEC3)

The distribution of iteration counts below (rounded up to the nearest
multiple of 10 for values between 20 and 2500; see the sketch after the
table) is largely concentrated at 1, 5, 8, 10, 20, 40 and 100.  Values
<= 150 should work for all key sizes with a correctly configured
resolver, so the vast majority of domains have no trouble with secure
denial of existence:

#domains iterations
-------- ----------
    115      0
1501953      1
   3513      2
     92      3
     10      4
  58907      5
     21      6
    322      7
 941391      8
     17      9
 194324     10
      5     11
    229     12
     49     13
      2     14
    599     15
   8778     16
      4     17
      1     19
 317662     20
    167     30
  37138     40
   3834     50
      3     60
     13     70
     28     80
      4     90
 528157    100
     32    130
      1    140
    307    150
      3    200
      1    240
     49    250
      1    260
    261    300
     70    330
      3    400
      1    430
     24    500
     24   1600
     12   2500
      1   4096
      1  16384
      2  65535
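
(In case the rounding above isn't clear, the bucketing is just a ceiling
to the next multiple of 10 for the mid-range values, i.e. in Python:)

  def bucket(iterations):
      # Values between 20 and 2500 are rounded up to the nearest
      # multiple of 10; everything else is reported as-is.
      if 20 <= iterations <= 2500:
          return ((iterations + 9) // 10) * 10
      return iterations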

Of the 453 domains with iteration counts above 150 only 4 have counts
in excess of 2500, which are unsupported by many resolvers with the
default RFC5155 iteration count limits.  The remaining "interesting"
domains are the 449 with iterations in the interval [151,2500].

Of these:

 * 258 have 512-bit P256 (algorithm 13) keys and 300 iterations.  This
   exceeds the RFC5155 iteration limits and breaks secure DoE for many
   resolvers.  All these domains are hosted at "ns1.desec.io".

 * 1 has both a 512-bit P256 key and a 1024-bit RSA key, with 250
   iterations, exceeding the limit for either key size.

 * 1 has a 1024-bit RSA key and 300 iterations.

 * 7 have 768-bit P384 (algorithm 14) keys and 500 iterations.

 * 2 have P384 keys and 200 iterations.
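
(The cutoffs applied above are the RFC5155, Section 10.3 limits, with
the key size taken as the size of the published public key; as a rough
Python sketch, for illustration:)

  # RFC5155, Section 10.3: iteration limits by (largest) zone key size;
  # resolvers may treat responses with higher counts as insecure.
  RFC5155_LIMITS = ((1024, 150), (2048, 500), (4096, 2500))

  def max_iterations(key_bits):
      for size, limit in RFC5155_LIMITS:
          if key_bits <= size:
              return limit
      return 2500  # RFC5155 defines no separate limit for larger keys

  def exceeds_limit(key_bits, iterations):
      return iterations > max_iterations(key_bits)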

So, in all, 273 domains (the 269 listed above plus the 4 with counts
above 2500) are misconfigured with counter-productively high iteration
counts.  The problem described in the draft thus exists in the wild,
but is, for the moment at least, quite infrequent.  The vast majority
of domains use sensibly low counts (with 1 being the most popular value;
frankly, 0 would have done just as well, but is perhaps not as well
understood).

With a bit of luck, better documentation and tools that warn users not
to exceed 150 iterations (regardless of key size) will keep the problem
largely in check.

I still think there's a lesson here for protocol design, quoting the draft:

  A simple design would have constrained the iteration count only by
  the bit width of the iteration count field (perhaps 12 bits for up
  to 4096 iterations), with all representable values supported by both
  signers and resolvers.

At this time, I would say that just 7 bits for the iteration count
would have been plenty.  Few users want to hide their zone contents
from off-line dictionary attacks so badly that they are willing to pay
the cost of more than 100 iterations, and most are happy with 1, 10 or 20.

So an update to RFC5155 that sets a flat iteration limit of 127 and
reserves the leading 9 bits of the iteration count would IMHO be a
good idea.
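
(Concretely, with the existing 16-bit field, a conforming value would
simply need its leading 9 bits clear; a trivial sketch of the check:)

  def iterations_valid(iterations):
      # 16-bit field with the leading 9 bits reserved (must be zero),
      # leaving 7 usable bits, i.e. at most 127 iterations.
      return (iterations & ~0x7F) == 0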

In any case, an integral protocol field for which only a subset of the
values is supported, with the supported subset depending on other
parameters, is a design feature that should be avoided.

-- 
        Viktor.
