On Fri, 26 Feb 2010, Thierry Moreau wrote:
Cryptanalysis is a function of time (and money). If you reduce the usable
time for attackers, their spending goes up, or they will not have enough
time to break the key before it is retired. The recommended key size and
lifetime in the document reflect current cryptographers' extremely
conservative estimates of what is deemed safe, by a few orders of magnitude.
Keep in mind that some recommendations (I believe NIST's is one) do not
differentiate between key usages when they recommend key sizes. For
encryption, there is a much stronger forward secrecy requirement. For
signing keys, there is none: one could publish the private key one TTL
after rollover.
Either of two things:
(A) Some threat/vulnerability analysis is assumed behind a DNSOP-type
recommendation, and then *I* claim that a 1024-bit RSA modulus cycled every
month is really overkill. (I have unpublished written material supporting
this view.)
Cryptographic overkill is not very harmful as long as the responses still
mostly fit snugly in most UDP packets.
- or -
(B) Some authority (you referred to NIST, which seems to refer to
academic-community factorization exploits, wherein the academic community
declares itself unable to speculate about "activities that take or may take
place behind closed doors, such as at government laboratories" [1]) decided
that 1024 was sufficient. There is no basis other than blind faith in the
authority's reputation for selecting 1024 and not, e.g., 3600 for a given
field of use.
It's far from blind faith. The article *you* quote is pretty clear:
"At this point in time a brute-force attack against 1024-bit RSA
would require about two years on a few million compute cores with
many tens of gigabytes of memory per processor or mainboard."
"an 'open community' effort of a 1024-bit RSA factorization is currently
out of reach"
"factoring breakthroughs have not occurred for several decades, and
polynomial time factoring on an actual quantum computer still seems to
be several decades away"
It concludes with:
"an open community effort that would factor a 1024-bit RSA modulus cannot
be expected by the year 2015"
You might be right that the NSA worries about the safety margin of 1024-bit
RSA based on information unknown to us, and that is why NIST recommends
phasing out 1024-bit RSA. But it is a giant leap from
"two years on a few million cores" to "one month".
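To put that leap in numbers, here is a back-of-the-envelope sketch. The "3 million cores" figure is illustrative, standing in for the article's "a few million"; it is not from the article itself.

```python
# Scaling the quoted estimate: if factoring a 1024-bit RSA modulus takes
# ~2 years on a few million compute cores, how many cores would an
# attacker need to finish before a one-month rollover retires the key?
# (cores = 3 million is an illustrative stand-in for "a few million".)

cores = 3_000_000                  # illustrative core count
years = 2                          # quoted estimate: ~2 years
core_months = cores * years * 12   # total work, in core-months

# All of that work must now fit into a single month:
cores_needed_for_one_month = core_months
print(cores_needed_for_one_month)  # 24x the already-enormous effort
```

Whatever the exact core count, compressing a two-year computation into one month multiplies the required hardware by 24, on top of an effort the article already calls out of reach for the open community.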
Basically, you adhere to (B) and suggest a 1024-bit/1-month cryptoperiod,
hence you inflate the requirements over NIST's.
I am not inflating NIST's requirements. I believe 1024-bit RSA with monthly
rollover is fine, whereas NIST recommends migrating to 2048 bits for that
purpose.
Thus, in *my* opinion, you induce a waste of DNSSEC bandwidth, CPU time,
and DNS operational overhead (i.e., rollover management).
Have you crunched the numbers and packet sizes of 768-bit vs. 1024-bit vs.
2048-bit RSA RRSIGs in common DNS answers? I believe the consensus reached
was that the difference between 1024 and 2048 bits had a significant impact,
whereas the difference between 1024 and 768 bits did not. It thus made sense
to play it as safe as possible within the constraints of DNS, and 1024 bits
with a one-month rollover was recommended. It was a combined effort of
cryptanalysts and network engineers.
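The raw signature sizes alone sketch out why. This is a rough estimate only: a real RRSIG also carries the signer's name, and the answer carries the covered RRset, so actual packets are larger than shown here.

```python
# Rough RRSIG size comparison for RSA key sizes (a sketch; real RRSIGs
# also include the signer's name, and answers include the covered RRset).
RRSIG_FIXED_RDATA = 18   # type covered .. key tag fields, excluding names
UDP_CLASSIC_LIMIT = 512  # pre-EDNS0 DNS/UDP payload limit, in bytes

sizes = {}
for bits in (768, 1024, 2048):
    sig = bits // 8      # an RSA signature is as long as the modulus
    sizes[bits] = RRSIG_FIXED_RDATA + sig
    print(f"{bits}-bit RSA: {sig}-byte signature, "
          f"~{sizes[bits]}-byte RRSIG RDATA")

# Going 1024 -> 2048 adds 128 bytes *per signature*; with several RRSIGs
# in one answer, that quickly presses against the 512-byte UDP limit,
# while 1024 -> 768 saves only 32 bytes per signature.
print("delta 1024->2048:", sizes[2048] - sizes[1024], "bytes per RRSIG")
print("delta 1024->768: ", sizes[1024] - sizes[768], "bytes per RRSIG")
```

An RSA signature is exactly as long as the modulus, so halving or doubling the key size moves every signature in the answer by that amount, which is why the 2048-bit step matters and the 768-bit step barely does.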
Paul
_______________________________________________
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop