Eric,

One of the reasons we published RFC 8624 was to offer usage recommendations,
and especially this table:

https://tools.ietf.org/html/rfc8624#page-5

I believe one of the authors mentioned earlier that they are looking to do a
-bis update to revise this table.

(But I hear the rest of the message clearly.)
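
For illustration only, here is a minimal sketch (in Python) of the kind of
per-algorithm lookup that table implies. The algorithm numbers are the real
IANA DNSSEC algorithm codes, but the recommendation levels in the dictionary
are just an assumed example, not a copy of the normative RFC 8624 table.

    # Hypothetical sketch: map IANA DNSSEC algorithm numbers to a
    # recommendation level, roughly in the spirit of the RFC 8624 table
    # (the levels shown here are assumed, not normative).
    SIGNING_RECOMMENDATION = {
        8:  "MUST",         # RSASHA256
        13: "MUST",         # ECDSAP256SHA256
        15: "RECOMMENDED",  # ED25519
    }

    def signing_policy(algorithm_number: int) -> str:
        """Return the assumed recommendation for signing with this algorithm."""
        # Anything not explicitly listed falls back to a cautious default.
        return SIGNING_RECOMMENDATION.get(algorithm_number, "NOT RECOMMENDED")

    if __name__ == "__main__":
        # 12 is ECC-GOST (RFC 5933), the algorithm under discussion here.
        for alg in (8, 12, 13, 15):
            print(alg, signing_policy(alg))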

tim



On Thu, Jun 18, 2020 at 9:42 AM Eric Rescorla <e...@rtfm.com> wrote:

>
>
> On Wed, Jun 17, 2020 at 8:51 PM Martin Thomson <m...@lowentropy.net> wrote:
>
>> I agree with Olafur on this.  The reason we standardize is so that we can
>> have a single, ideally very small, set of algorithms that are widely
>> implemented, because you want every one of those algorithms in every
>> implementation.
>>
>> In a system like the DNS, you can't really limit the people who might
>> need to consume your signature, so the set of acceptable signing algorithms
>> needs to be small.  Ideally you have just two: one that is established, and
>> one that is new; or one using one technique and a backup using a different
>> technique.
>>
>> TLS has mostly gotten this part right.  We're much closer to the point of
>> having just two in TLS 1.3.  There are a few algorithms that exist to
>> address narrow application domains (IoT, *cough*), but at least you can
>> make a case for TLS deployments in a closed environment.  For that case,
>> TLS allows for codepoint allocation, but avoids an IETF recommendation for
>> those algorithms.  I don't think that DNS needs that same capability;
>> deciding based on whether algorithms are good for the global system is the only
>> relevant criterion.
>>
>> If we all agree that GOST is superior to RSA (it probably is) and EdDSA
>> (I doubt it, but I don't have an opinion), then adopting it to replace an
>> existing algorithm would be fine.  That didn't happen last time, which
>> suggests it would be better for RFC 5933 to be deprecated entirely.
>>
>
> I largely concur with MT and Olafur.
>
> At a high level, additional algorithms create additional complexity
> and interoperability issues. In some cases there are good reasons for
> that (for instance, one algorithm is much faster than another), but
> that does not appear to be the case here. In the past we were often
> quite liberal about standardizing national algorithms, but I believe
> this was a mistake that created confusion about which algorithms the
> IETF was actually encouraging people to use. In addition to the
> factors that Martin notes, it created pressure on implementations to
> add those algorithms.
>
> I don't see any good argument for the IETF recommending that people
> adopt this algorithm. It does not seem to be clearly superior to
> EdDSA in any way, and recommending it would open the door to our
> recommending a proliferation of other national algorithms that also
> don't seem to have any real technical advantages. As MT says, the
> argument for assigning a code point while not recommending the
> algorithm is weaker here, because you want DNSSEC-signed data to be
> universally verifiable.
>
> My preference would be to not publish this at all, but if it is to
> be published, do so in a way that makes clear that the IETF is just
> allocating the code point and does not recommend it.
>
> -Ekr
>