David Benjamin writes:
> No more heavily parameterized algorithms. Please precompose them.

https://cr.yp.to/papers.html#coolnacl explains advantages of providing
precomposed parameter-free bundles to the application. The current
discussions are about specific proposals for such bundles (or at least
KEM bundles to wrap inside bigger bundles such as HTTPS). I don't see
anyone claiming that precomposition has disadvantages.

However, it's important to keep in mind that proposed bundles are
_internally_ choosing parameters for more general algorithms. This
generality can be tremendously helpful for implementation, testing,
security review, and verification. Judging which parameters are useful
_inside_ designs is an engineering question, and shouldn't be conflated
with the question of what's useful to expose to applications.

Consider, e.g., modular inversion, with the modulus as a parameter.
There's now fully verified fast constant-time software for this---quite
a change from, e.g.,

   https://www.forbes.com/sites/daveywinder/2019/06/12/warning-windows-10-crypto-vulnerability-outed-by-google-researcher-before-microsoft-can-fix-it/

---and almost all of the effort that went into this was shared across
moduli. For applications that use prime moduli and care more about code
conciseness than speed, Fermat inversion is better, and again the tools
are shared across moduli.
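
For prime moduli, the Fermat approach fits in a few lines. A toy Python
sketch (variable-time, illustration only), with the modulus as an
explicit parameter:

```python
def fermat_invert(x, p):
    # Fermat's little theorem: x^(p-2) * x = x^(p-1) = 1 (mod p)
    # for prime p and x not divisible by p.
    return pow(x, p - 2, p)

# The same tiny routine, and its tests, are shared across prime moduli:
p25519 = 2**255 - 19
assert (fermat_invert(3, p25519) * 3) % p25519 == 1
assert (fermat_invert(7, 101) * 7) % 101 == 1
```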

Applications using X25519 should simply use the DH functions
without caring what happens at lower layers, and inside the DH functions
I think it's good to have an invert-mod-2^255-19 layer to abstract away
the choice of modular-inversion algorithm---but in the end the algorithm
is there, and the modulus parameter is clearly useful. Hiding it from
the application doesn't mean eliminating it!
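
A sketch of that layering (names hypothetical; Python's built-in
arithmetic is standing in for real constant-time field code):

```python
P25519 = 2**255 - 19  # the prime underlying X25519

def invert_mod(x, m):
    # General modular inversion, parameterized by the modulus m.
    return pow(x, -1, m)  # Python 3.8+; raises ValueError if no inverse

def invert_25519(x):
    # Field-specific layer: the modulus is chosen here, once, so the
    # DH code above this layer never sees a modulus parameter.
    return invert_mod(x, P25519)

assert (invert_25519(9) * 9) % P25519 == 1
```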

Should there be further parameters to allow not just inversion mod p or
inversion mod m but, e.g., inversion mod f(x) mod p? That's many more
parameters for the degree and coefficients of the polynomial f, but this
enables support for many post-quantum algorithms. One sees different
decisions being made as to how much of this generality is useful:

   * Inversion libraries typically cover only one case or the other (or
     are even more specialized, at some expense in overall code size;
     see the examples in https://cr.yp.to/papers.html#pqcomplexity).

   * https://cr.yp.to/papers.html#safegcd covers the integer and
     polynomial cases, while commenting on the analogies.

   * Number theorists talking about "local fields" are using an
     abstraction that covers both cases simultaneously.
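
For concreteness, a toy Python sketch of inversion mod f(x) mod p via
extended Euclid in GF(p)[x], parameterized by both p and the
coefficient list of f (variable-time, illustration only):

```python
from itertools import zip_longest

# Polynomials over GF(p) as coefficient lists, lowest degree first;
# [] is the zero polynomial.

def poly_trim(a):
    while a and a[-1] == 0:
        a.pop()
    return a

def poly_mul(a, b, p):
    r = [0] * max(len(a) + len(b) - 1, 0)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] = (r[i + j] + ai * bj) % p
    return poly_trim(r)

def poly_divmod(a, b, p):
    a, q = a[:], [0] * max(len(a) - len(b) + 1, 0)
    inv_lead = pow(b[-1], -1, p)
    for i in range(len(a) - len(b), -1, -1):
        c = (a[i + len(b) - 1] * inv_lead) % p
        q[i] = c
        for j, bj in enumerate(b):
            a[i + j] = (a[i + j] - c * bj) % p
    return q, poly_trim(a)

def poly_invert(a, f, p):
    # Extended Euclid: maintain r == s*f + u*a; stop when r1 is zero.
    r0, u0 = f[:], []
    r1, u1 = a[:], [1]
    while r1:
        q, r = poly_divmod(r0, r1, p)
        qu1 = poly_mul(q, u1, p)
        u_new = poly_trim([(x - y) % p
                           for x, y in zip_longest(u0, qu1, fillvalue=0)])
        r0, u0, r1, u1 = r1, u1, r, u_new
    c = pow(r0[0], -1, p)  # gcd must be a nonzero constant
    return [(ci * c) % p for ci in u0]

# Inverse of x in GF(3)[x]/(x^2 + 1): x * 2x = 2x^2 = -2 = 1 (mod 3).
f, p = [1, 0, 1], 3
inv = poly_invert([0, 1], f, p)
assert inv == [0, 2]
assert poly_divmod(poly_mul([0, 1], inv, p), f, p)[1] == [1]
```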

The benefits of generality need to be weighed against the costs. I don't
think most readers of https://cr.yp.to/papers.html#safegcd would have
appreciated having the paper phrased in terms of arbitrary local fields.

When I read "no more heavily parameterized algorithms", I see "no more"
as saying something absolute but "heavily" as missing quantification, so
I don't know how to evaluate what this means for any concrete example.
Meanwhile this is understating the application goal of _zero_ parameters
("I just want a secure connection, don't ask me to make choices").

> Once you precompose them, you may as well take advantage of properties
> of the inputs and optimize things.

Wearing my implementor's hat, I partially agree. Knowing the context often
enables speedups that aren't available in more generality, as long as
the implementation isn't factored into context-independent pieces.

However, at the design stage, speedups have to be weighed against other
considerations, such as overloading security reviewers and introducing
unnecessary traps into the ecosystem.

The particular speedup we're talking about here is eliminating hashing a
kilobyte or two of data. In context, this speedup is negligible: about
1% of the cost of communicating that data in the first place, never mind
other application costs.

There's a 20-page paper saying that this tiny speedup is safe _for
Kyber_. Why exactly do we want security reviewers spending time on this?
And why should this be sitting around in the ecosystem, given the clear
risk of someone plugging in something other than Kyber?

These two issues are cleanly separated for sntrup761: it's easy to check
that sntrup761 internally hashes public keys and ciphertexts, but this
doesn't answer the question of what happens if someone plugs in, say,
mceliece6688128. Meanwhile the argument that the speedup is negligible
applies equally to whichever KEM people plug in: the cost of hashing the
public key is negligible next to the cost of communicating the key; same
for ciphertexts.
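
For scale, a toy sketch; SHA-256 and the 1184-byte Kyber-768 public-key
size are used purely as illustrative numbers:

```python
import hashlib

PK_BYTES = 1184       # Kyber-768 public-key size, for concreteness
pk = bytes(PK_BYTES)  # placeholder key material

# One short hash call over ~1KB; typical SHA-256 implementations run
# at on the order of GB/s, i.e., roughly a microsecond here, dwarfed
# by the cost of transmitting those same 1184 bytes in the first place.
digest = hashlib.sha256(pk).digest()
assert len(digest) == 32
```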

---D. J. Bernstein

_______________________________________________
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls
