On Fri, Mar 18, 2016 at 1:57 AM, Peter Gutmann
<pgut...@cs.auckland.ac.nz> wrote:
> Watson Ladd <watsonbl...@gmail.com> writes:
>
>>As written supporting this draft requires adopting the encrypt-then-MAC
>>extension. But there already is a widely implemented secure way to use MACs
>>in TLS: AES-GCM.
>
> This is there as an option if you want it.  Since it offers no length hiding,
> it's completely unacceptable to some users, for example one protocol uses TLS
> to communicate monitoring commands to remote gear, they're very short and
> fixed-length, different for each command, so if you use GCM you may as well be
> sending plaintext.  In addition GCM is incredibly brittle, get the IV handling
> wrong and you get a complete, catastrophic loss of both integrity and
> confidentiality.  The worst that happens with CBC, even with a complete abuse
> like using an all-zero IV, is that you drop back to ECB mode.
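To make that brittleness concrete: GCM encrypts with CTR mode, so a repeated nonce reuses keystream. A minimal sketch of the failure (a SHA-256-based toy keystream stands in for AES-CTR here; this is an illustration of the CTR property, not real GCM):

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy CTR keystream -- SHA-256 stands in for AES, illustration only."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def ctr_encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    ks = keystream(key, nonce, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

key = b"sixteen byte key"
nonce = b"repeated-nonce!!"          # the fatal mistake: same nonce twice
pt1 = b"SET VALVE=OPEN  "
pt2 = b"SET VALVE=CLOSED"
ct1 = ctr_encrypt(key, nonce, pt1)
ct2 = ctr_encrypt(key, nonce, pt2)

# The keystreams cancel: ct1 XOR ct2 == pt1 XOR pt2, so knowing either
# plaintext (or guessing a fixed-format command) reveals the other outright.
xor = bytes(a ^ b for a, b in zip(ct1, ct2))
assert xor == bytes(a ^ b for a, b in zip(pt1, pt2))
```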

Then use a padding extension that solves all these problems, instead of
relying on a side effect of CBC mode. Why do we want this to look
different from TLS, instead of being a subset of widely deployed
mechanisms a la UTA?
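The idea is simple enough to sketch: pad every record up to a fixed-size bucket before AEAD encryption, with a delimiter byte so padding strips unambiguously. (Simplified from TLS 1.3's zero-padding-after-content-type scheme; the bucket size and 0x01 delimiter are arbitrary choices for illustration.)

```python
def pad_record(content: bytes, bucket: int = 256) -> bytes:
    """Pad to the next multiple of `bucket`: content, 0x01 delimiter, zeros."""
    padded_len = -(-(len(content) + 1) // bucket) * bucket  # ceiling division
    return content + b"\x01" + b"\x00" * (padded_len - len(content) - 1)

def unpad_record(padded: bytes) -> bytes:
    """Strip trailing zeros, then the delimiter; content is recovered exactly."""
    stripped = padded.rstrip(b"\x00")
    assert stripped.endswith(b"\x01"), "malformed padding"
    return stripped[:-1]

# Two commands of different lengths encrypt to identical on-the-wire sizes,
# so ciphertext length no longer identifies the command.
assert len(pad_record(b"SET VALVE=OPEN")) == len(pad_record(b"SET VALVE=CLOSED"))
assert unpad_record(pad_record(b"SET VALVE=OPEN")) == b"SET VALVE=OPEN"
```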

Furthermore, GCM only requires keeping a counter for each record, which
you have to do anyway. If you want nonce-misuse resistance on top of
that, specify GCM-SIV.
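A counter-based nonce of the kind TLS 1.3 itself uses is a one-liner: the per-record nonce is the record sequence number XORed into a static IV from the key schedule. (Sketch; the static IV here is random rather than key-schedule-derived.)

```python
import os

IV_LEN = 12  # 96-bit GCM nonce

def per_record_nonce(static_iv: bytes, seq: int) -> bytes:
    """TLS 1.3-style nonce: XOR the record sequence number into a static IV."""
    seq_bytes = seq.to_bytes(IV_LEN, "big")
    return bytes(i ^ s for i, s in zip(static_iv, seq_bytes))

static_iv = os.urandom(IV_LEN)  # derived from the key schedule in real TLS

# Distinct sequence numbers guarantee distinct nonces -- no RNG in the hot
# path, no chance of repeating one, as long as the sequence never wraps.
nonces = {per_record_nonce(static_iv, seq) for seq in range(1000)}
assert len(nonces) == 1000
```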
>
>>Likewise, this draft modifies the way the master secret is computed, despite
>>a widely implemented different solution to the problem, namely the EMS triple
>>handshake fix.
>
> Firstly, that solves an entirely different problem, and secondly I don't
> recall ever seeing EMS support in any embedded device, it may be widely
> implemented in Windows and OpenSSL but I don't know how much further it goes.

What is your master secret change solving? I don't see how EMS solves
an entirely different problem, but maybe that's because I don't
understand what your change is meant to solve.
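For reference, the RFC 7627 fix binds the master secret to a hash of the full handshake transcript rather than just the two randoms. A minimal sketch of the computation using the TLS 1.2 PRF from RFC 5246 (the session-hash input below is a placeholder for the real concatenated handshake messages):

```python
import hashlib, hmac, os

def p_sha256(secret: bytes, seed: bytes, length: int) -> bytes:
    """TLS 1.2 P_SHA256 expansion (RFC 5246, section 5)."""
    out, a = b"", seed
    while len(out) < length:
        a = hmac.new(secret, a, hashlib.sha256).digest()
        out += hmac.new(secret, a + seed, hashlib.sha256).digest()
    return out[:length]

def prf(secret: bytes, label: bytes, seed: bytes, length: int) -> bytes:
    return p_sha256(secret, label + seed, length)

pre_master_secret = os.urandom(48)
# RFC 7627: seed the PRF with a hash of all handshake messages so far,
# so a master secret cannot be synchronized across two different handshakes.
session_hash = hashlib.sha256(b"...all handshake messages so far...").digest()
master_secret = prf(pre_master_secret, b"extended master secret",
                    session_hash, 48)
assert len(master_secret) == 48
```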

>
>>The use of uncompressed points makes off-curve attacks much easier than with
>>compressed points.
>
> Everything uses uncompressed points at the moment without any problems, and
> compressed points are patented.

At RWC 2016 one of the presentations showed connections to a server
being decrypted by exploiting off-curve points. Your draft claims that
verifying signatures before sending addresses an ECC security threat.
I don't see what threat that addresses, while a much worse one,
off-curve point attacks, is left unaddressed. We should mandate no
reuse of ephemerals to help with that.

>
>>The analysis of TLS 1.3 is just wrong. TLS 1.3 has been far more extensively
>>analyzed than TLS 1.2.
>
> As the rationale points out, the mechanisms in SSL were also very heavily
> analysed when they were released.  It didn't protect the protocol from 20
> years of subsequent attacks, which we've learned about over those 20 years of
> implementation and deployment experience.  With TLS 1.3 we have zero
> implementation and deployment experience.  Do you really believe there will
> never be any attacks on it after it's rolled out?

Deployment does not affect a protocol's amenability to analysis, or
the degree of analysis it should receive. The actual history of TLS
1.2 and earlier shows that many of the vaunted "attacks" were old
publications being dusted off, thanks to the anemic response of the
security industry, and that an actual body of knowledge anticipated
them.

The use of predictable IVs in TLS 1.0 was first commented on by
Rogaway in 1995. (I'm still hunting down the source, but this is from
a presentation by Paterson.) That should have resulted in an immediate
protocol change, but didn't. Did it really take 20 years to realize
this was a problem? Or was it 20 years of ignoring the problem until
exploitation?
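To see why predictable IVs are exploitable: in CBC the next record's IV is the last ciphertext block, so an attacker who can inject chosen plaintext can confirm guesses about an earlier secret block. A minimal sketch (SHA-256 stands in for the block cipher, an assumption that works here because the demo only needs the forward direction; the key, blocks, and secret are all hypothetical):

```python
import hashlib

BLOCK = 16

def E(key: bytes, block: bytes) -> bytes:
    """Toy one-way 'block cipher' -- illustration only."""
    return hashlib.sha256(key + block).digest()[:BLOCK]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"0123456789abcdef"
prev_ct = b"c" * BLOCK                    # ciphertext block before the secret
secret = b"PIN=4242;pad=pad"              # the block the attacker wants
target_ct = E(key, xor(secret, prev_ct))  # observed on the wire
next_iv = target_ct                       # CBC: the next IV is predictable

# Attacker injects guess XOR prev_ct XOR next_iv; after IV chaining the
# cipher sees guess XOR prev_ct, so the output equals target_ct iff the
# guess equals the secret -- a per-block guess-confirmation oracle.
guess = b"PIN=4242;pad=pad"
probe_ct = E(key, xor(xor(guess, xor(prev_ct, next_iv)), next_iv))
assert probe_ct == target_ct
```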

In 1998 Bleichenbacher showed that PKCS #1 v1.5 encryption is
dangerous. TLS could have moved to much simpler, better designs which
hash the ciphertext and plaintext together, but didn't. Today we still
haven't killed a nearly 20-year-old attack.
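One such design is a KEM-style use of RSA: encrypt a random seed with no padding at all and derive the symmetric key by hashing the seed together with its ciphertext, so there is no padding format for a server to leak an oracle about. A toy sketch with tiny hardcoded primes (wildly insecure parameters, illustration only; the exact key derivation is my reading of "hash the ciphertext and plaintext together"):

```python
import hashlib, random

# Toy RSA parameters -- real use needs ~2048-bit primes.
p, q = 101, 103
n, e = p * q, 7
d = pow(e, -1, (p - 1) * (q - 1))   # modular inverse (Python 3.8+)

def encapsulate():
    r = random.randrange(2, n)        # random seed: the only "plaintext"
    c = pow(r, e, n)                  # bare RSA: no padding to mis-parse
    k = hashlib.sha256(f"{r}|{c}".encode()).digest()  # key binds seed + ct
    return c, k

def decapsulate(c: int) -> bytes:
    r = pow(c, d, n)
    return hashlib.sha256(f"{r}|{c}".encode()).digest()

c, k_sender = encapsulate()
assert decapsulate(c) == k_sender     # both sides derive the same key
```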

That TLS doesn't sign enough of the handshake when using DH was an
observation made in 2004. Sadly I don't recall who made it. It wasn't
fixed over two revisions, and it culminated in Logjam. Did this
require deployment to be observed?

Lastly, it's not the primitives but the protocol that is the proper
unit of analysis. With TLS 1.3 we have a symbolic model and a
considerable amount of examination of the cryptographic security. That
should give far more confidence than 20 years of being run: computers
don't analyse the protocols they run.

Sincerely,
Watson
>
> Peter.



-- 
"Man is born free, but everywhere he is in chains".
--Rousseau.

_______________________________________________
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls
