On Tue, Jan 12, 2016 at 5:12 PM, Peter Gutmann <pgut...@cs.auckland.ac.nz> wrote:
> Yoav Nir <ynir.i...@gmail.com> writes:
>
>>Ignoring for a moment the merits of this proposal vs the TLS 1.3 (or 2.0)
>>that this WG is working on right now, why?
>
> Embedded devices and similar systems with long-term requirements. Most of my
> user base is embedded (or non-embedded equivalents, systems that need to run
> in a fixed configuration for a very long time after they're deployed). As
> I've mentioned in an earlier post, the median point on the bell curve is
> probably around TLS 1.0/1.1. These are systems with an expected lifetime of
> 10-20 years or more, deployment of new versions moves slowly and carefully so
> the less radical changes you need to make the better, and most importantly you
> can't roll out patches every month or two when the next attack on TLS is
> published.
So for your proposal to solve this problem, it needs to be more likely to be
secure than TLS 1.3. Of course, a slow rollout leaves time to thoroughly test
massive changes, while large numbers of small changes will not be a good idea.

> To expand on this, I'll take Ilari Liusvaara's comments:
>
>>Bleeding edge ideas? They essentially re-invented SIGMA, which is over 10
>>years old. The basic framework for doing 0-RTT is the obvious one. The only
>>new algorithm present since TLS 1.2 is HKDF, which is just 5 years old.
>>
>>So I don't see any "experimental" ideas, mechanisms or algorithms in
>>there
>
> When SSLv3 was introduced, it also used ideas that were 10-20 years old (DH,
> RSA, DES, etc, only SHA-1 was relatively new). They were mature algorithms,
> lots of research had been published on them, and yet we're still fixing issues
> with them 20 years later (DH = 1976, SSLv3 = 1996, Logjam = 2015).

We all understand that the security of a protocol is a function not of the
primitives but of the way the protocol works. The confusion between export and
non-export DH shares was noted almost immediately in SSLv3. Furthermore,
512-bit DH is weak: I don't know how this counts as a discovery in 2015, given
that the reasons for it were all worked out in the early 90's. So no, Logjam is
not a result of unknown issues appearing after 20 years, but of ignoring known
issues.

> TLS 2.0-called-1.3 will roll back the 20 years of experience we have with all
> the things that can go wrong and start again from scratch. SIGMA, at ten
> years old, is a relative newcomer to DH's 20 years when it was used in SSLv3,
> but in either case we didn't discover all the problems with it until after the
> protocol that used it was rolled out. We currently have zero implementation
> and deployment experience with 2.0-called-1.3 [0], which means we're likely to
> have another 10-20 years of patching holes ahead of us. This is what I meant
> by "experimental, bleeding-edge".

There is an old joke about the resume with one year's experience repeated 20
times. All of the problems in TLS have been known for decades, as I've
repeatedly demonstrated on this list. All of them were known to cryptographers
at the time TLS was being designed and deployed. It does not take deployment to
trigger analysis.

> What TLS 1.3-which-is-1.3 should be is an LTS version that's essentially
> what's already out there with the bugs fixed, and where you've got a pretty
> good chance that you won't be rolling out hotfixes every other month to patch
> the newly-discovered-vulnerability of the month (which, in the case of most
> things that aren't web browsers and servers, is more or less impossible to
> do).

TLS 1.3 is being designed in cooperation with people who *actually know
cryptography*, using computer-based models and theorem provers for the entire
protocol. Your proposal isn't; it instead patches a collection of known issues.
But we already did that in UTA, and I don't see why we should believe your
proposal solves the problem better.

> (Which also means that the requirements for it should include explicit "don't
> use MD5, don't use keys < 1024 bits, don't precompute DH and reuse the
> values", all the other obvious-but-apparently-not-obvious-enough stupid that's
> turned up recently).
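No argument there, and those three requirements are at least easy to state as
code. A minimal sketch of the kind of checks being asked for (Python; the
helper names and thresholds are illustrative only, not taken from any real TLS
stack):

import secrets

BANNED_HASHES = {"md2", "md4", "md5"}   # "don't use MD5"
MIN_GROUP_BITS = 1024                   # "don't use keys < 1024 bits"

def check_hash(name):
    # Reject hashes on the banned list outright.
    if name.lower() in BANNED_HASHES:
        raise ValueError("hash %s is not acceptable" % name)

def check_group_size(p):
    # Reject DH groups below the minimum size.
    if p.bit_length() < MIN_GROUP_BITS:
        raise ValueError("%d-bit DH group is below the %d-bit floor"
                         % (p.bit_length(), MIN_GROUP_BITS))

def fresh_dh_keypair(p, g):
    # A new exponent for every handshake: "don't precompute DH and reuse
    # the values". Never cache or reuse x.
    check_group_size(p)
    x = secrets.randbelow(p - 2) + 1    # private exponent in [1, p-2]
    return x, pow(g, x, p)              # (private, public = g^x mod p)

One call to fresh_dh_keypair() per handshake and the reuse problem goes away.
None of this is new, which is rather the point.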
> TLS 2.0-called-1.3 seems to be doing the same thing that HTTPS 2 did,
> targeting the specialised requirements of web servers/browsers and ignoring
> everything else. The HTTPS 2 WG's response to this at the time was "let them
> eat HTTP 1.1", so that you've now got HTTP-for-Google (2.0) and
> HTTP-for-everything-else (1.1). Is the TLS equivalent going to be "let them
> eat TLS 1.1"?

What features in TLS 1.3, other than 0-RTT (which is optional!), make it
unsuitable for embedded devices?

> (General note: That one short post has generated an enormous amount of email
> off-list as well as on, for all those waiting for replies to private mail,
> please be patient, I'm working through it...).
>
> Peter.
>
> [0] This all feels somewhat biblical, "you are 2.0 son of 1.2, but you will be
> known as 1.3".

--
"Man is born free, but everywhere he is in chains". --Rousseau.