On Sun, Jun 11, 2017 at 8:18 AM, Eric Rescorla <e...@rtfm.com> wrote:
> Here's what I propose to do:
>
> - Describe the attacks that Colm described.
>
> - Distinguish between replay and retransmission.
>
> - Mandate (SHOULD-level) that servers do some sort of bounded
>   (at-most-N times) anti-replay mechanism and emphasize that
>   implementations that forbid replays entirely (only allowing
>   retransmission) are superior.
>
> - Describe the stateless mechanism as a recommended behavior but not
>   as a substitute for the other mechanisms. As Martin Thomson has
>   pointed out, it's a nice pre-filter for either of these other
>   mechanisms.
>
> - Clarify the behavior you need to get PFS.
>
> - Require (MUST) that clients only send and servers only accept "safe"
>   requests in 0-RTT, but allow implementations to determine what is
>   safe.
>
> Note: there's been a lot of debate about exactly where this stuff
> should go in the document and how it should be phrased. I think these
> are editorial questions and so largely my discretion.

First of all, thanks for doing this; that all sounds great! The TLS spec
is obviously monumentally important to the internet, and years of hard
work aren't made any easier by late-coming changes, so dealing with them
shouldn't go thankless.

> Here's what I do not intend to do.
>
> - Mandate (MUST-level) any anti-replay mechanism. I do not believe
>   there is any WG consensus for this.

The one case here where I'd really argue for a "MUST" is middle-boxes
like CDNs. The concern I have is that if someone has an application that
uses throttling or is vulnerable to a resource-exhaustion problem and
goes and puts a CDN in front of it, it's not obvious to them that
enabling TLS 1.3 could open them up to a new kind of DoS attack. We've
already seen CDNs enable TLS 1.3 with unintentionally broken 0-RTT
mitigations, so that's clear evidence that the existing guidance isn't
sufficient. I think it would help manage the interoperability risks if
we could point out to their customers that the configuration is
unambiguously broken. Or at least, it helps to flag it as a security
issue, which makes it more likely to get fixed. Absent this, the
operators of "backend" applications would have to live with risk that is
created by the upstream CDN providers for their own convenience. That
seems like a really bad interoperability setup.

I'd argue for at-most-once protection here, since that's the only way a
client can make deterministic decisions, and it's also easier to audit
and GREASE. But there doesn't seem to be consensus around that. At the
moment, I feel that's a bit like the lack of consensus the "clean coal"
industry has on global warming, because it seems to be an argument
rooted in operational convenience rather than actual security. This is
not the standard we apply in other cases; we wouldn't listen to those
who say that it's OK to keep RC4 or MD5 in the standard because the
problems are small and the operational performance benefit is worth it.
My spidey-sense is that these attacks will get better and more refined
over time.

Nevertheless, /some/ guaranteed replay protection would be better than
none, particularly in this case. So if it's at-most-N, and N is small
enough to at least avoid many throttling cases, that's something worth
taking, even though it does leave open the easier cache-analysis
attacks.
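To make "at-most-N" concrete, here is a rough sketch, purely for
illustration and not from the draft: the names (replayGuard, allow0RTT),
the choice of keying on a hash of the ClientHello (which covers the PSK
binder), and the in-memory map are all my own assumptions. A real server
or CDN would also have to expire entries after the ticket lifetime /
clock window and share or shard this state across every node that can
decrypt the same tickets.

package main

import (
	"crypto/sha256"
	"fmt"
	"sync"
)

// replayGuard is a toy in-memory at-most-N filter for 0-RTT data.
type replayGuard struct {
	mu    sync.Mutex
	limit int              // N; limit == 1 gives at-most-once semantics
	seen  map[[32]byte]int // accepted 0-RTT uses per unique ClientHello
}

func newReplayGuard(limit int) *replayGuard {
	return &replayGuard{limit: limit, seen: make(map[[32]byte]int)}
}

// allow0RTT reports whether early data for this ClientHello should be
// accepted. Keying on a hash of the whole ClientHello (binder included)
// means a byte-for-byte replay is caught, while a genuine client retry,
// which produces a fresh ClientHello, is not penalised.
func (g *replayGuard) allow0RTT(clientHello []byte) bool {
	key := sha256.Sum256(clientHello)
	g.mu.Lock()
	defer g.mu.Unlock()
	if g.seen[key] >= g.limit {
		return false // reject early data; the connection can still fall back to 1-RTT
	}
	g.seen[key]++
	return true
}

func main() {
	guard := newReplayGuard(1) // at-most-once
	hello := []byte("pretend this is a serialized ClientHello")
	fmt.Println(guard.allow0RTT(hello)) // true: first use is accepted
	fmt.Println(guard.allow0RTT(hello)) // false: replayed copy is rejected
}

With limit set to 1 this is the at-most-once behaviour I'd prefer; with
a small N it at least caps the damage for throttling-style targets,
which is the trade-off described above.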
> - Design a mechanism to allow the server to tell the client that it
>   either (a) enforces strong anti-replay or (b) deletes PSKs after
>   first use. Either of these seem like OK ideas, but they can be added
>   to NST as extensions at some future time, and I haven't seen a lot
>   of evidence that existing clients would consume these.

This can happen totally outside of the protocol too; as in, an operator
can advertise it as a feature. Likely most useful for the forward
secrecy case.

-- 
Colm
_______________________________________________
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls