On Wed, Nov 23, 2016 at 8:40 PM, Martin Thomson <martin.thom...@gmail.com> wrote:
> On 24 November 2016 at 15:11, Colm MacCárthaigh <c...@allcosts.net> wrote:
> > Do you disagree that the three specific example security issues
> > provided are realistic, representative and impactful? If so, what
> > would persuade you to change your mind?
>
> These are simply variants on "if someone hits you with a stick, they
> might hurt you", all flow fairly logically from the premise, namely
> that replay is possible (i.e., someone can hit you with a stick).

Prior to TLS 1.3, replay was not possible, so the risks are new; but end-to-end designers may not realize that they need to update their threat model, or just what that requires. I'd like to spell that out more than what's there at present.

> The third is interesting, but it's also the most far-fetched of the
> lot (a server might read some bytes, which it later won't read,
> exposing a timing attack).

I need to work on the wording, because the nature of the attack mustn't have come across clearly. It's really simple: if the 0-RTT data primes the cache, then a subsequent request will be fast; if not, it will be slow. Against a CDN, for example, the effort required would be nearly trivial for an attacker. Basically: I replay the 0-RTT data, then probe a bunch of candidate resources with regular requests. If one of them loads in 20ms and the others load in 100ms, well, now I know which resource the 0-RTT data was for. I can perform this attack against CDN nodes that are quiet, or remote, and very unlikely to have the resources cached to begin with.

> But that's also corollary material; albeit less obvious. Like I said,
> I've no objection to expanding a little bit on what is possible: all
> non-idempotent activity, which might be logging, load, and some things
> that are potentially observable on subsequent requests, like IO/CPU
> cache state that might be affected by a request.
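To make the probe concrete, here's a minimal sketch of the timing side of it (all names and thresholds are my own invention, not from any real tooling): after replaying the captured 0-RTT data at a quiet node, time an ordinary fetch of each candidate resource and flag the ones that answer fast enough to be cache hits.

```python
import time

def probe(fetch, candidates, threshold_ms=50):
    """Return the candidates whose fetch time suggests a cache hit.

    `fetch` is whatever issues a regular (non-0-RTT) request; run this
    after replaying the captured 0-RTT data against the target node.
    """
    hits = []
    for url in candidates:
        start = time.perf_counter()
        fetch(url)
        elapsed_ms = (time.perf_counter() - start) * 1000
        if elapsed_ms < threshold_ms:  # fast => likely primed by the replay
            hits.append(url)
    return hits
```

The point is how little machinery the attacker needs: one replay, then a loop of innocuous requests with a stopwatch.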
> ok, cool :)
>
> >> I'm of the belief that end-to-end
> >> replay is a property we should be building in to protocols, not just
> >> something a transport layer does for you. On the web, that's what
> >> happens, and it contributes greatly to overall reliability.
> >
> > The proposal here I think promotes that view; if anything, it nudges
> > protocols/applications to actually support end-to-end replay.
>
> You are encouraging the TLS stack to do this, if not the immediate
> thing that drives it (in our case, that would be the HTTP stack). If
> the point is to make a statement about the importance of the
> end-to-end principle with respect to application reliability, the TLS
> spec isn't where I'd go for that.

I'm not sure where the folks designing HTTP and other protocols would get the information from, if not the TLS spec. It is TLS that's changing, too. There's hardly any harm in duplicating the advice anyway.

> > I think there is a far worse externalization if we don't do this.
> > Consider the operations who choose not (or don't know) to add good
> > replay protection. They will iterate more quickly and more cheaply
> > than the diligent providers who are careful to add the effective
> > measures, which are expensive and difficult to get right.
>
> OK let's ask a different question: who is going to do this?

I am, for one. I don't see 0-RTT as a "cheap" feature; it's a very, very expensive one. Mitigating the kinds of issues it brings really requires atomic transactions: either the application needs them, or something below it does. So far I see fundamentally no "out" of that, and once we need atomic transactions, we need some kind of multi-phase commit protocol, distributed consensus, routing of data to master nodes, or some combination thereof. The smartest people I've ever met work on these kinds of systems, and they all say it's really, really hard and subtle.
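To illustrate what "atomic" means here, a sketch of the easy single-node case (names are illustrative, not from any real library): a strike register that records each 0-RTT nonce/ticket the first time it is seen, under a lock. The expensive part I'm describing above is that a real deployment needs this check to be atomic across every node that can accept the ticket, which is where the consensus and multi-phase-commit machinery comes in.

```python
import threading

class StrikeRegister:
    """Single-node sketch of first-use detection for 0-RTT tickets."""

    def __init__(self):
        self._seen = set()
        self._lock = threading.Lock()

    def first_use(self, nonce: bytes) -> bool:
        """Atomically record nonce; True only the first time it is seen."""
        with self._lock:
            if nonce in self._seen:
                return False  # replay: reject or fall back to 1-RTT
            self._seen.add(nonce)
            return True
```

On one machine this is a few lines; making `first_use` give the same answer across a fleet of geographically distributed front-ends is the genuinely hard problem.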
So when I imagine 0-RTT being done correctly, I imagine organizations signing up for all of that, and it being worth it, because latency matters that much. That's a totally valid decision. And in that context, the additional expense of intentionally replaying 0-RTT seems minor and modest. My own tentative plan is to do it at the implementation level: to have the TLS library occasionally spoof 0-RTT data sections towards the application. This is the same technique used for validating non-replayable request-level auth.

> I don't see browsers doing anything like what you request; nor do I
> see tools/libs like curl or wget doing it either. If I'm wrong and
> they do, they believe in predictability so won't add line noise
> without an application asking for it.

I hope this isn't the case, but if it is, and browsers generally agree that it would be unimplementable and impractical, then I think 0-RTT should be removed unless we can come up with other effective mitigations. Otherwise it's predictable that we'll see deployments that don't bother to solve the hard problems and are vulnerable to the issues I've described. But let's not be doom and gloom about it; there's got to be a way to mitigate.

-- 
Colm
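The intentional-replay idea above could look something like this (a hedged sketch; the function and parameter names are invented, not any real TLS library's API): a shim in the early-data delivery path that, some small fraction of the time, hands the application a second copy of the same 0-RTT data, so a handler that isn't replay-safe fails loudly in testing rather than silently in production.

```python
import random

def deliver_early_data(data: bytes, handler, replay_prob=0.01,
                       rng=random.random):
    """Deliver 0-RTT early data, occasionally duplicating it on purpose.

    `handler` is the application callback; `replay_prob` is the chance
    of a deliberate duplicate delivery; `rng` is injectable for tests.
    """
    handler(data)
    if rng() < replay_prob:
        handler(data)  # deliberate replay: the app must tolerate this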
_______________________________________________
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls