On Tue, Aug 15, 2017 at 1:55 PM, Hubert Kario <hka...@redhat.com> wrote:
> On Tuesday, 15 August 2017 00:55:50 CEST Colm MacCárthaigh wrote:
>> On Mon, Aug 14, 2017 at 8:16 PM, Hubert Kario <hka...@redhat.com> wrote:
>> > the difference in processing that is equal to just a few clock cycles
>> > is detectable over network[1]
>>
>> The post you reference actually says the opposite; "20 CPU cycles is
>> probably too small to exploit"
>
> exactly what we thought about CBC padding at the time TLS 1.1 was
> published...
I'm not going to defend the poor design of TLS 1.1 padding, but it does
remain unexploitable over real-world networks. The Lucky13 attack that you
reference is practical against DTLS, but not TLS. It is worth understanding
that nuance, because the differences can help us continue to make TLS more
robust and hint at where to optimize. The property that has protected TLS,
versus DTLS, is non-replayability, so it's important that we keep it.

>> ... and even today with very low
>> latency networks and I/O schedulers it remains very difficult to
>> measure that kind of timing difference remotely.
>
> simply not true[1], you can measure the times to arbitrary precision with
> any real world network connection, it will just take more tries, not
> infinite tries

Surely the Nyquist limits apply? The fundamental resolution of networks is
finite. Clock cycles are measured in fractions of a billionth of a second,
but even 10Gbit/sec networks use framing (85 bytes minimum) in a way that
gives you a resolution of around 70 billionths of a second (85 bytes * 8
bits / 10^10 bits/sec is about 68ns per minimum-sized frame). Nyquist says
that to measure a signal you need a sampling resolution twice that of the
signal itself ... that's about two orders of magnitude of distance to cover
in this case.

>> But per the post, the
>> larger point is that it is prudent to be cautious.
>
> exactly, unless you can show that the difference is not measurable, under
> all conditions, you have to assume that it is.
>
>> > When you are careful on the application level (which is fairly simple
>> > when you are just sending an acknowledgement message), the timing
>> > will still be leaked.
>>
>> There are application-level and tls-implementation-level approaches
>> that can prevent the network timing leak. The easiest is to only write
>> TLS records during fixed period slots.
>
> sure it is, it also limits available bandwidth and it will always use
> that amount of bandwidth, something which is not always needed

Constant-time schemes work by taking the maximum amount of time in every
case.
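To make the fixed-slot idea concrete, here is a minimal sketch of writing
records only at fixed period boundaries, so that observable send times
depend on the slot clock rather than on how long encryption or application
processing took. This is not from any TLS stack; the function names and the
5 ms slot period are hypothetical choices for illustration:

```python
import time

SLOT_SECONDS = 0.005  # hypothetical fixed slot period (5 ms)

def send_in_slots(write_record, records):
    """Write one record per fixed time slot.

    Work that finishes early is not observable on the wire, because
    the write is always deferred to the next slot boundary.
    """
    next_slot = time.monotonic() + SLOT_SECONDS
    for record in records:
        # Sleep until the slot boundary, regardless of how quickly the
        # record was produced.
        delay = next_slot - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        write_record(record)
        next_slot += SLOT_SECONDS
```

This also illustrates the bandwidth objection above: each slot carries at
most one record, so throughput is capped at one record per slot period no
matter how fast the sender could actually go.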
This fundamentally reduces throughput, because small payloads don't get a
speed benefit.

> we are not concerned if the issue can be worked around, we want to be
> sure that the TLS stack does not undermine application stack work
> towards constant time behaviour

The TLS stack can take a constant amount of time to encrypt/decrypt a
record, regardless of padding length, but it's very difficult to see how it
can pass data to/from the application in constant time, besides the
approach I outlined, which you don't like. Note that these problems get
harder with larger amounts of padding.

Today the lack of padding makes passive traffic analysis attacks very easy.
It's extremely feasible for an attacker to categorize request and content
lengths (e.g. every page on Wikipedia) and figure out what page a user is
browsing. That's a practical attack that definitely works today, and it's
probably the most practical and most serious attack that we know works. The
fix for that attack is padding, and quite large amounts are needed to
defeat traffic analysis. But that will make the timing challenges harder.

In that context, it's important to remember that so far those timing
attacks have not been practical. We don't want to optimize for the wrong
problem.

-- 
Colm
_______________________________________________
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls