On Wed, Jan 3, 2018 at 3:45 AM, Hubert Kario <hka...@redhat.com> wrote:
>
> > *Second: hide all alerts in suspicious error cases*
> > Next, when the handshake does fail, we do two non-standard things. The
> > first is that we don't return an alert message, we just close the
> > connection.
> >
> > *Third: mask timing side-channels with a massive delay*
> > The second non-standard thing we do is that in all error cases, s2n
> > behaves as if something suspicious is going on and in case timing is
> > involved, we add a random delay. It's well known that random delays are
> > only partially effective against timing attacks, but we add a very very
> > big one. We wait a random amount of time between a minimum of 10
> > seconds, and a maximum of 30 seconds.
>
> Note that both of those things only _possibly_ frustrate attackers while
> they definitely frustrate researchers trying to characterise your
> implementation as vulnerable or not. In effect making it seem secure
> while in reality it may not be.
>

No, I strongly disagree here. Firstly, frustrating attackers is a good
definition of what the goal of security is. Sometimes increasing costs for
attackers does come at the cost of making things harder to analyze or
debug, but we shouldn't make the latter easier at the expense of the
former.

In practical terms, it's not that big a deal. For the purposes of research
it's usually easy to remove the delays or masking. I did it myself
recently to test various ROBOT detection scripts: turning off the delay was
a one-line code patch and I was up and running; it hardly hindered research
at all.
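For concreteness, the error-path behaviour under discussion can be sketched
roughly as follows. This is a hypothetical illustration in Python, not
s2n's actual code or API (s2n is written in C); the function names and the
use of a raw socket are my own invention here.

```python
import os
import socket
import time

# Bounds taken from the description above: a random wait between
# 10 and 30 seconds before closing a failed connection.
MIN_DELAY, MAX_DELAY = 10.0, 30.0

def random_delay_seconds() -> float:
    # Draw from a cryptographic source (os.urandom) rather than a
    # predictable PRNG, so the delay itself leaks nothing about
    # internal state; scale the draw into [MIN_DELAY, MAX_DELAY].
    r = int.from_bytes(os.urandom(8), "big") / 2**64
    return MIN_DELAY + r * (MAX_DELAY - MIN_DELAY)

def fail_connection(conn: socket.socket) -> None:
    # The two non-standard behaviours described in the quoted text:
    # 1) wait a large random time, so the timing of the error carries
    #    very little usable signal, and
    # 2) close the connection silently, without sending a TLS alert.
    time.sleep(random_delay_seconds())
    conn.close()
```

Disabling the delay for research, as described above, would indeed be a
one-line change: skip or zero out the `time.sleep` call.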

Similarly, we've already had to reduce the granularity of TLS alerts, so
TLS experts and analysts are used to having to dive into code and
step-throughs to debug some kinds of problems. I can't count how many hours
I've spent walking through why a big opaque blob didn't precisely match
another big opaque blob. We could make all of that easier by logging and
alerting all sorts of intermediate states, but it would be a terrible
mistake because it would leak so much information to attackers.

As for delays possibly making an attacker's job harder: I'm working on some
firmer, signal-analysis based grounding for the approach. The impact varies
depending on the amount of noise present, as well as the original
distribution of measurements due to ordinary scheduling and network jitter,
but the approach certainly makes timing attacks take millions to trillions
more attempts, and can push real-world timing leaks well into unexploitable
territory.
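A rough back-of-the-envelope for the claim above: by the usual rule of
thumb, distinguishing two response-time distributions whose means differ by
a leak of size Δ under noise of standard deviation σ takes on the order of
(σ/Δ)² samples per class. A uniform delay on [10, 30] seconds has σ =
20/√12 ≈ 5.77 s, which dwarfs typical network jitter. The numbers below are
illustrative assumptions (a 1 µs leak, 1 ms of jitter), not measurements:

```python
import math

def samples_needed(leak_s: float, noise_std_s: float, z: float = 1.96) -> float:
    # Order-of-magnitude estimate: samples per class needed to resolve a
    # mean difference of `leak_s` seconds under noise with standard
    # deviation `noise_std_s`, at roughly 95% confidence (z = 1.96).
    return (z * noise_std_s / leak_s) ** 2

leak = 1e-6                      # assumed 1 microsecond timing leak
jitter_std = 1e-3                # assumed 1 ms of ordinary network jitter
delay_std = 20 / math.sqrt(12)   # std of a uniform delay on [10, 30] s

without_delay = samples_needed(leak, jitter_std)
with_delay = samples_needed(leak, math.hypot(jitter_std, delay_std))

print(f"samples without delay: {without_delay:.3g}")
print(f"samples with delay:    {with_delay:.3g}")
print(f"multiplier:            {with_delay / without_delay:.3g}")
```

Under these assumptions the delay multiplies the attacker's required sample
count by tens of millions, consistent with the "millions to trillions"
range depending on the leak size and ambient noise.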

-- 
Colm
_______________________________________________
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls