On Thu, Jan 4, 2018 at 4:17 AM, Hubert Kario <hka...@redhat.com> wrote:

> > No, I strongly disagree here. Firstly, frustrating attackers is a good
> > definition of what the goal of security is. Sometimes increasing costs
> for
> > attackers does come at the cost of making things harder to analyze or
> debug,
> > but we shouldn't make the latter easier at the expense of the former.
>
> No, the goal of security is to stop attacks from being successful, not
> make them harder. Making attacks harder is security through obscurity.
> Something that definitely doesn't work for open source software.
>

Unless you're shipping one-time pads around, cryptography is founded on
making successful attacks highly improbable, not impossible. There are
measures of the likelihood of key and plaintext recovery for all of the
established algorithms. The delay approach is no different, and its risk can
be expressed mathematically. The numbers are lower, for sure; delays can add
a security factor of maybe up to 2^40, but that's still very effective, and
unlike encryption or hashes, delays do not have to withstand long-term
attacks.
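
As a back-of-envelope illustration (the numbers and the uniform-delay model
here are my assumptions, not a rigorous analysis): the standard error of a
mean shrinks as sigma/sqrt(n), so to resolve a timing difference of delta
under added noise of standard deviation sigma, an attacker needs on the
order of (sigma/delta)^2 samples. A sketch in Go:

    package main

    import (
        "fmt"
        "math"
    )

    func main() {
        // Assumed numbers: a 1 microsecond timing leak hidden under a
        // uniformly random delay of up to 30 seconds.
        signal := 1e-6                 // leaked timing difference, seconds
        width := 30.0                  // width of the uniform random delay, seconds
        sigma := width / math.Sqrt(12) // std dev of a uniform distribution

        // Standard error of the mean is sigma/sqrt(n), so resolving the
        // signal takes on the order of (sigma/signal)^2 samples.
        n := math.Pow(sigma/signal, 2)
        fmt.Printf("samples needed: ~2^%.0f\n", math.Log2(n))
    }

With those assumptions this prints ~2^46; larger leaks or shorter delays
push it lower, which is where ballpark figures like 2^40 come from.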

This bears repeating: attempting to make OpenSSL rigorously constant time
made it *less* secure. The LuckyMinus20 bug was much worse than the Lucky13
bug the code was trying to fix; it would have been better to leave it
unpatched (at least for TLS, maybe not DTLS). A delay in the error case, on
the other hand, would have made either issue unexploitable in the real
world. Evaluating that trade-off takes a lot of "grey area" analysis though;
one has to have a sense of judgement for how much risk a complex code change
is "worth", mindful that complex code changes come with their own risks.

honestly, I consider this approach completely misguided. If you are OK with
> tying up a socket for 30 seconds, simply start a timer once you get the
> original client hello (or the first message of second flight, in TLS 1.2),
> close the socket if the handshake is not successful in 30 seconds. In case
> of errors, send nothing, let it timeout. The only reason why this approach
> to constant time error handling is not used is because most people are not
> ok with tying up resources for so long.
>

This is real code we use in production. Thankfully errors are very
uncommon, and connections also cost very little, in part due to work done
to resist DDoS and trickle attacks, a different kind of security problem.
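
For concreteness, here is a minimal sketch of the shape of that approach
(hypothetical names and intervals, not our production code): on any
handshake failure, sleep for a cryptographically random interval before
surfacing the error, so the observed failure time is dominated by the
random delay rather than by where in the handshake the error occurred.

    package main

    import (
        "crypto/rand"
        "encoding/binary"
        "fmt"
        "time"
    )

    // blindingDelay picks a cryptographically random duration in
    // [min, max). Hypothetical helper; the modulo bias is negligible
    // at this scale.
    func blindingDelay(min, max time.Duration) time.Duration {
        var buf [8]byte
        if _, err := rand.Read(buf[:]); err != nil {
            return max // fail closed: take the longest delay
        }
        span := uint64(max - min)
        return min + time.Duration(binary.BigEndian.Uint64(buf[:])%span)
    }

    // failHandshake surfaces a handshake error only after a random
    // 10-30s delay, so the failure time says little about which check
    // inside the handshake actually failed.
    func failHandshake(err error) error {
        time.Sleep(blindingDelay(10*time.Second, 30*time.Second))
        return err
    }

    func main() {
        start := time.Now()
        _ = failHandshake(fmt.Errorf("bad record MAC"))
        fmt.Printf("error surfaced after %v\n", time.Since(start))
    }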

Delaying to a fixed interval is a great approach, and it emulates how
clocking protects hardware implementations, but I haven't yet succeeded in
making it reliable. It's easy to start a timer when the connection is
accepted and to trigger the error 30 seconds after that, but it's hard to
rule out that a leaky timing side-channel influences the subsequent timing
of the interrupt or scheduler systems, and hence exactly when the trigger
fires. If it does, a relatively clear signal shows up again, just offset by
30 seconds, which is no use.
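
For concreteness, the fixed-deadline variant I mean looks roughly like this
(again a sketch with hypothetical names): record the accept time, and on
error sleep out the remainder of a fixed interval. The residual problem is
in the final sleep itself, as the comment notes.

    package main

    import (
        "fmt"
        "time"
    )

    // handleConn sketches the fixed-deadline idea: any error is
    // surfaced exactly 30 seconds after the connection was accepted.
    func handleConn() error {
        deadline := time.Now().Add(30 * time.Second)

        err := doHandshake() // stand-in for the real handshake
        if err != nil {
            // Sleep out the remainder of the interval. The catch: this
            // wake-up comes from the same interrupt/scheduler machinery
            // that the secret-dependent work may have perturbed, so the
            // signal can reappear, offset by 30 seconds.
            time.Sleep(time.Until(deadline))
        }
        return err
    }

    func doHandshake() error {
        return fmt.Errorf("handshake_failure")
    }

    func main() {
        fmt.Println(handleConn())
    }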

-- 
Colm