On Thursday, 4 January 2018 20:01:03 CET Colm MacCárthaigh wrote:
> On Thu, Jan 4, 2018 at 4:17 AM, Hubert Kario <hka...@redhat.com> wrote:
> > > No, I strongly disagree here. Firstly, frustrating attackers is a good
> > > definition of what the goal of security is. Sometimes increasing costs
> > > for attackers does come at the cost of making things harder to analyze
> > > or debug, but we shouldn't make the latter easier at the expense of the
> > > former.
> > 
> > No, the goal of security is to stop attacks from being successful, not to
> > make them harder. Making attacks harder is security through obscurity,
> > something that definitely doesn't work for open source software.
> 
> Unless you're shipping one-time pads around, cryptography is founded on
> making successful attacks highly improbable, but not impossible. There are
> measures of the likelihood of key and plaintext recovery for all of the
> established algorithms. The delay approach is no different, and its risk
> can be expressed in mathematical ways. The numbers are lower, for sure;
> delays can add a security factor of maybe up to 2^40, but that's still
> very effective and, unlike encryption or hashes, delays do not have to
> withstand long-term attacks.

except that what we call "sufficiently hard plaintext recovery" carries more
than triple the security exponent of the workaround you're proposing here:

2^40 is doable on a smartphone today
2^120 is not doable on a supercomputer, and won't be for a very long time
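
to put rough numbers on that (back-of-envelope only; the ~10^9 guesses/s for
a phone core and ~10^18 guesses/s for a top supercomputer are illustrative
assumptions, not measurements):

    2^40  ≈ 1.1 * 10^12 guesses -> ~1100 s, about 18 minutes at 10^9/s
    2^120 ≈ 1.3 * 10^36 guesses -> ~1.3 * 10^18 s, about 4 * 10^10 years
                                   at 10^18/s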
 
> This bears repeating: attempting to make OpenSSL rigorously constant time
> made it *less* secure.

yes, on one specific hardware type, because of a bug in the implementation

I really hope you're not suggesting "we shouldn't ever build bridges because 
this one collapsed"...

also, for how long was it *less* secure? and for how long was it vulnerable to 
Lucky13?

> The LuckyMinus20 bug was much worse than the Lucky13
> bug the code was trying to fix. It would have been better to leave it
> unpatched (at least for TLS, maybe not DTLS). A delay in the error case, on
> the other hand, would have made either issue unexploitable in the real
> world.

I'm sorry, you're saying that you're able to prove a negative?
Or to put it another way: "Anyone can invent a security system that he
himself cannot break."
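
(for context, the primitive a Lucky13-style fix is built on is a branch-free
comparison along these lines; this is a minimal illustrative sketch in C, not
OpenSSL's actual code, and the hard part in TLS CBC, where LuckyMinus20
crept in, is the separate problem of making the amount of data hashed
independent of the padding length:)

    #include <stddef.h>
    #include <stdint.h>

    /* Constant-time equality check: touches every byte no matter where
     * the first mismatch is, so its timing does not depend on the secret
     * contents.  Returns 1 if equal, 0 otherwise. */
    static int ct_memeq(const uint8_t *a, const uint8_t *b, size_t len)
    {
        uint8_t diff = 0;
        for (size_t i = 0; i < len; i++)
            diff |= a[i] ^ b[i];
        return diff == 0;
    }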

> Evaluating that trade-off takes a lot of "grey area" analysis,
> though; one has to have a sense of judgement for how much risk a complex
> code change is "worth", being mindful that complex code changes come with
> their own risks.

"we shouldn't fix it because it's not completely broken" is how we get 
million+ strong botnets of systems that never got updated

and any issue with known exploit needs to be fixed
 
> > honestly, I consider this approach completely misguided. If you are OK
> > with tying up a socket for 30 seconds, simply start a timer once you get
> > the original client hello (or the first message of the second flight, in
> > TLS 1.2) and close the socket if the handshake is not successful within
> > 30 seconds. In case of errors, send nothing; let it time out. The only
> > reason this approach to constant-time error handling is not used is that
> > most people are not OK with tying up resources for so long.
> 
> This is real code we use in production; thankfully errors are very
> uncommon, but connections also cost very little, in part due to work done
> for DDoS and trickle attacks, a different kind of security problem.
> 
> Delaying to a fixed interval is a great approach, and emulates how clocking
> protects hardware implementations, but I haven't yet been able to succeed
> in making it reliable. It's easy to start a timer when the connection is
> accepted and to trigger the error 30 seconds after that, but it's hard to
> rule out that a leaky timing side-channel may influence the subsequent
> timing of the interrupt or scheduler systems and hence exactly when the
> trigger happens. If it does influence it, then a relatively clear signal
> shows up again, just offset by 30 seconds, which is no use.

*if*

in other words, the choice is between this solution, which _may_ leak
information (something you can actually test for), and the other solution,
which _does_ leak information, just slowly enough to be called "acceptable
risk"
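
to make the fixed-deadline idea concrete, a minimal sketch (illustrative
only: handle_handshake and the socket plumbing are stand-ins, and a real
server would use an event loop rather than a blocking sleep):

    #include <stdbool.h>
    #include <time.h>
    #include <unistd.h>

    #define HANDSHAKE_DEADLINE_SEC 30

    /* stand-in: returns false on any handshake error */
    bool handle_handshake(int fd);

    /* The deadline is taken from the moment the connection is handed to
     * us, before any secret-dependent work runs, so the close time does
     * not vary with where in the handshake the error occurred.  The
     * residual risk is the one described above: the error path may still
     * perturb the scheduler and shift the wakeup slightly. */
    void serve_connection(int fd)
    {
        struct timespec deadline;
        clock_gettime(CLOCK_MONOTONIC, &deadline);
        deadline.tv_sec += HANDSHAKE_DEADLINE_SEC;

        if (!handle_handshake(fd)) {
            /* send nothing; sleep to the absolute deadline, then close */
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &deadline, NULL);
            close(fd);
            return;
        }
        /* ... handshake succeeded, proceed with application data ... */
    }

whether the wakeup itself stays clean is exactly the thing that has to be
measured, which is the point above: this variant is at least testable.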
-- 
Regards,
Hubert Kario
Senior Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 115, 612 00  Brno, Czech Republic
