Nick Coghlan <ncogh...@gmail.com> added the comment:

Maciej, please read http://mjg59.dreamwidth.org/13061.html

"Secure" vs "not secure" is not a binary state - it's about making attacks 
progressively more difficult. Something that is secure against a casual script 
kiddie scatter gunning attacks on various sites with an automated script won't 
stand up to a systematic attack from a motivated attacker (also see the 
reporting on Flame and Stuxnet for what a *really* motivated and well resourced 
attacker can achieve).

The hash randomisation changes didn't make Python completely secure against 
hashing DoS attacks - they just made such attacks harder, by requiring 
attackers to figure out the hashing seed for the currently running process 
first. It's protection against scatter gun attacks, not targeted ones.
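
As a quick illustration (my own example, not something from this issue): with 
randomisation enabled, the same string hashes differently in separate 
interpreter processes, so an attacker can no longer precompute a set of 
colliding keys offline:

    import os, subprocess, sys

    # Hash the same string in two fresh interpreters. With hash
    # randomisation enabled, each process picks its own seed, so the
    # values differ from run to run.
    env = dict(os.environ, PYTHONHASHSEED="random")
    cmd = [sys.executable, "-c", "print(hash('collide-me'))"]
    print(subprocess.check_output(cmd, env=env))
    print(subprocess.check_output(cmd, env=env))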

Performing a timing attack on Python's default short-circuiting comparison 
operation is *relatively easy* because the timing variations can be so large: 
send strings of increasing length until the time stops increasing - now you 
know the target digest length. Then try various initial characters until the 
time starts increasing again - now you know the first character. Repeat for 
each subsequent position until you've worked through the whole string. Now 
you have the target hash, which you can try to crack offline at your leisure.
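
To make the shape of that attack concrete, here's a rough sketch (my own 
illustration, not anything from the patch). It uses a deliberately naive 
pure-Python comparison so the skew is big enough to see directly; the C-level 
comparison leaks the same way, just with far smaller differences that need 
statistical averaging to pull out of the noise:

    import timeit

    SECRET = b"0123456789abcdef"   # stand-in for the target digest

    def shortcircuit_equal(a, b):
        # Naive comparison that bails out at the first mismatch.
        if len(a) != len(b):
            return False
        for x, y in zip(a, b):
            if x != y:
                return False
        return True

    def measure(guess, reps=20000):
        return timeit.timeit(lambda: shortcircuit_equal(SECRET, guess),
                             number=reps)

    # A guess sharing a longer prefix with the secret takes longer to
    # reject - that per-position difference is the leak described above.
    print(measure(b"x" * 16))                 # mismatch at byte 0
    print(measure(b"01234567" + b"x" * 8))    # mismatch at byte 8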

The new comparison function is designed to significantly *reduce* the variance, 
thus leaking *less* information about the target hash, and making the attack 
*harder* (albeit, probably still not impossible).

I agree with Christian's last two suggestions: change the name to 
total_compare, and only allow use on byte sequences (where the integer values 
are certain to be cached).
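
For reference, the general shape of such a function looks something like the 
following - a minimal sketch of the non-short-circuiting technique with the 
bytes-only restriction applied, not the actual patch under review:

    def total_compare(a, b):
        # Sketch only: never exit early, so the time taken depends on
        # the input lengths (which for digests are public anyway), not
        # on where the first mismatching byte happens to be.
        if not (isinstance(a, bytes) and isinstance(b, bytes)):
            raise TypeError("expected byte sequences")
        if len(a) != len(b):
            return False
        result = 0
        for x, y in zip(a, b):    # bytes iterate as ints in Python 3
            result |= x ^ y       # accumulate differences, no early exit
        return result == 0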

Nothing should ever be called "safe" or "secure" in the standard library, 
because the immediate question from anyone who knows what they're talking 
about is "Secure/safe against what level of threat and what kind of threat?" 
People who *don't* know what they're doing will think "Secure/safe against 
everything", and that's dangerously misleading.

Improving protection against remote timing attacks (e.g. by reducing comparison 
timing variance to the point where it is small relative to network timing 
variance) is a *lot* easier than protecting against local timing attacks. 
Protecting against someone with physical access to the machine hosting the 
target hash is harder still.

Regardless, the target needs to be *improving the status quo*.

Being able to tell people "using hmac.total_compare will make you less 
vulnerable to timing attacks than using ordinary short-circuiting comparisons" 
is a *good thing*. We just need to be careful not to oversell it as making you 
*immune* to timing attacks.

----------
nosy: +ncoghlan

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue15061>
_______________________________________