We allow for new thumbprint algorithms to be defined and used with this spec.
I think that we all agree that is a good thing.

The question is whether we should define them here or as part of JWT/CWT, based
on broader demand.
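
For concreteness, the agility in question just means defining a new
confirmation member alongside the existing one. A minimal sketch (the
"x5t#S512" name below is hypothetical and not registered anywhere):

    # "x5t#S256" is the confirmation member the draft defines; "x5t#S512"
    # is a made-up illustration of what an alternative would look like.
    cnf_today    = {"x5t#S256": "..."}  # base64url(SHA-256(DER cert))
    cnf_possible = {"x5t#S512": "..."}  # base64url(SHA-512(DER cert))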

Including them in this document may, in my opinion, be a distraction. There is
no attack against SHA256 with a short-duration token/key (days) that is better
solved by using a long-duration token/key (years) with a longer hash.

That said, it wouldn't kill me. I just think it will distract people in the
wrong direction.

John B.

> On Apr 30, 2018, at 7:23 PM, Neil Madden <neil.mad...@forgerock.com> wrote:
> 
> Responses inline again. 
> 
> On Mon, 30 Apr 2018 at 19:44, John Bradley <ve7...@ve7jtb.com> wrote:
> Inline.
> 
> 
>> On Apr 30, 2018, at 12:57 PM, Neil Madden <neil.mad...@forgerock.com> wrote:
>> 
>> Hi John,
>> 
>>> On 30 Apr 2018, at 15:07, John Bradley <ve7...@ve7jtb.com> wrote:
>>> 
>>> I lean towards letting new certificate thumbprints be defined someplace 
>>> else.
>>> 
>>> With SHA256, it is really second preimage resistance that we care about for 
>>> a certificate thumbprint, rather than simple collision resistance.  
>> 
>> That’s not true if you consider a malicious client. If I can find any pair 
>> of certificates c1 and c2 such that SHA256(c1) == SHA256(c2) then I can 
>> present c1 to the AS when I request an access token and later present c2 to 
>> the protected resource when I use it. I don’t know if there is an actual 
>> practical attack based on this, but a successful attack would violate the 
>> security goal implied by the draft: that requests made to the protected 
>> resource "MUST be made […] using the same certificate that was used for 
>> mutual TLS at the token endpoint.”
>> 
>> NB: this is obviously easier if the client gets to choose its own client_id, 
>> as it can find the colliding certificates and then sign up with whatever 
>> subject ended up in c1.
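>> 
>> To make that concrete, here is a minimal sketch of the check the protected
>> resource performs (the helper names are mine; only the "x5t#S256"
>> base64url-encoded SHA-256 thumbprint encoding comes from the draft):
>> 
>>     import base64
>>     import hashlib
>> 
>>     def x5t_s256(cert_der: bytes) -> str:
>>         # base64url-encoded SHA-256 hash of the DER-encoded certificate,
>>         # i.e. the "x5t#S256" confirmation value.
>>         digest = hashlib.sha256(cert_der).digest()
>>         return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
>> 
>>     def confirm(token_cnf_thumbprint: str, presented_cert_der: bytes) -> bool:
>>         # The protected resource's check: does the presented TLS client
>>         # certificate match the thumbprint bound into the access token?
>>         return x5t_s256(presented_cert_der) == token_cnf_thumbprint
>> 
>>     # If SHA256(c1) == SHA256(c2), confirm() accepts c2 even though the
>>     # token was issued against c1 -- a collision, not a second preimage,
>>     # is enough to defeat the binding.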
>> 
> 
> Both C1 and C2 need to be valid certificates, so not just any collision will 
> do.  
> 
> That doesn’t help much. There’s still enough you can vary in a certificate to 
> generate collisions. 
> 
> If the client produces C1 and C2 and has the private keys for them, I have a 
> hard time seeing what advantage it could get by having colliding certificate 
> hashes.
> 
> Me too. But if the security goal is proof of possession, then this attack 
> (assuming practical collisions) would break that goal. 
> 
> 
> If the AS is trusting a CA, then an attacker producing a certificate that 
> matches the hash of another certificate, so that the fake certificate seems 
> to have been issued by the CA, is an attack that worked on MD5 given some 
> predictability. That is why we now have entropy requirements for certificate 
> serial numbers, which reduce known-prefix attacks.
> 
> The draft allows for self-signed certificates. 
> 
> Second-preimage resistance means it is computationally infeasible to find a 
> second preimage that has the same output as the first preimage. The 
> second-preimage strength of SHA256 is 201-256 bits, while its collision 
> resistance strength is 128 bits. See Appendix A of 
> https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-107r1.pdf
> if you want to understand the relationship between message length and second 
> preimage resistance.
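> 
> As a quick sanity check on those numbers (assuming the Appendix A estimate
> that an L-bit hash gives roughly L - k bits of second-preimage strength
> against a message of 2^k blocks):
> 
>     # Assumed formula from SP 800-107 Appendix A: strength ~ L - k bits
>     # for a message of 2^k blocks.
>     import math
> 
>     L = 256                             # SHA-256 output size, bits
>     block_bits = 512                    # SHA-256 block size, bits
>     max_msg_bits = 2 ** 64              # SHA-256's maximum input length
>     k = int(math.log2(max_msg_bits // block_bits))  # k = 55
>     print(L - k)                        # 201 -> low end of the 201-256 range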
> 
> RFC 4270 is old but still has some relevant info. 
> https://tools.ietf.org/html/rfc4270
> 
> Think of the confirmation method as the out of band integrity check for the 
> certificate that is presented in the TLS session.
> 
> This is all largely irrelevant. 
> 
>>> MD5 failed quite badly with chosen-prefix collision attacks against 
>>> certificates (thanks to some X.509 extensions).
>>> SHA1 has also fallen to a practical collision attack, demonstrated with 
>>> colliding PDFs (http://shattered.io)
>>> 
>>> The reason NIST pushed for development of SHA3 was concern that a preimage 
>>> attack might eventually be found against the SHA2 family of hash algorithms. 
>>> 
>>> While SHA512 may have double the number of bytes, that may not help much 
>>> against a SHA2 preimage attack. (Some papers suggest that the double word 
>>> size of SHA512 may make it more vulnerable than SHA256 to some attacks.)
>> 
>> This is really something where the input of a cryptographer would be 
>> welcome. As far as I am aware, the collision resistance of SHA-256 is still 
>> considered at around the 128-bit level, while it is considered at around the 
>> 256-bit level for SHA-512. Absent a total break of SHA2, it is likely that 
>> SHA-512 will remain at a higher security level than SHA-256 even if both are 
>> weakened by cryptanalytic advances. They are based on the same algorithm, 
>> with different parameters and word/block sizes.
>> 
> 
> SHA512 uses double words and more rounds, true. It also has more rounds 
> broken by known attacks than SHA256 (https://en.wikipedia.org/wiki/SHA-2). 
> So it is slightly more complex than "doubling the output size doubles the 
> strength."
> 
> SHA-512 also has more rounds (80) than SHA-256 (64), so still has more rounds 
> left to go...
> 
> 
>>> 
>>> It is currently believed that SHA256 has 256 bits of second preimage 
>>> strength. That could always turn out to be wrong, as SHA2 has some 
>>> similarities to SHA1, and yes, post-quantum that is reduced to 128 bits. 
>>> 
>>> To have a safe future option we would probably want to go with SHA3-512. 
>>> However, I don’t see that getting much traction in the near term.  
>> 
>> SHA3 is also slower than SHA2 in software.
> 
> Yes, roughly half the speed in software, but generally faster in hardware.  
> 
> I am not necessarily arguing for SHA3; rather, I think this issue is larger 
> than this spec, and selecting alternate hashing algorithms for security 
> should be handled separately from it.
> 
> I am for agility, but I don’t want to accidentally have people doing 
> something that is just theatre.
> 
> Rotating certificates, and limiting the lifetime of a certificate's 
> validity, is as useful as doubling the hash size. 
> 
> Why not allow both? 
> 
> 
> I don’t think the confirmation hash length is the weakest link.
> 
> Shouldn’t we allow all the parts to be as strong as possible?
> 
> 
> John B.
> 
>> 
>>> 
>>> Practical things people should do run more along the lines of:
>>> 1: Put at least 64 bits of entropy into the certificate serial number if 
>>> using self-signed or a local CA; commercial CAs need to do that now. (A 
>>> sketch of this follows the list.)
>>> 2: Rotate certificates on a regular basis, using a registered JWKS URI.
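>>> 
>>> A minimal sketch of point 1, assuming the pyca/cryptography package (the
>>> subject name and 30-day validity below are illustrative only):
>>> 
>>>     # Self-signed, short-lived client certificate with a high-entropy
>>>     # serial number; rotate by re-running this and republishing the JWKS.
>>>     import datetime
>>>     from cryptography import x509
>>>     from cryptography.x509.oid import NameOID
>>>     from cryptography.hazmat.primitives import hashes
>>>     from cryptography.hazmat.primitives.asymmetric import ec
>>> 
>>>     key = ec.generate_private_key(ec.SECP256R1())
>>>     name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME,
>>>                                          u"client.example")])
>>>     cert = (
>>>         x509.CertificateBuilder()
>>>         .subject_name(name)
>>>         .issuer_name(name)                            # self-signed
>>>         .public_key(key.public_key())
>>>         .serial_number(x509.random_serial_number())   # ~159 random bits
>>>         .not_valid_before(datetime.datetime.utcnow())
>>>         .not_valid_after(datetime.datetime.utcnow()
>>>                          + datetime.timedelta(days=30))
>>>         .sign(key, hashes.SHA256())
>>>     )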
>>> 
>>> My concern is that people will see a bigger number and decide it is better 
>>> if we define it in the spec.  
>>> We may be getting people to do additional work and increasing token size 
>>> without a good reason by putting it in the spec directly.
>> 
>> I’m not sure why this is a concern. As previously pointed out, SHA-512 is 
>> often *faster* than SHA-256, and an extra 32 bytes doesn’t seem worth 
>> worrying about.
>> 
>> [snip]
>> 
>> — Neil
> — Neil
