Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-04-30 Thread John Bradley
I lean towards letting new certificate thumbprints be defined someplace
else.

With SHA256, it is really second preimage resistance that we care about for
a certificate thumbprint, rather than simple collision resistance.
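
For concreteness: the confirmation value at issue is the x5t#S256 member of
the cnf claim, i.e. the base64url-encoded SHA-256 hash of the client
certificate's DER encoding. A minimal Python sketch (variable names are
illustrative):

    import base64
    import hashlib

    def x5t_s256(der_cert: bytes) -> str:
        # base64url, without padding, of the SHA-256 digest of the DER cert
        digest = hashlib.sha256(der_cert).digest()
        return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

    # At the token endpoint the AS binds the issued token to the cert:
    # cnf = {"x5t#S256": x5t_s256(client_cert_der)}

A second-preimage attacker starts from a fixed, already-bound certificate and
must find a different input with the same digest; a collision attacker gets to
choose both inputs.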

MD5 failed quite badly with chosen-prefix collision attacks against
certificates (thanks to some X.509 extensions).
SHA1 has also fallen to a practical collision attack, demonstrated with
colliding PDF files (http://shattered.io).

The reason NIST pushed for development of SHA3 was concern that a preimage
attack might eventually be found against the SHA2 family of hash algorithms.

While SHA512 may have double the number of bytes, it may not help much
against a SHA2 preimage attack. (Some papers suggest that, because of its
double word size, SHA512 may be more vulnerable than SHA256 to some attacks.)

It is currently believed that SHA256 has 256 bits of second-preimage
strength.   That could always turn out to be wrong, as SHA2 has some
similarities to SHA1, and yes, post-quantum that is reduced to 128 bits.

To have a safe future option we would probably want to go with SHA3-512.
However I don’t see that getting much traction in the near term.

Practical things people should do run more along the lines of:
1: Put at least 64 bits of entropy into the certificate serial number if
using self-signed or a local CA.  Commercial CAs need to do that now. (See
the sketch just after this list.)
2: Rotate certificates on a regular basis, using a registered JWKS URI.
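
As a rough illustration of point 1, a sketch using recent versions of the
Python "cryptography" package, whose x509.random_serial_number() helper
already supplies far more than 64 bits of entropy:

    import datetime
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.x509.oid import NameOID

    key = ec.generate_private_key(ec.SECP256R1())
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "client.example.com")])
    now = datetime.datetime.now(datetime.timezone.utc)
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # self-signed
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())  # ~159 random bits, well past 64
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=30))  # short-lived, per point 2
        .sign(key, hashes.SHA256())
    )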

My concern is that people will see a bigger number and decide it is better
if we define it in the spec.
We may be getting people to do additional work and increasing token size
without a good reason by putting it in the spec directly.

I have yet to see any real discussion on using bigger hashes for signing
certificates, or creating thumbprints in other places.

John B.




On Thu, Apr 19, 2018 at 1:23 PM, Brian Campbell wrote:

> Okay, so I retract the idea of metadata indicating the hash alg/cnf
> method (based on John pointing out that it doesn't really make sense).
>
> That still leaves the question of whether or not to define additional
> confirmation methods in this document (and if so, what they would be,
> though x5t#S384 and x5t#S512 seem the most likely).
>
> There's some reasonable rationale both for adding one or two new hash alg
> confirmation methods in the doc now and for sticking with just SHA256 for
> now. I'll note again that the doc doesn't preclude using or later defining
> other confirmation methods.
>
> I'm kind of on the fence about it, to be honest. But that doesn't really
> matter because the draft should reflect rough WG consensus. So I'm looking
> to get a rough gauge of rough consensus. At this point there's one
> comment out of WGLC asking for additional confirmation method(s). I don't
> think that makes for consensus. But I'd ask others from the WG to chime
> in, if appropriate, to help me better gauge consensus.
>
> On Fri, Apr 13, 2018 at 4:49 AM, Neil Madden wrote:
>
>> I’m not particularly wedded to SHA-512, just that it should be possible
>> to use something else. At the moment, the draft seems pretty wedded to
>> SHA-256. SHA-512 may be overkill, but it is fast (faster than SHA-256 on
>> many 64-bit machines) and provides a very wide security margin against any
>> future crypto advances (quantum or otherwise). I’d also be happy with
>> SHA-384, SHA3-512, Blake2 etc but SHA-512 seems pretty widely available.
>>
>> I don’t think short-lived access tokens is a help if the likelihood is
>> that certs will be reused for many access tokens.
>>
>> Public Web PKI certs tend to only use SHA-256 as it has broad support,
>> and I gather there were some compatibility issues with SHA-512 certs in
>> TLS. There are a handful of SHA-384 certs - e.g., the Comodo CA certs for
>> https://home.cern/ are signed with SHA-384 (although with RSA-2048,
>> which NSA estimates at only ~112-bit security). SHA-512 is used on some
>> internal networks where there is more control over components being used,
>> which is also where people are most likely to care about security beyond
>> the 128-bit level (e.g., government internal networks).
>>
>> By the way, I just mentioned quantum attacks as an example of something
>> that might weaken the hash in future. Obviously, quantum attacks completely
>> destroy RSA, ECDSA etc, so SHA-512 would not solve this on its own, but it
>> provides a considerable margin to hedge against future quantum *or
>> classical* advances while allowing the paranoid to pick a stronger security
>> level now. We have customers that ask for 256-bit AES already.
>>
>> (I also misremembered the quantum attack - “Serious Cryptography” by
>> Aumasson tells me the best known quantum attack against collision
>> resistance is O(2^(n/3)) - so ~2^85 for SHA-256, but it also needs O(2^85)
>> space, so is impractical. I don’t know if that is the last word though.)
>>
>> As for SHA-1, doesn’t that prove the point? SHA-1 is pretty broken now
>> with practical collisions having been demonstrated. [snip]

Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-04-30 Thread Mike Jones
I agree that this specification should not define new certificate thumbprint 
methods.  They can always be registered by other specifications if needed in 
the future.

   -- Mike

From: OAuth  On Behalf Of John Bradley
Sent: Monday, April 30, 2018 7:07 AM
To: Brian Campbell 
Cc: oauth 
Subject: Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

[snip]

Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-04-30 Thread Neil Madden
Hi John,

> On 30 Apr 2018, at 15:07, John Bradley  wrote:
> 
> I lean towards letting new certificate thumbprints be defined someplace else.
> 
> With SHA256, it is really second preimage resistance that we care about for a 
> certificate thumbprint, rather than simple collision resistance.  

That’s not true if you consider a malicious client. If I can find any pair of 
certificates c1 and c2 such that SHA256(c1) == SHA256(c2) then I can present c1 
to the AS when I request an access token and later present c2 to the protected 
resource when I use it. I don’t know if there is an actual practical attack 
based on this, but a successful attack would violate the security goal implied 
by the draft: that that requests made to the protected resource "MUST be made 
[…] using the same certificate that was used for mutual TLS at the token 
endpoint.”

NB: this is obviously easier if the client gets to choose its own client_id, as 
it can find the colliding certificates and then sign up with whatever subject 
ended up in c1.
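
To make that concrete, the protected resource's confirmation check is just an
equality test on the thumbprint, so any colliding pair passes it. A sketch of
the check (names are illustrative, not from the draft):

    import base64
    import hashlib

    def x5t_s256(der_cert: bytes) -> str:
        return base64.urlsafe_b64encode(
            hashlib.sha256(der_cert).digest()).rstrip(b"=").decode("ascii")

    def resource_accepts(presented_der: bytes, cnf_claim: dict) -> bool:
        # If SHA256(c1) == SHA256(c2), presenting c2 here satisfies a binding
        # that the AS created for c1 at the token endpoint.
        return cnf_claim.get("x5t#S256") == x5t_s256(presented_der)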

> 
> MD5 failed quite badly with chosen-prefix collision attacks against 
> certificates (thanks to some X.509 extensions).
> SHA1 has also fallen to a practical collision attack, demonstrated with 
> colliding PDF files (http://shattered.io)
> 
> The reason NIST pushed for development of SHA3 was concern that a preimage 
> attack might eventually be found against the SHA2 family of hash algorithms. 
> 
> While SHA512 may have double the number of bytes, it may not help much against 
> a SHA2 preimage attack. (Some papers suggest that, because of its double word 
> size, SHA512 may be more vulnerable than SHA256 to some attacks)

This is really something where the input of a cryptographer would be welcome. 
As far as I am aware, the collision resistance of SHA-256 is still considered 
at around the 128-bit level, while it is considered at around the 256-bit level 
for SHA-512. Absent a total break of SHA2, it is likely that SHA-512 will 
remain at a higher security level than SHA-256 even if both are weakened by 
cryptanalytic advances. They are based on the same algorithm, with different 
parameters and word/block sizes.

> 
> It is currently believed that SHA256 has 256 bits of second-preimage 
> strength.   That could always turn out to be wrong, as SHA2 has some 
> similarities to SHA1, and yes, post-quantum that is reduced to 128 bits. 
> 
> To have a safe future option we would probably want to go with SHA3-512.   
> However I don’t see that getting much traction in the near term.  

SHA3 is also slower than SHA2 in software.

> 
> Practical things people should do run more along the lines of:
> 1: Put at least 64 bits of entropy into the certificate serial number if 
> using self-signed or a local CA.  Commercial CAs need to do that now.
> 2: Rotate certificates on a regular basis, using a registered JWKS URI
> 
> My concern is that people will see a bigger number and decide it is better if 
> we define it in the spec.  
> We may be getting people to do additional work and increasing token size 
> without a good reason by putting it in the spec directly.

I’m not sure why this is a concern. As previously pointed out, SHA-512 is often 
*faster* than SHA-256, and an extra 32 bytes doesn’t seem worth worrying about.

[snip]

— Neil


Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-04-30 Thread John Bradley
Inline.


> On Apr 30, 2018, at 12:57 PM, Neil Madden wrote:
> 
> Hi John,
> 
>> On 30 Apr 2018, at 15:07, John Bradley wrote:
>> 
>> I lean towards letting new certificate thumbprints be defined someplace else.
>> 
>> With SHA256, it is really second preimage resistance that we care about for 
>> a certificate thumbprint, rather than simple collision resistance.  
> 
> That’s not true if you consider a malicious client. If I can find any pair of 
> certificates c1 and c2 such that SHA256(c1) == SHA256(c2) then I can present 
> c1 to the AS when I request an access token and later present c2 to the 
> protected resource when I use it. I don’t know if there is an actual 
> practical attack based on this, but a successful attack would violate the 
> security goal implied by the draft: that requests made to the protected 
> resource "MUST be made […] using the same certificate that was used for 
> mutual TLS at the token endpoint.”
> 
> NB: this is obviously easier if the client gets to choose its own client_id, 
> as it can find the colliding certificates and then sign up with whatever 
> subject ended up in c1.
> 

Both C1 and C2 need to be valid certificates, so not just any collision will 
do.  
If the client produces C1 and C2 and has the private keys for them, I have a 
hard time seeing what advantage it could get by having colliding certificate 
hashes.

If the AS is trusting a CA, an attacker producing a certificate that matches 
the hash of another certificate, so that the fake certificate seems to have 
been issued by the CA, is an attack that worked on MD5 given some 
predictability.  That is why we now have entropy requirements for certificate 
serial numbers, which reduce known-prefix attacks.

Second-preimage resistance means it is computationally infeasible to find a 
second preimage that has the same output as the first preimage.   The 
second-preimage strength for SHA256 is 201-256 bits and the collision 
resistance strength is 128 bits.  See Appendix A of 
https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-107r1.pdf 
if you want to understand the relationship between message length and 
second-preimage resistance.
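
(A back-of-the-envelope reading of the SP 800-107 Appendix A estimate, which
puts the expected second-preimage strength of an L-bit hash at roughly L - M
bits against a 2^M-block target message:

    L = 256                 # SHA-256 output length in bits
    for M in (1, 20, 55):   # target messages of 2^M 512-bit blocks
        # SP 800-107 Appendix A: expected strength is about L - M bits
        print(f"2^{M}-block message -> ~{L - M}-bit second-preimage strength")
    # 2^55 blocks is SHA-256's maximum input, which gives the 201-bit low end

Certificates are tiny, a handful of blocks, so they sit near the 256-bit end
of that range.)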

RFC 4270 is old but still has some relevant info. 
https://tools.ietf.org/html/rfc4270 

Think of the confirmation method as the out of band integrity check for the 
certificate that is presented in the TLS session.




>> 
>> MD5 failed quite badly with chosen-prefix collision attacks against 
>> certificates (thanks to some X.509 extensions).
>> SHA1 has also fallen to a practical collision attack, demonstrated with 
>> colliding PDF files (http://shattered.io)
>> 
>> The reason NIST pushed for development of SHA3 was concern that a preimage 
>> attack might eventually be found against the SHA2 family of hash algorithms. 
>> 
>> While SHA512 may have double the number of bytes, it may not help much 
>> against a SHA2 preimage attack. (Some papers suggest that, because of its 
>> double word size, SHA512 may be more vulnerable than SHA256 to some attacks)
> 
> This is really something where the input of a cryptographer would be welcome. 
> As far as I am aware, the collision resistance of SHA-256 is still considered 
> at around the 128-bit level, while it is considered at around the 256-bit 
> level for SHA-512. Absent a total break of SHA2, it is likely that SHA-512 
> will remain at a higher security level than SHA-256 even if both are weakened 
> by cryptanalytic advances. They are based on the same algorithm, with 
> different parameters and word/block sizes.
> 
SHA512 uses double words and more rounds, true.  It also has more rounds broken 
by known attacks than SHA256 (https://en.wikipedia.org/wiki/SHA-2). So it is 
slightly more complex than "doubling the output size doubles the strength."

>> 
>> It is currently believed that SHA256 has 256 bits of second-preimage 
>> strength.   That could always turn out to be wrong, as SHA2 has some 
>> similarities to SHA1, and yes, post-quantum that is reduced to 128 bits. 
>> 
>> To have a safe future option we would probably want to go with SHA3-512.   
>> However I don’t see that getting much traction in the near term.  
> 
> SHA3 is also slower than SHA2 in software.
Yes, roughly half the speed in software, but generally faster in hardware.  

I am not necessarily arguing for SHA3; rather, I think this issue is larger than 
this spec, and selecting alternate hashing algorithms for security should be 
separate from this spec.

I am for agility, but I don’t want to accidentally have people doing something 
that is just theatre.

Rotating certificates, and limiting the lifetime of a certificate's validity, 
is as useful as doubling the hash size. 

I don’t think the confirmation hash length is the weakest link.

John

[OAUTH-WG] reference for invalid point attack [-jwt-bcp] ?

2018-04-30 Thread =JeffH

In search of CurveSwap:
Measuring elliptic curve implementations in the wild
Luke Valenta, Nick Sullivan, Antonio Sanso, Nadia Heninger
https://eprint.iacr.org/2018/298.pdf   (see section 7.1)

...is perhaps a suitable reference for section 3.4 of -jwt-bcp ?

https://tools.ietf.org/html/draft-ietf-oauth-jwt-bcp-01#section-3.4


HTH,
=JeffH



Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-04-30 Thread Brian Campbell
On Mon, Apr 30, 2018 at 9:57 AM, Neil Madden wrote:

>
> > On 30 Apr 2018, at 15:07, John Bradley  wrote:
>
> > My concern is that people will see a bigger number and decide it is
> > better if we define it in the spec.
> > We may be getting people to do additional work and increasing token size
> > without a good reason by putting it in the spec directly.
>
> I’m not sure why this is a concern. As previously pointed out, SHA-512 is
> often *faster* than SHA-256, and an extra 32 bytes doesn’t seem worth
> worrying about.
>

Seems like maybe it's worth noting that with JWT, where size can be a
legitimate constraint, those extra bytes end up being base64 encoded
twice.
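
A quick back-of-the-envelope on those sizes (illustrative only):

    import base64

    def b64url_len(n_bytes: int) -> int:
        # length of unpadded base64url text for an n-byte value
        return len(base64.urlsafe_b64encode(b"\x00" * n_bytes).rstrip(b"="))

    sha256_chars = b64url_len(32)  # 43 chars for x5t#S256 in the JWT payload
    sha512_chars = b64url_len(64)  # 86 chars for a hypothetical x5t#S512
    # The JSON payload is then base64url-encoded again when the JWT is
    # serialized, so the difference grows by another factor of about 4/3.
    print(sha256_chars, sha512_chars,
          round((sha512_chars - sha256_chars) * 4 / 3))  # -> 43 86 57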



Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-04-30 Thread John Bradley
Yes that is an issue.  

I think one of the things that kicked this off was the question of whether this 
will make it pointless for people to use algs such as AES-GCM-256 when it is 
perceived that our choice of hash somehow limits overall security to 128 bits.

Let me take another run at this.

Things like block ciphers need to have long term secrecy.  An attacker may 
still get value from decrypting something years down the road.   

Things like signatures typically need to have some non-repudiation property 
that lasts the useful lifetime of the document. That can be years or minutes 
depending on the document. 

In our case we are providing out of band integrity protection for the cert.  We 
could include the cert directly, but it is already being sent as part of TLS.  

In general the lifetime of the key pair used for access tokens will be less 
than the lifetime of the certificate, so it is hard to argue that we need 
stronger security than the cert itself.

We have a way to rotate keys/certs daily if desired with JWKS, and it can 
support self-signed certificates.  The security of this is still limited by the 
security of the TLS cert for the JWKS endpoint, but that is relatively easy to 
update if there is a need and alternate certificate chains become available 
with security better than SHA256. 
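
A sketch of what that rotation can look like: the client publishes its current
self-signed cert at its registered jwks_uri using the standard x5c member
(RFC 7517). The entry below is trimmed for brevity; a real JWK also carries
kty and the public-key members:

    import base64
    import json

    def jwks_with_cert(der_cert: bytes, kid: str) -> str:
        entry = {
            "kid": kid,
            "use": "sig",
            # x5c carries the cert chain, standard (not url-safe) base64;
            # kty and the public-key members are omitted here for brevity
            "x5c": [base64.b64encode(der_cert).decode("ascii")],
        }
        return json.dumps({"keys": [entry]})

    # Publish a fresh cert and kid each rotation; the AS re-fetches the
    # jwks_uri and new tokens get bound to the new cert.
    print(jwks_with_cert(b"...today's DER cert...", "2018-04-30"))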

However, currently most if not all CAB Forum roots are using SHA256 hashes with 
RSA-2048 keys (some, like RSA, still have roots using RSA 1024-bit keys).

I am normally the paranoid one in the crowd, but I would rather pick off some 
of the other issues that are more likely to go wrong first.  

We can point out extensibility for future use, but I am not buying us defining 
a new thumbprint when the one we have is as strong or stronger than the other 
parts of the trust chain.

I can see people choosing to use SHA512, with larger messages and more 
processing, as a way to avoid certificate rollover, and that would be a bad 
tradeoff.

John B.



> On Apr 30, 2018, at 6:19 PM, Brian Campbell wrote:
> 
> [snip]



Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-04-30 Thread Neil Madden
Responses inline again.

On Mon, 30 Apr 2018 at 19:44, John Bradley  wrote:

> Inline.
>
>
> On Apr 30, 2018, at 12:57 PM, Neil Madden wrote:
>
> Hi John,
>
> On 30 Apr 2018, at 15:07, John Bradley  wrote:
>
> I lean towards letting new certificate thumbprints be defined someplace
> else.
>
> With SHA256, it is really second preimage resistance that we care about
> for a certificate thumbprint, rather than simple collision resistance.
>
>
> That’s not true if you consider a malicious client. If I can find any pair
> of certificates c1 and c2 such that SHA256(c1) == SHA256(c2) then I can
> present c1 to the AS when I request an access token and later present c2 to
> the protected resource when I use it. I don’t know if there is an actual
> practical attack based on this, but a successful attack would violate the
> security goal implied by the draft: that requests made to the
> protected resource "MUST be made […] using the same certificate that was
> used for mutual TLS at the token endpoint.”
>
> NB: this is obviously easier if the client gets to choose its own
> client_id, as it can find the colliding certificates and then sign up with
> whatever subject ended up in c1.
>
>
> Both C1 and C2 need to be valid certificates, so not just any collision
> will do.
>

That doesn’t help much. There’s still enough you can vary in a certificate
to generate collisions.

> If the client produces C1 and C2 and has the private keys for them, I have
> a hard time seeing what advantage it could get by having colliding
> certificate hashes.
>

Me too. But if the security goal is proof of possession, then this attack
(assuming practical collisions) would break that goal.


> If the AS is trusting a CA, an attacker producing a certificate that
> matches the hash of another certificate, so that the fake certificate seems
> to have been issued by the CA, is an attack that worked on MD5 given
> some predictability.  That is why we now have entropy requirements for
> certificate serial numbers, which reduce known-prefix attacks.
>

The draft allows for self-signed certificates.

> Second-preimage resistance means it is computationally infeasible to find a
> second preimage that has the same output as the first preimage.   The
> second-preimage strength for SHA256 is 201-256 bits and the collision
> resistance strength is 128 bits.  See Appendix A of
> https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-107r1.pdf
> if you want to understand the relationship between message length and
> second-preimage resistance.
>
> RFC 4270 is old but still has some relevant info.
> https://tools.ietf.org/html/rfc4270
>
> Think of the confirmation method as the out of band integrity check for
> the certificate that is presented in the TLS session.
>

This is all largely irrelevant.

> MD5 failed quite badly with chosen-prefix collision attacks against
> certificates (thanks to some X.509 extensions).
> SHA1 has also fallen to a practical collision attack, demonstrated with
> colliding PDFs (http://shattered.io)
>
> The reason NIST pushed for development of SHA3 was concern that a preimage
> attack might eventually be found against the SHA2 family of hash algorithms.
>
> While SHA512 may have double the number of bytes, it may not help much
> against a SHA2 preimage attack. (Some papers suggest that, because of its
> double word size, SHA512 may be more vulnerable than SHA256 to some attacks)
>
>
> This is really something where the input of a cryptographer would be
> welcome. As far as I am aware, the collision resistance of SHA-256 is still
> considered at around the 128-bit level, while it is considered at around
> the 256-bit level for SHA-512. Absent a total break of SHA2, it is likely
> that SHA-512 will remain at a higher security level than SHA-256 even if
> both are weakened by cryptanalytic advances. They are based on the same
> algorithm, with different parameters and word/block sizes.
>
> SHA512 uses double words and more rounds, true.  It also has more rounds
> broken by known attacks than SHA256 (https://en.wikipedia.org/wiki/SHA-2).
> So it is slightly more complex than "doubling the output size doubles the
> strength."
>

SHA-512 also has more rounds (80) than SHA-256 (64), so still has more
rounds left to go...


>
> It is currently believed that SHA256 has 256 bits of second-preimage
> strength.   That could always turn out to be wrong, as SHA2 has some
> similarities to SHA1, and yes, post-quantum that is reduced to 128 bits.
>
> To have a safe future option we would probably want to go with SHA3-512.
>   However I don’t see that getting much traction in the near term.
>
>
> SHA3 is also slower than SHA2 in software.
>
> Yes, roughly half the speed in software, but generally faster in hardware.
>
> I am not necessarily arguing for SHA3; rather, I think this issue is larger
> than this spec, and selecting alternate hashing algorithms for security
> should be separate from this spec.
>
> I am for agility, but I don’t want to accidentally have people doing
> something that is just theatre.

Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-04-30 Thread John Bradley
We allow for new thumbprint algorithms to be defined and used with this spec.
I think that we all agree that is a good thing.

The question is if we should define them here or as part of JWT/CWT based on 
broader demand.
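
If defined elsewhere, a new method would presumably mirror the existing
x5t#S256 confirmation member. A purely hypothetical sketch (the x5t#S512 name
is illustrative, not registered anywhere):

    import base64
    import hashlib

    def thumbprint(der_cert: bytes, hash_ctor) -> str:
        return base64.urlsafe_b64encode(
            hash_ctor(der_cert).digest()).rstrip(b"=").decode("ascii")

    der = b"...DER-encoded client certificate..."  # placeholder
    cnf_now = {"x5t#S256": thumbprint(der, hashlib.sha256)}    # in the draft
    cnf_maybe = {"x5t#S512": thumbprint(der, hashlib.sha512)}  # hypothetical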

Including them in this document may be a distraction in my opinion.   There is 
no attack against SHA256 with a short duration token/key (days) that is better 
solved by using a long duration token/key (years) with a longer hash.

That said, it wouldn't bother me.  I just think it will distract people in the 
wrong direction.

John B.

> On Apr 30, 2018, at 7:23 PM, Neil Madden wrote:
> 
> [snip]