JOSE and many other specs have allowed algorithms to be specified at multiple
security levels: a baseline 128-bit level, and then usually 192- and 256-bit
levels too. It seems odd that a draft that is ostensibly for high security
assurance environments would choose to only specify the lowest acceptable
security level, especially when the 256-bit level has essentially negligible
overhead. (OK, ~60 bytes additional overhead in a JWT - I’d be surprised if
that was a deal breaker though).
Still, if the consensus of the WG is that this is not worth it, then I don’t
want to delay the draft any further. I can always submit a two-line RFC in future
to add a SHA-512 confirmation method.
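For concreteness, here is a rough sketch of the size difference, using a
hypothetical SHA-512 analogue of the x5t#S256 thumbprint (the member name
x5t#S512 is just my assumption, not anything the draft defines):

```python
import base64
import hashlib

def thumbprint(cert_der: bytes, algo) -> str:
    # base64url-encoded hash of the DER certificate, padding stripped
    return base64.urlsafe_b64encode(algo(cert_der).digest()).rstrip(b"=").decode()

cert_der = b"stand-in for a DER-encoded certificate"
t256 = thumbprint(cert_der, hashlib.sha256)  # the draft's x5t#S256
t512 = thumbprint(cert_der, hashlib.sha512)  # hypothetical x5t#S512
print(len(t256), len(t512))  # 43 86
```

The raw thumbprint grows from 43 to 86 characters; once the claims set is
itself base64url-encoded inside a JWT, those extra 43 bytes become roughly 57
characters on the wire, which is presumably where the ~60-byte figure above
comes from.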
— Neil
> On 30 Apr 2018, at 23:58, John Bradley wrote:
>
> We allow for new thumbprint algorithms to be defined and used with this spec.
> I think that we all agree that is a good thing.
>
> The question is if we should define them here or as part of JWT/CWT based on
> broader demand.
>
> Including them in this document may be a distraction in my opinion. There
> is no attack against SHA256 with a short-duration token/key (days) that is
> better solved by using a long-duration token/key (years) with a longer hash.
>
> That said, it wouldn't kill me. I just think it will distract people in the
> wrong direction.
>
> John B.
>
>> On Apr 30, 2018, at 7:23 PM, Neil Madden wrote:
>>
>> Responses inline again.
>>
>> On Mon, 30 Apr 2018 at 19:44, John Bradley wrote:
>> Inline.
>>
>>
>>> On Apr 30, 2018, at 12:57 PM, Neil Madden wrote:
>>>
>>> Hi John,
>>>
>>> On 30 Apr 2018, at 15:07, John Bradley wrote:
>>> I lean towards letting new certificate thumbprints be defined someplace
>>> else.
>>> With SHA256, it is really second preimage resistance that we care about
>>> for a certificate thumbprint, rather than simple collision resistance.
>>>
>>> That’s not true if you consider a malicious client. If I can find any pair
>>> of certificates c1 and c2 such that SHA256(c1) == SHA256(c2) then I can
>>> present c1 to the AS when I request an access token and later present c2 to
>>> the protected resource when I use it. I don’t know if there is an actual
>>> practical attack based on this, but a successful attack would violate the
>>> security goal implied by the draft: that requests made to the
>>> protected resource "MUST be made […] using the same certificate that was
>>> used for mutual TLS at the token endpoint.”
>>>
>>> NB: this is obviously easier if the client gets to choose its own
>>> client_id, as it can find the colliding certificates and then sign up with
>>> whatever subject ended up in c1.
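To spell out the check that a collision would defeat, here is a sketch; the
colliding pair is of course hypothetical, since no practical SHA-256 collision
is known, and the stand-in bytes below are purely illustrative:

```python
import base64
import hashlib

def x5t_s256(cert_der: bytes) -> str:
    # base64url-encoded SHA-256 hash of the DER certificate, i.e. the
    # value bound into the token's confirmation claim
    return base64.urlsafe_b64encode(
        hashlib.sha256(cert_der).digest()).rstrip(b"=").decode()

# c1 and c2 stand in for a colliding certificate pair
c1 = b"stand-in for the certificate shown to the token endpoint"
c2 = c1  # pretend SHA256(c2) == SHA256(c1) while c2 != c1

cnf_thumbprint = x5t_s256(c1)           # the AS binds the token to c1
assert x5t_s256(c2) == cnf_thumbprint   # the RS check passes for c2 too
```

The protected resource only ever compares thumbprints, so any colliding pair
lets c2 "inherit" a token issued against c1.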
>>>
>>
>> Both C1 and C2 need to be valid certificates, so not just any collision will
>> do.
>>
>> That doesn’t help much. There’s still enough you can vary in a certificate
>> to generate collisions.
>>
>> If the client produces C1 and C2 and has the private keys for them, I have a
>> hard time seeing what advantage it could get by having colliding certificate
>> hashes.
>>
>> Me too. But if the security goal is proof of possession, then this attack
>> (assuming practical collisions) would break that goal.
>>
>>
>> If the AS is trusting a CA, an attacker producing a certificate that
>> matches the hash of another certificate, so that the fake certificate
>> appears to have been issued by the CA, is an attack that worked against
>> MD5 given some predictability. That is why we now have entropy
>> requirements for certificate serial numbers, which reduce known-prefix
>> attacks.
>>
>> The draft allows for self-signed certificates.
>>
>> Second-preimage resistance means it is computationally infeasible to find a
>> second preimage that hashes to the same output as the first preimage. The
>> second-preimage strength of SHA256 is 201–256 bits, while its
>> collision-resistance strength is 128 bits. See Appendix A of
>> https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-107r1.pdf
>> if you want to understand the relationship between message length and
>> second-preimage resistance.
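For reference, the Appendix A relationship works out roughly as below; this is
a sketch using the approximation that an L-bit hash applied to a message of
2^k blocks offers about L − k bits of second-preimage strength:

```python
import math

def second_preimage_strength(output_bits: int, message_bytes: int,
                             block_bytes: int = 64) -> float:
    # SP 800-107 Appendix A approximation: for a message of 2^k blocks,
    # the second-preimage strength of an L-bit hash is roughly L - k bits
    blocks = max(1, math.ceil(message_bytes / block_bytes))
    return output_bits - math.log2(blocks)

# a typical ~2 KB DER certificate hashed with SHA-256 (64-byte blocks):
print(second_preimage_strength(256, 2048))  # 251.0
```

The quoted 201-bit floor corresponds to the longest message SHA256 permits
(about 2^55 blocks); real certificates sit near the 256-bit end of the range.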
>>
>> RFC 4270 is old but still has some relevant info.
>> https://tools.ietf.org/html/rfc4270
>>
>> Think of the confirmation method as the out of band integrity check for the
>> certificate that is presented in the TLS session.
>>
>> This is all largely irrelevant.
>>
>> MD5 failed quite badly with chosen-prefix collision attacks against
>> certificates (thanks to some X.509 extensions).
>> SHA1 has also been shown to be vulnerable to a collision attack
>> demonstrated with PDFs (http://shattered.io).
>> The reason NIST pushed for development of SHA3 was concern that a preimage
>> attack might eventually be found against the SHA2 family of hash
>> algorithms.
>> While SHA512 may have double the number of bytes, it may not help much
>> against a SHA2 preimage attack. (Some papers suggest that, given the
>> double word size of SHA512, it may be more vulnerable.)