Hi, 

I'm failing to understand why binding the proof to the access token ensures 
freshness of the proof. I would rather think that if the client is forced to 
create proofs with a reasonably short lifetime, the chances for replay could 
be reduced. 

Besides that, as far as I remember, the primary replay countermeasure is the 
inclusion of the endpoint URL and HTTP method in the proof, since it reduces 
the attack surface to a particular URL. So in the context of freshness, we are 
talking about using the same proof with the same URL again. 
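For illustration, here is a rough sketch (mine, not from the draft's text) of what the proof claims and a server-side freshness check could look like. The claim names htm/htu/iat/jti follow the draft; the "ath" access token hash is the hypothetical option-#2b binding under discussion, and the helper names are made up:

```python
import base64
import hashlib
import secrets
import time


def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JOSE does."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")


def make_proof_payload(method: str, url: str, access_token=None) -> dict:
    """Build the claims of a DPoP proof JWT (JWS signing omitted here).

    htm/htu/iat/jti are the claims from the draft; "ath" is the
    hypothetical SHA-256 access token hash from option #2b.
    """
    payload = {
        "htm": method,             # HTTP method the proof is bound to
        "htu": url,                # endpoint URL the proof is bound to
        "iat": int(time.time()),   # issued-at, basis for the freshness window
        "jti": secrets.token_urlsafe(16),  # unique id for replay detection
    }
    if access_token is not None:
        # Option #2b: bind the proof to this access token via SHA-256.
        payload["ath"] = b64url(hashlib.sha256(access_token.encode("ascii")).digest())
    return payload


def proof_is_fresh(payload: dict, max_age_seconds: int = 60) -> bool:
    """Server-side check: reject proofs outside a short acceptance window."""
    age = time.time() - payload["iat"]
    return 0 <= age <= max_age_seconds
```

A short acceptance window on iat is what "reasonably short lifetime" amounts to in practice; the jti then lets the server reject exact replays inside that window.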

best regards,
Torsten. 

> Am 03.12.2020 um 10:20 schrieb Filip Skokan <panva...@gmail.com>:
> 
> Hi Brian, everyone,
> 
> While the attack vector allows direct use, there is the option where a 
> smarter attacker will not abuse the gained artifacts straight away. Think of a 
> public-client browser scenario with the non-extractable private key stored in 
> IndexedDB (really the only place to persist them), where the attacker wouldn't 
> use the tokens but would instead exfiltrate them, together with a bunch of 
> pre-generated DPoP proofs. They'd get the refresh token and a bunch of DPoP 
> proofs for both the RS and AS. With those they'd be able to get a fresh AT and 
> use it with the pre-generated proofs after the end-user leaves the site. No 
> available protection (e.g. RT already rotated) would be able to kick in until 
> the end-user opens the page again.
> 
> OTOH with a hash of the AT in the Proof only direct use remains.
> 
> If what I describe above is something we don't want to deal with, because 
> direct use already allows access to protected resources, then it's sufficiently 
> okay as is (option #1). However, if this scenario, one allowing prolonged 
> access to protected resources, is not acceptable, it's option #2.
> 
> Ad #2a vs #2b vs #2c. My pre-emptive answer is #2a, simply because we already 
> have the tools needed to generate and validate these hashes. But thinking 
> further about it, it would feel awkward if this JWS-algorithm-driven at_hash 
> digest selection weren't stretched to the confirmations too. When these are 
> placed in a JWT access token, cool - we can do that, but when they are put 
> in a basic token introspection response it's unfortunately not an option. So, 
> #2b (just use SHA-256, like the confirmations do).
> 
> Best,
> Filip
> 
> 
> On Wed, 2 Dec 2020 at 21:50, Brian Campbell 
> <bcampbell=40pingidentity....@dmarc.ietf.org> wrote:
> There were a few items discussed somewhat during the recent interim that I 
> committed to bringing back to the list. The slide below (also available as 
> slide #17 from the interim presentation) is the first one of them, which is 
> difficult to summarize but kinda boils down to how much assurance there is 
> that the DPoP proof was 'freshly' created and that can dovetail into the 
> question of whether the token is covered by the signature of the proof. 
> There are many directions a "resolution" here could go but my sense of the 
> room during the meeting was that the contending options were:
>       • 1) It's sufficiently okay as it is
>       • 2) Include a hash of the access token in the DPoP proof (when an access 
> token is present)
> 
> Going with #2 would mean the draft would also have to define how the hashing 
> is done and deal with or at least speak to algorithm agility. Options (that I 
> can think of) include:
>       • 2a) Use the at_hash claim defined in OIDC core 
> https://openid.net/specs/openid-connect-core-1_0.html#CodeIDToken. Using 
> something that already exists is appealing. But its hash alg selection 
> routine can be a bit of a pain. And the algorithm agility based on the 
> signature that it's supposed to provide hasn't worked out as well as hoped in 
> practice for "new" JWS signatures 
> https://bitbucket.org/openid/connect/issues/1125/_hash-algorithm-for-eddsa-id-tokens
>       • 2b) Define a new claim ("ah", "ath", "atd", "ad" or something like 
> that maybe) and just use SHA-256. Explain why it's good enough for now and 
> the foreseeable future. Also include some text about introducing a new claim 
> in the future if/when SHA-256 proves to be insufficient. Note that this is 
> effectively the same as how the confirmation claim value is currently defined 
> in this document and in RFC8705.
>       • 2c) Define a new claim with its own hash algorithm agility scheme 
> (likely similar to how the Digest header value or Subresource Integrity 
> string is done).
> 
> I'm requesting that interested WG participants indicate their preference for 
> #1 or #2. And, if the latter, among a, b, and c. 
> 
> I also acknowledge that an ECDH approach could/would ameliorate the issues in 
> a fundamentally different way. But that would be a distinct protocol. If 
> there's interest in pursuing the ECDH idea, I'm certainly open to it and even 
> willing to work on it. But as a separate effort and not at the expense of 
> derailing DPoP in its general current form. 
> <Slide17.jpeg>
> 
> 
> _______________________________________________
> OAuth mailing list
> OAuth@ietf.org
> https://www.ietf.org/mailman/listinfo/oauth

