Hi Neil, thanks for your comprehensive answers. Please find my comments inline.
best regards,
Torsten.

> Am 12.12.2020 um 21:11 schrieb Neil Madden <neil.mad...@forgerock.com>:
> 
> Good questions! Answers inline:
> 
>>> On 12 Dec 2020, at 10:07, Torsten Lodderstedt <tors...@lodderstedt.net> wrote:
>>> 
>> 
>> Thanks for sharing, Neil!
>> 
>> I've got some questions:
>> Note: I assume the tokens you are referring to in your article are OAuth access tokens.
> 
> No, probably not. Just auth tokens more generically.
> 
>> - Carrying tokens in URLs is considered bad practice by the Security BCP and OAuth 2.1 due to leakage via Referer headers and so on. Why isn't this an issue with your approach?
> 
> This is generally safe advice, but it is often over-cautious for three reasons:
> 
> 1. Referer headers (and document.referrer) apply when embedding/linking resources in HTML. But when we're talking about browser-based apps (e.g. SPAs), that usually means JavaScript calling some backend API that returns JSON or some other data format. These data formats don't have links or embedded resources (as far as the browser is concerned), so they don't leak Referer headers in the same way. When the app loads a resource from a URI in a JSON response, the Referer header will contain the URI of the app itself (most likely a generic HTML template page), not the capability URI from which the JSON was loaded. Similar arguments apply to browser history and other typical ways that URIs leak.
> 
> 2. You can now use the Referrer-Policy header [1] and rel="noopener noreferrer" to opt out of this leakage, and browsers are moving to doing this by default for cross-origin requests/embeds. (This is already enabled by default in Safari.)
> 
> 3. When you do want to use capability URIs for top-level navigation, there are places in the URI where you can put a token that is never included in Referer headers or document.referrer, and never sent to the server at all - such as the fragment. JavaScript can then extract the token from the fragment (and then wipe it) and send it to the server in an Authorization header or whatever. See [2] for more details and alternatives.
> 
> [1]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referrer-Policy
> [2]: https://neilmadden.blog/2019/01/16/can-you-ever-safely-include-credentials-in-a-url/
> 
>> - Generating (self-contained) or using (handles) per-URL access tokens might be rather expensive. Can you sketch out how you want to cope with that challenge?
> 
> A decent HMAC implementation takes about 1-2 microseconds for the typical size of token we're talking about.

Generating a self-contained access token typically requires querying claim values from at least one data source. That might take more time. For handle-based tokens/token introspection, one needs to add the time it takes to obtain the token data, which requires an HTTPS round trip. That could be even more time-consuming.

> 
>> - Per-URL access tokens are a very rigorous form of audience restriction. How do you want to signal the audience to the AS?
> 
> As I said, this isn't OAuth, but for example you can already do this with the macaroon access tokens in ForgeRock AM 7.0 - issue a single access token and then make copies with specific audience restrictions added as caveats, as discussed in [3]. Such audience restrictions are then returned in the token introspection response and the RS can enforce them.
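(For illustration: a minimal sketch of the RS-side enforcement described above, i.e. introspecting the token and checking the returned audience. The introspection endpoint, the RS credentials and the RS identifier below are assumptions made up for the example, not taken from AM.)

// Hypothetical RS-side check (RFC 7662 token introspection): accept the request
// only if the token is active and its audience covers this resource server.
async function tokenAllowsThisRS(token: string): Promise<boolean> {
  const resp = await fetch("https://as.example.com/oauth2/introspect", {    // assumed AS endpoint
    method: "POST",
    headers: {
      "Content-Type": "application/x-www-form-urlencoded",
      "Authorization": "Basic " + btoa("rs-client-id:rs-client-secret"),     // assumed RS credentials
    },
    body: new URLSearchParams({ token }),
  });
  const info = await resp.json();
  if (!info.active) return false;
  // "aud" may come back as a single string or as an array of strings.
  const aud = Array.isArray(info.aud) ? info.aud : [info.aud];
  return aud.includes("https://api.example.com/foo");                        // this RS's identifier (assumed)
}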
> 
> My comment in the article about ideas for future OAuth is really just that the token endpoint should be able to issue multiple fine-grained access tokens in one go, each associated with a particular endpoint (or endpoints). You could either return these as separate items like:
> 
> "access_tokens": [
>   { "token": "abc...",
>     "aud": "https://api.example.com/foo" },
>   { "token": "def...",
>     "aud": "https://api.example.com/bar" }
> ]

I like the idea (and have liked it for a long time: https://mailarchive.ietf.org/arch/msg/oauth/JcKGhoKy2S_2gAQ2ilMxCPWbgPw/). Resource indicators or authorization_details (with locations) could basically be used for that purpose, but OAuth 2 lacks support for issuing multiple tokens from the token endpoint.

> 
> Or just go ahead and combine those into capability URIs. (I think I already mentioned this a long time ago when GNAP was first being discussed.)
> 
> Speaking even more wishfully, what I would really love to see is a new URL scheme for these, something like:
> 
> bearer://<token>@api.example.com/foo
> 
> Which is equivalent to an HTTPS link, but the browser knows about this format and when clicking on/accessing such a URI it sends the token as an Authorization: Bearer header automatically. Ideally the browser would also not allow the token to be accessible from the DOM.

Interesting. That would elevate browser support to the level of Basic authentication.

> 
> Even without browser support I think such a URI scheme would be useful to allow GitHub and others to more easily recognise capability URIs checked into public git repos and perhaps provide a way to automatically revoke them (.well-known/token-revocation perhaps).
> 
> [3]: https://neilmadden.blog/2020/07/29/least-privilege-with-less-effort-macaroon-access-tokens-in-am-7-0/
> 
> — Neil
> 
>> 
>> best regards,
>> Torsten.
>> 
>>>> Am 12.12.2020 um 08:26 schrieb Neil Madden <neil.mad...@forgerock.com>:
>>>> 
>>> 
>>> Not directly related to DPoP or OAuth, but I wrote some notes to help recovering XSS Nihilists: https://neilmadden.blog/2020/12/10/xss-doesnt-have-to-be-game-over/
>>> 
>>> — Neil
>>> 
>>>>> On 12 Dec 2020, at 00:02, Brian Campbell <bcampbell=40pingidentity....@dmarc.ietf.org> wrote:
>>>>> 
>>>> 
>>>> I think that puts Jim in the XSS Nihilism camp :)
>>>> 
>>>> Implicit type flows are being deprecated/discouraged. But keeping tokens out of browsers doesn't seem likely. There is some mention of CSP in https://tools.ietf.org/html/draft-ietf-oauth-browser-based-apps-07#section-9.7
>>>> 
>>>>> On Wed, Dec 9, 2020 at 4:10 PM Jim Manico <j...@manicode.com> wrote:
>>>>> The basic theme from the web attacker community is:
>>>>> 
>>>>> 1) XSS is a game-over event for web clients. XSS can steal or abuse (request forgery) tokens, and more.
>>>>> 
>>>>> 2) Even if you prevent stolen tokens from being used outside of a web client, XSS still allows the attacker to force a user to make any request in a fraudulent way, abusing browser-based tokens as a form of request forgery.
>>>>> 
>>>>> 3) There are advanced measures to stop a token from being stolen from a web client, like HttpOnly cookies and, to a lesser degree, JS closures and Web Workers [a sketch follows after this list].
>>>>> 
>>>>> 4) However, these measures to protect cookies are mostly moot. Attackers can just force clients to make fraudulent requests.
>>>>> 
>>>>> 5) Many recommend the BFF pattern to hide tokens on the back end, but still, request forgery via XSS allows all kinds of abuse.
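(A minimal sketch of the closure/worker idea in point 3 above: the token lives only inside a closure, so injected scripts cannot read it directly - though, as point 4 says, they can still call the wrapper and ride the session. The helper names are made up for illustration.)

// Keep the access token out of the DOM, globals and storage by closing over it.
function makeApiClient(token: string) {
  return async (path: string, init: RequestInit = {}): Promise<Response> => {
    const headers = new Headers(init.headers);
    headers.set("Authorization", `Bearer ${token}`);   // attached here, never exposed to callers
    return fetch(path, { ...init, headers });
  };
}

// Usage (obtainToken() is a placeholder for however the app gets its token):
// const apiFetch = makeApiClient(await obtainToken());
// const profile = await apiFetch("/api/me");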
>>>>> 
>>>>> XSS is game over no matter how you slice it.
>>>>> 
>>>>> Crypto solutions do not help. Perhaps the world of OAuth can start suggesting that web clients use CSP 3.0 in specific ways, if you still plan to support Implicit type flows or tokens in browsers?
>>>>> 
>>>>> Respectfully,
>>>>> 
>>>>> - Jim
>>>>> 
>>>>> On 12/9/20 12:57 PM, Brian Campbell wrote:
>>>>>> Thanks Philippe, I very much concur with your line of reasoning and the important considerations. The scenario I was thinking of is: a browser-based client where XSS is used to exfiltrate the refresh token along with pre-computed proofs that would allow for the RT to be exchanged for new access tokens, and also pre-computed proofs that would work with those access tokens for resource access. With the pre-computed proofs, that would allow prolonged (as long as the RT is valid) access to protected resources even when the victim is offline. Is that a concrete attack scenario? I mean, kind of. It's pretty convoluted/complex. And while an access token hash would rein it in somewhat (ATs obtained from the stolen RT wouldn't be usable), it's hard to say if the cost is worth the benefit.
>>>>>> 
>>>>>> On Tue, Dec 8, 2020 at 11:47 PM Philippe De Ryck <phili...@pragmaticwebsecurity.com> wrote:
>>>>>>> Yeah, browser-based apps are pure fun, aren't they? :)
>>>>>>> 
>>>>>>> The reason I covered a couple of (pessimistic) XSS scenarios is that the discussion started with an assumption that the attacker has already successfully exploited an XSS vulnerability. I pointed out how, at that point, fine-tuning DPoP proof contents will have little to no effect in stopping an attack. I believe it is important to make this very clear, to avoid people turning to DPoP as a security mechanism for browser-based applications.
>>>>>>> 
>>>>>>> Specifically on your question of including the hash in the proof, I think these considerations are important:
>>>>>>> 
>>>>>>> 1. Does the inclusion of the AT hash stop a concrete attack scenario?
>>>>>>> 2. Is the "cost" (implementation, getting it right, …) worth the benefits?
>>>>>>> 
>>>>>>> Here's my view on these considerations (specifically for browser-based apps, not for other types of applications):
>>>>>>> 
>>>>>>> 1. The proof precomputation attack is already quite complex, and short access token lifetimes already reduce the window of attack. If the attacker can steal a future AT, they could also precompute new proofs then.
>>>>>>> 2. For browser-based apps, it seems that doing this complicates the implementation without adding much benefit. Of course, libraries could handle this, which significantly reduces the cost.
>>>>>>> 
>>>>>>> Note that these comments are specifically about complicating the spec and implementation. DPoP's capability of using sender-constrained access tokens is still useful to counter various other scenarios (e.g., middleboxes or APIs abusing access tokens). If other applications would significantly benefit from having the hash in the proof, I'm all for it.
>>>>>>> 
>>>>>>> On a final note, I would be happy to help clear up the details on web-based threats and defenses if necessary.
>>>>>>> 
>>>>>>> —
>>>>>>> Pragmatic Web Security
>>>>>>> Security for developers
>>>>>>> https://pragmaticwebsecurity.com/
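(To make the "hash of the access token in the proof" idea concrete: a rough sketch of how a client could compute such a hash with Web Crypto - a base64url-encoded SHA-256 of the AT. The claim name "ath" and its placement in the proof payload are assumptions for illustration, not text from the draft.)

// Sketch: hash the access token for inclusion in a DPoP proof, so that a
// pre-computed proof is only usable together with this specific access token.
async function accessTokenHash(accessToken: string): Promise<string> {
  const digest = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(accessToken));
  // base64url encoding without padding
  return btoa(String.fromCharCode(...Array.from(new Uint8Array(digest))))
    .replace(/\+/g, "-")
    .replace(/\//g, "_")
    .replace(/=+$/, "");
}

// The resulting value would go into the proof JWT's payload (e.g. as a
// hypothetical "ath" claim) next to htm, htu, iat and jti before signing.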
>>>>>>> 
>>>>>>>> On 8 Dec 2020, at 22:47, Brian Campbell <bcampb...@pingidentity.com> wrote:
>>>>>>>> 
>>>>>>>> Daniel recently added some text to the working copy of the draft with https://github.com/danielfett/draft-dpop/commit/f4b42058 that I think aims to better convey the "nutshell: XSS = Game over" sentiment and maybe dissuade folks from looking to DPoP as a cure-all for browser-based applications. Admittedly a lot of the initial impetus behind producing the draft in the first place was born out of discussions around browser-based apps. But it's neither specific to browser-based apps nor a panacea for them. I hope the language in the document and how it's recently been presented is reflective of that reality.
>>>>>>>> 
>>>>>>>> The more specific discussions/recommendations around in-browser apps are valuable (if somewhat over my head) but might be more appropriate in the OAuth 2.0 for Browser-Based Apps draft.
>>>>>>>> 
>>>>>>>> With respect to the contents of the DPoP draft, I am still keen to try and flesh out some consensus around the question posed at the start of this thread, which is effectively whether or not to include a hash of the access token in the proof. Acknowledging that "XSS = Game over" does sort of evoke a tendency to not even bother with such incremental protections (what I've tried to humorously coin as "XSS Nihilism", with no success). And as such, I do think that leaving it how it is (no AT hash in the proof) is not unreasonable. But, as Filip previously articulated, including the AT hash in the proof would prevent potentially prolonged access to protected resources even when the victim is offline. And that seems maybe worthwhile to have in the protocol, given that it's not a huge change to the spec. But it's a trade-off either way and I'm personally on the fence about it.
>>>>>>>> 
>>>>>>>> Including an RT hash in the proof seems more niche. Best I can tell, it would guard against prolonged offline access to protected resources when access tokens are bearer tokens and the RT is DPoP-bound and also gets rotated. The trade-off there seems less worth it (I think an RT hash would be more awkward in the protocol too).
>>>>>>>> 
>>>>>>>> On Fri, Dec 4, 2020 at 5:40 AM Philippe De Ryck <phili...@pragmaticwebsecurity.com> wrote:
>>>>>>>>> 
>>>>>>>>>> The suggestion to use a web worker to ensure that proofs cannot be pre-computed is a good one I think. (You could also use a sandboxed iframe for a separate sub/sibling-domain - dpop.example.com.)
>>>>>>>>> 
>>>>>>>>> An iframe with a different origin would also work (not really sandboxing, as that implies the use of the sandbox attribute to enforce behavioral restrictions). The downside of an iframe is the need to host additional HTML, vs a script file for the worker, but the effect is indeed the same.
>>>>>>>>> 
>>>>>>>>>> For scenario 4, I think this only works if the attacker can trick/spoof the AS into using their redirect_uri?
>>>>>>>>>> Otherwise the authorization code will go to the legitimate app, which will reject it due to mismatched state/PKCE. Or are you thinking of XSS on the redirect_uri itself? I think probably a good practice is that the target of a redirect_uri should be a very minimal and locked-down page to avoid this kind of possibility. (Again, using a separate sub-domain to handle tokens and DPoP seems like a good idea.)
>>>>>>>>> 
>>>>>>>>> My original thought was to use a silent flow with Web Messaging. The scenario would go as follows:
>>>>>>>>> 
>>>>>>>>> 1. Set up a Web Messaging listener to receive the incoming code
>>>>>>>>> 2. Create a hidden iframe with the DOM APIs
>>>>>>>>> 3. Create an authorization request such as "/authorize?response_type=code&client_id=...&redirect_uri=https%3A%2F%2Fexample.com&state=...&code_challenge=7-ffnU1EzHtMfxOAdlkp_WixnAM_z9tMh3JxgjazXAk&code_challenge_method=S256&prompt=none&response_mode=web_message"
>>>>>>>>> 4. Load this URL in the iframe, and wait for the result
>>>>>>>>> 5. Retrieve the code in the listener, and use PKCE (+ DPoP if needed) to exchange it for tokens
>>>>>>>>> 
>>>>>>>>> This puts the attacker in full control over every aspect of the flow, so there is no need to manipulate any of the parameters.
>>>>>>>>> 
>>>>>>>>> After your comment, I also believe an attacker can run the same scenario without the "response_mode=web_message". This would go as follows:
>>>>>>>>> 
>>>>>>>>> 1. Create a hidden iframe with the DOM APIs
>>>>>>>>> 2. Set up polling to read the URL (this will be possible for same-origin pages, not for cross-origin pages)
>>>>>>>>> 3. Create an authorization request such as "/authorize?response_type=code&client_id=...&redirect_uri=https%3A%2F%2Fexample.com&state=...&code_challenge=7-ffnU1EzHtMfxOAdlkp_WixnAM_z9tMh3JxgjazXAk&code_challenge_method=S256"
>>>>>>>>> 4. Load this URL in the iframe, and keep polling
>>>>>>>>> 5. Detect the redirect back to the application with the code in the URL, retrieve the code, and use PKCE (+ DPoP if needed) to exchange it for tokens
>>>>>>>>> 
>>>>>>>>> In step 5, the application is likely to also try to exchange the code. This will fail due to a mismatched PKCE verifier. While noisy, I don't think it affects the scenario.
>>>>>>>>> 
>>>>>>>>>> IMO, the online attack scenario (i.e., proxying malicious requests through the victim's browser) is quite appealing to an attacker, despite the apparent inconvenience:
>>>>>>>>>> 
>>>>>>>>>> - the victim's browser may be inside a corporate firewall or VPN, allowing the attacker to effectively bypass these restrictions
>>>>>>>>>> - the attacker's traffic is mixed in with the user's own requests, making them harder to distinguish or to block
>>>>>>>>>> 
>>>>>>>>>> Overall, DPoP can only protect against XSS to the same level as HttpOnly cookies. This is not nothing, but it means it only prevents relatively naive attacks. Given the association of public key signatures with strong authentication, people may have overinflated expectations if DPoP is pitched as an XSS defence.
>>>>>>>>> 
>>>>>>>>> Yes, in the cookie world this is known as "Session Riding". Having the worker for token isolation would make it possible to enforce a coarse-grained policy on outgoing requests to prevent total abuse of the AT.
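(A rough sketch of what such a coarse-grained policy in a token-holding Web Worker could look like. The message shape, the allow-list and the paths are made up for illustration; the point is only that the token never leaves the worker and every outgoing request is checked against a policy first.)

// token-worker.ts - hypothetical dedicated worker that holds the access token.
// The page never sees the token; it posts { type, id, method, path, body } messages.
let accessToken: string | null = null;

const ALLOWED = [
  { method: "GET", prefix: "/api/" },
  { method: "POST", prefix: "/api/messages" },
];

addEventListener("message", async (e: MessageEvent) => {
  const msg = e.data;
  if (msg.type === "setToken") {
    accessToken = msg.token;          // delivered once, e.g. right after the code exchange
    return;
  }
  if (msg.type !== "apiRequest") return;
  const permitted = ALLOWED.some(r => r.method === msg.method && msg.path.startsWith(r.prefix));
  if (!permitted || !accessToken) {
    postMessage({ id: msg.id, error: "blocked by policy" });
    return;
  }
  const resp = await fetch(msg.path, {
    method: msg.method,
    headers: { "Authorization": `Bearer ${accessToken}` },
    body: msg.body,
  });
  postMessage({ id: msg.id, status: resp.status, body: await resp.text() });
});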
>>>>>>>>> 
>>>>>>>>> My main concern here is the effort of doing DPoP in a browser versus the limited gains. It may also give a false sense of security.
>>>>>>>>> 
>>>>>>>>> With all this said, I believe that the AS can lock down its configuration to reduce these attack vectors. A few initial ideas:
>>>>>>>>> 
>>>>>>>>> 1. Disable silent flows for SPAs using RT rotation
>>>>>>>>> 2. Use the sec-fetch headers to detect and reject non-silent iframe-based flows
>>>>>>>>> 
>>>>>>>>> For example, an OAuth 2.0 flow in an iframe in Brave/Chrome carries these headers:
>>>>>>>>> sec-fetch-dest: iframe
>>>>>>>>> sec-fetch-mode: navigate
>>>>>>>>> sec-fetch-site: cross-site
>>>>>>>>> sec-fetch-user: ?1
>>>>>>>>> 
>>>>>>>>> Philippe
>>>>> 
>>>>> --
>>>>> Jim Manico
>>>>> Manicode Security
>>>>> https://www.manicode.com
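(A sketch of Philippe's second idea above: an AS-side guard on the authorization endpoint that uses the Fetch Metadata headers to reject framed, non-silent flows. The policy below - only tolerate framed requests when prompt=none - is just one possible choice, and the function name is made up.)

// Decide whether an incoming /authorize request should proceed, based on the
// Fetch Metadata request headers sent by the browser.
function allowAuthorizationRequest(
  headers: Record<string, string | undefined>,
  query: URLSearchParams,
): boolean {
  const dest = headers["sec-fetch-dest"];     // "document" for top-level navigations, "iframe" when framed
  const silent = query.get("prompt") === "none";
  if (dest === "iframe" || dest === "embed" || dest === "object") {
    return silent;                            // framed flows only tolerated when explicitly silent (if at all)
  }
  return true;                                // regular top-level navigations proceed as usual
}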
_______________________________________________
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth