Annabelle,

> Am 27.11.2019 um 02:46 schrieb Richard Backman, Annabelle 
> <richa...@amazon.com>:
> 
> Torsten,
> 
> I'm not tracking how cookies are relevant to the discussion.

I’m still trying to understand why you and others argue that mTLS cannot be used in 
public cloud deployments (and hence the focus on application-level PoP).

Session cookies serve the same purpose for web apps as access tokens do for APIs, 
and there are far more web apps than APIs. I use the analogy to illustrate that 
either there are security issues with cloud deployments of web apps, or the 
techniques used to secure web apps are good enough for APIs as well.

Here are the two main arguments and my conclusions/questions:  

1) mTLS is not end-to-end: although that’s true from a connection perspective, 
there are established solutions for securing the last hop(s) between the 
TLS-terminating proxy and the service (private network, VPN, TLS). If that works 
and is considered secure enough for (session) cookies, it should be good enough 
for access tokens as well.

2) TLS-terminating proxies do not forward certificate data: if the service itself 
terminates TLS, this is feasible; we do it for our public-cloud-hosted 
mTLS-protected APIs. If TLS termination is provided by a component run by the 
cloud provider, the question is: can this component forward the client 
certificate to the service (see the sketch below)? If not, web apps that use 
client certificates for authentication cannot be supported by the cloud provider 
out of the box either. Any insights?
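
To make the question concrete: with certificate-bound access tokens (RFC 8705), 
the service behind the proxy only needs the client certificate (or its SHA-256 
thumbprint) in order to compare it with the token’s cnf/"x5t#S256" claim. Here is 
a rough Python sketch of that check; how the certificate reaches the service 
(e.g. via a header injected by the proxy) is an illustrative assumption, not any 
particular product’s API:

# Sketch of the check an RS behind a TLS-terminating proxy could perform for an
# RFC 8705 certificate-bound access token: compare the token's cnf/"x5t#S256"
# claim against the client certificate the proxy forwarded. How the PEM string
# gets here (header, metadata, ...) is exactly the open question above.
import base64
import hashlib

from cryptography import x509  # pip install cryptography
from cryptography.hazmat.primitives.serialization import Encoding


def cert_thumbprint_s256(pem_cert: str) -> str:
    """base64url(SHA-256(DER cert)) -- the value carried in the "x5t#S256" member."""
    cert = x509.load_pem_x509_certificate(pem_cert.encode())
    digest = hashlib.sha256(cert.public_bytes(Encoding.DER)).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()


def token_bound_to_forwarded_cert(token_claims: dict, forwarded_pem_cert: str) -> bool:
    """True if the access token's confirmation claim matches the forwarded certificate."""
    expected = token_claims.get("cnf", {}).get("x5t#S256")
    return expected is not None and expected == cert_thumbprint_s256(forwarded_pem_cert)

Whether a provider-run load balancer can be configured to inject the certificate 
at all, and whether the service can trust that channel, is exactly what I’m 
asking about.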

> I'm guessing that's because we're not on the same page regarding use cases, 
> so allow me to clearly state mine:

I think we are; we are just focusing on different ends of the TLS tunnel. My 
focus is on the service provider’s side, especially public cloud hosting, whereas 
you are focusing on client-side TLS-terminating proxies.

> 
> The use case I am concerned with is requests between services where 
> end-to-end TLS cannot be guaranteed. For example, an enterprise service 
> running on-premise, communicating with a service in the cloud, where the 
> enterprise's outbound traffic is routed through a TLS Inspection (TLSI) 
> appliance. The TLSI appliance sits in the middle of the communication, 
> terminating the TLS session established by the on-premise service and 
> establishing a separate TLS connection with the cloud service.
> 
> In this kind of environment, there is no end-to-end TLS connection between 
> on-premise service and cloud service, and it is very unlikely that the TLSI 
> appliance is configurable enough to support TLS-based sender-constraint 
> mechanisms without significantly compromising on the scope of "sender" (e.g., 
> "this service at this enterprise" becomes "this enterprise”).

I’m not familiar with this kind of proxy, but I’m happy to learn more and to 
discuss potential solutions.

Here are some questions:
- Have you seen this kind of proxy intercepting connections from on-premise 
service deployments to a service provider? I’m asking because I thought the main 
use case was intercepting the internet traffic of employees’ PCs. 
- Are you saying this kind of proxy does not support mutual TLS at all? At least 
in theory, the proxy could combine source and destination to select a cert/key 
pair to use for outbound TLS client authentication (sketched below). 
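
To illustrate that second question: conceptually, the selection logic on the 
appliance does not look complicated. A purely hypothetical Python sketch of the 
outbound leg; all names and paths are invented for illustration:

# Hypothetical: a TLSI appliance picks an mTLS client credential per
# (source service, destination host) for the re-established outbound connection.
from urllib.parse import urlparse

import requests  # pip install requests

# Mapping maintained by the appliance administrator (invented example data).
OUTBOUND_CLIENT_CREDS = {
    ("billing.corp.example", "api.cloud.example"): ("/etc/tlsi/billing.crt", "/etc/tlsi/billing.key"),
    ("hr.corp.example", "api.cloud.example"): ("/etc/tlsi/hr.crt", "/etc/tlsi/hr.key"),
}


def forward(source_host: str, method: str, url: str, **kwargs):
    """Re-establish the outbound TLS connection, authenticating with the
    credential selected for this (source, destination) pair."""
    creds = OUTBOUND_CLIENT_CREDS.get((source_host, urlparse(url).hostname))
    if creds is None:
        raise PermissionError(f"no outbound mTLS credential for {source_host} -> {url}")
    return requests.request(method, url, cert=creds, **kwargs)

That would preserve "this service at this enterprise" instead of collapsing to 
"this enterprise", but I realize it is exactly the kind of advanced, per-service 
configuration you describe in the next paragraph.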

> Even if it is possible, it is likely to require advanced configuration that 
> is non-trivial for administrators to deploy. It's no longer as simple as the 
> developer passing a self-signed certificate to the HTTP stack.

I agree. Cert binding is established in OAuth protocol messages, which would 
require the appliance to understand the protocol. On the other hand, I would 
expect this kind of proxy to understand a lot about the protocols running 
through it; otherwise it cannot fulfil its task of inspecting the traffic. 

best regards,
Torsten. 



> 
> – 
> Annabelle Richard Backman
> AWS Identity
> 
> 
> On 11/23/19, 9:50 AM, "Torsten Lodderstedt" <tors...@lodderstedt.net> wrote:
> 
> 
> 
>>>>>>>>> On 23. Nov 2019, at 00:34, Richard Backman, Annabelle 
>>>>>>>>> <richa...@amazon.com> wrote:
>>>>>>>>> how are cookies protected from leakage, replay, injection in a setup 
>>>>>>>>> like this?
>> They aren’t.
> 
> That's very interesting when compared to what we are discussing with respect 
> to API security. 
> 
> It effectively means anyone able to capture a session cookie, e.g. between 
> TLS termination point and application, by way of an HTML injection, or any 
> other suitable attack is able to impersonate a legitimate user by injecting 
> the cookie(s) in an arbitrary user agent. The impact of such an attack might 
> be even worse than abusing an access token given the (typically) broad scope 
> of a session.
> 
> TLS-based methods for sender constrained access tokens, in contrast, prevent 
> this type of replay, even if the requests are protected between client and 
> TLS terminating proxy, only. Ensuring the authenticity of the client 
> certificate when forwarded from TLS terminating proxy to service, e.g. 
> through another authenticated TLS connection, will even prevent injection 
> within the data center/cloud environment. 
> 
> I come to the conclusion that we already have the mechanism at hand to 
> implement APIs with a considerably higher security level than what is 
> accepted today for web applications. So what problem do we want to solve?
> 
>> But my primary concern here isn't web browser traffic, it's calls from 
>> services/apps running inside a corporate network to services outside a 
>> corporate network (e.g., service-to-service API calls that pass through a 
>> corporate TLS gateway).
> 
> Can you please describe the challenges arising in these settings? I assume 
> those proxies won’t support CONNECT style pass through otherwise we wouldn’t 
> talk about them.
> 
>>> That’s a totally valid point. But again, such a solution makes the life of 
>>> client developers harder.
>>> I personally think, we as a community need to understand the pros and cons 
>>> of both approaches. I also think we have not even come close to this point, 
>>> which, in my opinion, is the prerequisite for making informed decisions.
>> Agreed. It's clear that there are a number of parties coming at this from a 
>> number of different directions, and that's coloring our perceptions. That's 
>> why I think we need to nail down the scope of what we're trying to solve 
>> with DPoP before we can have a productive conversation how it should work.
> 
> We will do so.
> 
>> –
>> Annabelle Richard Backman
>> AWS Identity
>> On 11/22/19, 10:51 PM, "Torsten Lodderstedt" <tors...@lodderstedt.net> 
>> wrote:
>>>> On 22. Nov 2019, at 22:12, Richard Backman, Annabelle 
>>>> <richanna=40amazon....@dmarc.ietf.org> wrote:
>>> The service provider doesn't own the entire connection. They have no 
>>> control over corporate or government TLS gateways, or other terminators 
>>> that might exist on the client's side. In larger organizations, or when 
>>> cloud hosting is involved, the service team may not even own all the hops 
>>> on their side.
>> how are cookies protected from leakage, replay, injection in a setup like 
>> this?
>>> While presumably they have some trust in them, protection against leaked 
>>> bearer tokens is an attractive defense-in-depth measure.
>> That’s a totally valid point. But again, such a solution makes the life of 
>> client developers harder.
>> I personally think, we as a community need to understand the pros and cons 
>> of both approaches. I also think we have not even come close to this point, 
>> which, in my opinion, is the prerequisite for making informed decisions.
>>> –
>>> Annabelle Richard Backman
>>> AWS Identity
>>> On 11/22/19, 9:37 PM, "OAuth on behalf of Torsten Lodderstedt" 
>>> <oauth-boun...@ietf.org on behalf of 
>>> torsten=40lodderstedt....@dmarc.ietf.org> wrote:
>>>> On 22. Nov 2019, at 21:21, Richard Backman, Annabelle 
>>>> <richanna=40amazon....@dmarc.ietf.org> wrote:
>>>> The dichotomy of "TLS working" and "TLS failed" only applies to a single 
>>>> TLS connection. In non-end-to-end TLS environments, each TLS terminator 
>>>> between client and RS introduces additional token leakage/exfiltration 
>>>> risk, irrespective of the quality of the TLS connections themselves. Each 
>>>> terminator also introduces complexity for implementing mTLS, Token 
>>>> Binding, or any other TLS-based sender constraint solution, which means 
>>>> developers with non-end-to-end TLS use cases will be more likely to turn 
>>>> to DPoP.
>>> The point is we are talking about different developers here. The client 
>>> developer does not need to care about the connection between proxy and 
>>> service. She relies on the service provider to get it right. So the 
>>> developers (or DevOps or admins) of the service provider need to ensure end 
>>> to end security. And if the path is secured once, it will work for all 
>>> clients.
>>>> If DPoP is intended to address "cases where neither mTLS nor OAuth Token 
>>>> Binding are available" [1], then it should address this risk of token 
>>>> leakage between client and RS. If on the other hand DPoP is only intended 
>>>> to support the SPA use case and assumes the use of end-to-end TLS, then 
>>>> the document should be updated to reflect that.
>>> I agree.
>>>> [1]: https://tools.ietf.org/html/draft-fett-oauth-dpop-03#section-1
>>>> –
>>>> Annabelle Richard Backman
>>>> AWS Identity
>>>> On 11/22/19, 8:17 PM, "OAuth on behalf of Torsten Lodderstedt" 
>>>> <oauth-boun...@ietf.org on behalf of 
>>>> torsten=40lodderstedt....@dmarc.ietf.org> wrote:
>>>> Hi Neil,
>>>>> On 22. Nov 2019, at 18:08, Neil Madden <neil.mad...@forgerock.com> wrote:
>>>>> On 22 Nov 2019, at 07:53, Torsten Lodderstedt 
>>>>> <torsten=40lodderstedt....@dmarc.ietf.org> wrote:
>>>>>>> On 22. Nov 2019, at 15:24, Justin Richer <jric...@mit.edu> wrote:
>>>>>>> I’m going to +1 Dick and Annabelle’s question about the scope here. 
>>>>>>> That was the one major thing that struck me during the DPoP discussions 
>>>>>>> in Singapore yesterday: we don’t seem to agree on what DPoP is for. 
>>>>>>> Some (including the authors, it seems) see it as a quick point-solution 
>>>>>>> to a specific use case. Others see it as a general PoP mechanism.
>>>>>>> If it’s the former, then it should be explicitly tied to one specific 
>>>>>>> set of things. If it’s the latter, then it needs to be expanded.
>>>>>> as a co-author of the DPoP draft I state again what I said yesterday: 
>>>>>> DPoP is a mechanism for sender-constraining access tokens sent from SPAs 
>>>>>> only. The threat to be prevented is token replay.
>>>>> I think the phrase "token replay" is ambiguous. Traditionally it refers 
>>>>> to an attacker being able to capture a token (or whole requests) in use 
>>>>> and then replay it against the same RS. This is already protected against 
>>>>> by the use of normal TLS on the connection between the client and the RS. 
>>>>> I think instead you are referring to a malicious/compromised RS replaying 
>>>>> the token to a different RS - which has more of the flavour of a man in 
>>>>> the middle attack (of the phishing kind).
>>>> I would argue TLS basically prevents leakage and not replay. The threats 
>>>> we try to cope with can be found in the Security BCP. There are multiple 
>>>> ways access tokens can leak, including referrer headers, mix-up, open 
>>>> redirection, browser history, and all sorts of access token leakage at the 
>>>> resource server.
>>>> Please have a look at 
>>>> https://tools.ietf.org/html/draft-ietf-oauth-security-topics-13#section-4.
>>>> https://tools.ietf.org/html/draft-ietf-oauth-security-topics-13#section-4.8
>>>>  also has an extensive discussion of potential counter measures, including 
>>>> audience restricted access tokens and a conclusion to recommend sender 
>>>> constrained access tokens over other mechanisms.
>>>>> But if that's the case then there are much simpler defences than those 
>>>>> proposed in the current draft:
>>>>> 1. Get separate access tokens for each RS with correct audience and 
>>>>> scopes. The consensus appears to be that this is hard to do in some 
>>>>> cases, hence the draft.
>>>> How many deployments do you know that today are able to issue RS-specific 
>>>> access tokens?
>>>> BTW: how would you identify the RS?
>>>> I agree that would be an alternative and I’m a great fan of such tokens 
>>>> (and used them a lot at Deutsche Telekom) but in my perception this 
>>>> pattern still needs to be established in the market. Moreover, they 
>>>> basically protect from a rogue RS (if the URL is used as audience) 
>>>> replaying the token someplace else, but they do not protect from all other 
>>>> kinds of leakage/replay (e.g. log files).
>>>>> 2. Make the DPoP token be a simple JWT with an "iat" and the origin of 
>>>>> the RS. This stops the token being reused elsewhere but the client can 
>>>>> reuse it (replay it) for many requests.
>>>>> 3. Issue a macaroon-based access token and the client can add a correct 
>>>>> audience and scope restrictions at the point of use.
>>>> Why is this needed if the access token is already audience restricted? Or 
>>>> do you propose this as alternative?
>>>>> Protecting against the first kind of replay attacks only becomes an issue 
>>>>> if we assume the protections in TLS have failed. But if DPoP is only 
>>>>> intended for cases where mTLS can't be used, it shouldn't have to protect 
>>>>> against a stronger threat model in which we assume that TLS security has 
>>>>> been lost.
>>>> I agree.
>>>> best regards,
>>>> Torsten.
>>>>> -- Neil
