On Dec 3, 2014, at 9:03 AM, Thomas Broyer <t.bro...@gmail.com> wrote:

On Tue Dec 02 2014 at 19:53:27 Richer, Justin P. <jric...@mitre.org> wrote:
Thomas, thanks for the review. Responses inline.

On Dec 2, 2014, at 11:08 AM, Thomas Broyer <t.bro...@gmail.com> wrote:


   The methods of managing and
   validating these authentication credentials are out of scope of this
   specification, though it is RECOMMENDED that these credentials be
   distinct from those used at an authorization server's token endpoint.

and later in the Security Considerations section:


   The authorization server SHOULD issue credentials to any
   protected resources that need to access the introspection endpoint,
   SHOULD require protected resources to be specifically authorized to
   call the introspection endpoint, and SHOULD NOT allow a single piece
   of software acting as both a client and a protected resource to re-
   use the same credentials between the token endpoint and the
   introspection endpoint.

Could you expand on the RECOMMENDED and SHOULD NOT here?
What would be the problem with using the same credentials? What's the trade-off?

Different credentials for different purposes, and it lets you manage things 
separately at the server. In other words, you've got one class of thing that 
*gets* tokens, and one class of thing that *accepts* tokens. The dynamic 
resource registration draft doesn't presume client credentials at all, since a 
resource might not be (and in many cases is not) an OAuth client. This draft 
even uses tokens to authorize its calls to the introspection endpoint, which 
was suggested as MTI in another thread.

Additionally, and this may be getting unnecessarily colored by our own 
implementation and deployment of pre-WG drafts: we have it currently 
implemented such that both are clients (and Ping does something similar with 
their own method of accomplishing the same thing), and we want to start to keep 
these classes separate. We've found that developers get confused about whether 
they're a client or a resource or whatnot as it is. This recommendation helps 
keep the roles separate logically, though servers are of course free to throw 
everyone in the same bucket if they so choose.

That explains why you *could* use different credentials, not why you should do 
it.

Until June this year, we were issuing distinct credentials for the "client" and 
"resource server" parts of applications (what we used to call "service 
provider" vs. "data provider"), and people didn't understand what each "part" 
meant (even though most of them don't currently expose APIs 
themselves!)
We thus moved to a single set of credentials that's shared between all the 
clients (e.g. "front office" vs. "back office") and resource servers (TBH, 
having distinct credentials for the distinct clients could be challenging in 
some cases, so that "simplification" was also needed for other reasons).

OK, that's fair. We can back off the first RECOMMENDED easily enough but I 
would like to keep some kind of note in the security considerations section. I 
don't want to necessarily encourage people to conflate clients and protected 
resources. Can you help with wording this so that it's not too prescriptive?




   The response MAY be cached by the protected resource, and the
   authorization server SHOULD communicate appropriate cache controls
   using applicable HTTP headers.


Reading through https://tools.ietf.org/html/rfc7234 (and 
https://tools.ietf.org/html/rfc7231), it's not clear to me how cache headers 
would really help, given that requests to the introspection endpoint mostly 
use the POST method (GET is only "optionally" supported, and the Security 
Considerations section somewhat discourages it).
You'd want to check with the HTTPWG but maybe this text should define what the 
cache-key would be (it would at least include the token and resource_id if 
provided, maybe also the token_type_hint), and that the response SHOULD NOT 
have Cache-Control:public or even s-maxage (for the same reason that it should 
be protected by authentication).
I'd actually rather say that the RS may cache the response (we're talking about 
an "application-level cache" here, not an HTTP cache), and probably should do 
it for a small amount of time; and possibly (not sure how well that would fit 
here) hint that the AS could very well return an HTTP 429 (Too Many Requests) 
<https://tools.ietf.org/html/rfc6585> if it somehow detects that the RS doesn't 
use a (application-level) cache (e.g. asks many times for the same token in a 
very short time frame). This is the kind of thing I could very well add to my 
implementation later on if we ever see a very high number of requests on our 
introspection endpoint (because looking up a key-value store using the token as 
key is much faster than validating the token – our tokens are base64url-encoded 
JSON structures containing an ID and a salt, and we store the ID and a hash in 
our datastore; validating a token thus involves decoding base64url, parsing 
JSON and computing a hash, in addition to looking it up in the datastore and 
validating "iat" and "exp").
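To make the cost difference concrete, here is a rough sketch of that full validation path. The field names, the choice of SHA-256, and the in-memory dict standing in for the datastore are all illustrative assumptions, not details of the draft or of our actual implementation:

```python
import base64
import hashlib
import json
import time

# Illustrative stand-in for the key-value datastore described above:
# maps a token ID to the stored hash plus "iat"/"exp" values.
DATASTORE = {}

def validate_token(token: str) -> bool:
    """Full validation path: base64url-decode, parse the JSON structure,
    recompute the hash from the embedded ID and salt, look the ID up in
    the datastore, and check "iat"/"exp"."""
    try:
        padded = token + "=" * (-len(token) % 4)  # restore stripped padding
        body = json.loads(base64.urlsafe_b64decode(padded))
        token_id, salt = body["id"], body["salt"]
    except (ValueError, KeyError, TypeError):
        return False
    record = DATASTORE.get(token_id)
    if record is None:
        return False
    digest = hashlib.sha256((token_id + salt).encode()).hexdigest()
    if digest != record["hash"]:
        return False
    now = time.time()
    return record["iat"] <= now < record["exp"]
```

An application-level cache in front of this only needs the raw token string as its key, which is why the cache-hit path (a single key-value lookup) is so much cheaper than re-running the whole function.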

All that we're really trying to say here is that the protected resource is 
allowed to cache the response if it wants to, and that the AS could give some 
hints as to how to do it. I can pull out the HTTP-cache-mechanism language if 
it's just confusing the matter (which I suspect it is).

Yes please.

Will do, thanks.


In one deployment profile I've written of this, we say that the RS can cache 
the introspection result for up to half the token lifetime, given by the 'exp' 
claim (which we also require in the profile).
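As a sketch of that profile rule (reading "half the token lifetime" as half the time remaining until 'exp'; the profile may define the baseline differently):

```python
import time

def introspection_cache_ttl(response, now=None):
    """How long the RS may cache an introspection response under the
    half-lifetime rule: half the time remaining until the 'exp' claim,
    clamped so an already-expired token yields a TTL of zero."""
    now = time.time() if now is None else now
    return max((response["exp"] - now) / 2.0, 0.0)
```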

Caching the response on the RS is a trade-off between accuracy (and thus 
security, as you might not detect that a token has been revoked until long 
after the fact) and performance (HTTP round trips, putting too much pressure 
on the AS, etc.).
This trade-off has to be considered on a per-RS basis, depending on the kind 
(and sensitivity) of data being managed by the RS. For very sensitive data, 
you'd likely have a very short cache TTL (or no cache at all, apart from 
avoiding concurrent requests to the AS for the same token), whereas for 
"semi-public" data you'd likely trade security for performance (e.g. data is 
likely to be accessed in several (possibly many) requests, and/or you're using 
authentication mostly to trace accesses (and possibly charge for them), or to 
check "who" has access, but you don't really care "when" it has access).
This trade-off should probably be mentioned in the Security Considerations, but 
I don't think giving guidance about how long to cache an introspection response 
would be useful (could even be harmful as people could just follow the 
suggestion without really thinking about the implications).

Yes, that's exactly the trade-off. We can move the discussion to the security 
considerations, and just mention that the response MAY be cached up above. 
Would you be willing to contribute text for this item as well?

And, thanks for your thorough review. Feedback from real implementors is always 
best, and it helps ensure a spec doesn't devolve into hand-waving.

 -- Justin
_______________________________________________
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth
