Most developers will do one of two things:

1. Completely ignore the HTTP request and just look at the signed data.
2. Come up with some simple rules to normalize both the request URI and signed 
data and compare them to make sure they are the same.

Why is this bad?

- This proposal recreates SOAP, only using JSON instead of XML. It treats HTTP 
as a stupid transport where the HTTP fields are more of a liability than an 
asset. This design would work much better if you had a single endpoint API with 
a structured payload that is signed using a magic signature. All this HTTP 
request construction nonsense goes away and you don't need to repeat 
parameters, methods, etc. This is exactly why Roy Fielding, who knows a little 
bit about HTTP, rejected this approach of duplicating request data in the 
transmission.

- At the time the application gets to look at the signed data and the HTTP 
request header, it is already too late for the outer security layers to do 
their jobs. A firewall configured to block requests to certain URI paths or 
HTTP methods will fail to stop requests where the internal signed data differs 
from (and is potentially more harmful than) the external HTTP request header 
(see the wire-level illustration after this list). This moves the burden of 
enforcing such a security policy from the firewall to an application layer that 
is aware of the OAuth signature format.

- Any kind of URI normalization and comparison is hard, and will result in some 
failures. OAuth 1.0a has shown this difficulty, but recent experience has 
completely reversed the narrative this proposal is based on - that developers 
cannot figure it out. As I have said many times before, developers just need 
the right motivation to spend an extra hour figuring out why their signatures 
fail (yes, an hour!), and providers should stop treating their developers badly 
and start providing quality debugging information.

- I am pretty sure that the vast majority of platforms these days give the 
application full access to the raw request header as sent over the wire. Before 
we go and design another solution like the OAuth 1.0a signature process, which 
was based on limitations in platforms such as PHP4, we should do our homework 
and determine how hard it really is to access the raw request header (see the 
Servlet API sketch after this list). This proposal is an optimization for a 
problem that does not really exist.

- OAuth 2.0 now fully supports bearer tokens, which removes the requirement to 
use signatures. I think it is perfectly fine to raise the deployment bar for 
the more advanced cases where signatures are required. Let's be honest here - 
most of the applications currently using OAuth 1.0a will be happy with 2.0 
without signatures.

- By working closer to the HTTP layer, we not only make better use of HTTP, but 
also give HTTP libraries one more reason to add native OAuth signature support.
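
To make the firewall point concrete, here is a purely illustrative request (the 
header name and the token contents are made up, not taken from any proposal on 
the table):

    GET /public/ping HTTP/1.1          <-- all the firewall gets to inspect
    Host: api.example.com
    Authorization: Token eyJ...        <-- signed payload inside the token:
                                           { method: "DELETE", path: "/admin/users/42", ... }

If the application acts only on the signed payload, the effective request is a 
DELETE to /admin/users/42, and a firewall rule blocking DELETE or /admin/* 
never fires.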
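
And to illustrate the homework: in the Java Servlet API, for example, most of 
what a signature needs is already available without any re-parsing. A rough 
sketch, assuming request is a javax.servlet.http.HttpServletRequest (this is 
not a proposal for a particular field list):

    String method = request.getMethod();          // "GET", "POST", ...
    String path   = request.getRequestURI();      // path from the request line, not decoded by the container
    String query  = request.getQueryString();     // raw query string as sent, or null if absent
    String host   = request.getHeader("Host");    // Host header exactly as the client sent it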

This is not a complete list.

EHL


On 9/24/10 11:38 AM, "Dirk Balfanz" <balf...@google.com> wrote:



On Fri, Sep 24, 2010 at 10:08 AM, Richard L. Barnes <rbar...@bbn.com> wrote:
I think it's more robust to verify than to generate. In option 2, you have to 
"guess" what the signer actually signed. To be fair, you have pretty good 
signals, like the HTTP request line, the Host: header, etc., but in the end, 
you don't _know_ that the signer really saw the same thing when they generated 
the signature. I can't help feeling that that's a bit of a hack. In option 
1, you always know what the signer saw when they generated the signature, and 
it's up to you (the verifier) to decide whether that matches your idea of what 
your endpoint looks like.

Generating does not imply guessing: The signer can specify what he signed 
without providing the data directly.  Quoting from another thread:
"
1. Signer computes signature sig_val over data object:
  { user_agent: "Mozilla", method: "GET" }
2. Signer sends { signed_fields: ['user_agent', 'method'], sig: sig_val }
3. Recipient reconstructs data object using signed_fields
4. Recipient verifies sig_val == sign(reconstructed_object)
"

If the spec is written properly, the recipient should be able to look at the 
names of the fields ('user_agent', 'method') and use them to reconstruct the 
original object.
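
A rough sketch of what steps 3 and 4 could look like on the recipient side. The 
HMAC-SHA256 choice, the exact string serialization, and the method and variable 
names are all assumptions made here for illustration, not something the 
proposal specifies:

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;
    import javax.servlet.http.HttpServletRequest;

    // Rebuild the signed data object from the live request using only the
    // field names listed in signed_fields, then recompute and compare the MAC.
    // Only the two fields from the example above are mapped here.
    static boolean verify(HttpServletRequest request, String[] signedFields,
                          byte[] sharedSecret, byte[] receivedSigVal) throws Exception {
        StringBuilder reconstructed = new StringBuilder("{ ");
        for (int i = 0; i < signedFields.length; i++) {
            if (i > 0) reconstructed.append(", ");
            String name = signedFields[i];
            String value = name.equals("method")
                    ? request.getMethod()                // e.g. "GET"
                    : request.getHeader("User-Agent");   // e.g. "Mozilla"
            reconstructed.append(name).append(": \"").append(value).append("\"");
        }
        reconstructed.append(" }");

        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(sharedSecret, "HmacSHA256"));
        byte[] expected = mac.doFinal(
                reconstructed.toString().getBytes(StandardCharsets.UTF_8));
        return MessageDigest.isEqual(expected, receivedSigVal);
    }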

User-agent and method are well-defined both for senders and receivers of HTTP 
requests. What's less well-defined is the URL, which is what Eran is objecting 
to. So in practice, it looks more like this:

1. Signer generates URL using some library, e.g.:
    paramsMap = new Map();
    paramsMap.put('param1', 'value1');
    paramsMap.put('param2', 'value2');

    uri = new UriBuilder()
      .setScheme(Scheme.HTTP)
      .setHost('WWW.foo.com')
      .setPath('/somePath')
      .setQueryParams(paramsMap)
      .build().toString();
   // uri now looks something like
   // "http://WWW.foo.com/somePath?param1=value1&param2=value2"

2. They then use a different library to send the HTTP request
    request = new GetRequest();
    request.setHeader('signed-token', sign('GET', uri));
    request.execute(uri);

The problem is that we don't know what the execute method on GetRequest does 
with the URI. It probably will use a library (possibly different from the one 
used in step 1) to decompose the URI back into its parts, so it can figure out 
whether to use SSL, which host and port to connect to, etc. Is it going to 
normalize the hostname to lowercase in the process? Is it going to escape the 
query parameters? Is it going to add ":80" to the Host:-header because that's 
the port it's going to connect to? Is it going to put the query parameters into 
a different order? And so on - any of which would cause the recipient of the 
message to put back together a _different_ URI from the one the sender saw.
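
One small, concrete instance of the escaping ambiguity, using only standard 
Java classes (the same split exists in most languages and libraries):

    import java.net.URI;
    import java.net.URLEncoder;

    // The same value "a b" can legally reach the wire in two different forms.
    static void encodingAmbiguity() throws Exception {
        String formStyle = URLEncoder.encode("a b", "UTF-8");              // "a+b"
        String uriStyle  = new URI("http", "foo.com", "/somePath", "a b", null)
                               .getRawQuery();                             // "a%20b"
        // A signature computed over one form will not verify against the other
        // unless sender and receiver first agree on a normalization rule.
    }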

OAuth1 therefore defined a bunch of rules on how to "normalize" the URI to make 
sure that both the sender and the receiver saw the same URI even if the http 
library does something funny. Many people thought that those rules were too 
complicated. There is currently an argument over whether or not the complexity 
of the rules can be hidden in libraries, and I'm personally a bit on the fence 
on this. What I _do_ object to, more on a philosophical level, is that we can 
never know for sure what the http library is doing to the request, and that 
therefore we can never be sure whether the normalization rules we have come up 
with cover all the crazy libraries out there. There is a symmetric problem on 
the receiver side - where the servlet APIs may or may not have messed with the 
parameters before you get to reconstruct the URI.
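
For flavor, the sender-side half of those rules boils down to something like 
the sketch below (heavily simplified from RFC 5849, section 3.4; the 
query-parameter part, with its decode/re-encode/sort steps, is where most of 
the pain lives):

    import java.net.URI;

    // Simplified sketch of the "base string URI" construction: lowercase the
    // scheme and host, drop default ports, keep the path as-is, and handle
    // the query parameters separately.
    static String baseStringUri(URI uri) {
        String scheme = uri.getScheme().toLowerCase();
        String host = uri.getHost().toLowerCase();
        int port = uri.getPort();
        boolean defaultPort = port == -1
                || ("http".equals(scheme) && port == 80)
                || ("https".equals(scheme) && port == 443);
        String authority = defaultPort ? host : host + ":" + port;
        // The query parameters are decoded, re-encoded, sorted and concatenated
        // by a separate set of rules before entering the signature base string.
        return scheme + "://" + authority + uri.getRawPath();
    }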

The JSON token proposal does something simpler: you get to see the URI as the 
sender saw it (in this case with the uppercase WWW, without the :80, etc.), and 
you get to decide whether that matches your endpoint. So instead of wondering 
in what order the signer saw the query parameters when he signed them, and 
whether they were escaped or not, you simply check that all the query 
parameters that he signed (as evidenced in the JSON token) are indeed present 
in the HTTP request, and vice versa, etc. It's a comparable amount of work, but 
it seems cleaner, less hacky to me.
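
A rough sketch of what that receiver-side check could look like. Here signedUri 
and signedParams stand for the URI and query parameters exactly as they appear 
inside the (already signature-checked) JSON token; how they are parsed out of 
the token is out of scope, and the method name is made up:

    import java.net.URI;
    import java.util.Map;
    import javax.servlet.http.HttpServletRequest;

    // Take the signed URI at face value and decide whether it describes this
    // endpoint and this request - no reconstruction, no normalization rules.
    static boolean matchesThisRequest(HttpServletRequest request, URI signedUri,
                                      Map<String, String> signedParams,
                                      String myCanonicalHost) {
        boolean hostOk = signedUri.getHost().equalsIgnoreCase(myCanonicalHost);
        boolean pathOk = signedUri.getPath().equals(request.getRequestURI());

        // Every parameter the signer signed must be present with the same value,
        // and nothing extra may have been added.
        Map<String, String[]> actualParams = request.getParameterMap();
        boolean paramsOk = signedParams.keySet().equals(actualParams.keySet());
        for (Map.Entry<String, String> e : signedParams.entrySet()) {
            String[] actual = actualParams.get(e.getKey());
            paramsOk &= actual != null && actual.length == 1
                     && actual[0].equals(e.getValue());
        }
        return hostOk && pathOk && paramsOk;
    }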

Dirk.


The idea of allowing signed fields to change en route to the server strikes me 
as a little odd.  Sure, you could ignore the method, path, and host values in 
HTTP and just act on the enveloped data, but at that point, why not just do 
away with the overhead of HTTP and run the whole thing over TCP?

--Richard


_______________________________________________
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth
