On 28 May 2010 02:21, Brian Eaton <bea...@google.com> wrote:
> OAuth 1.0 was unusual in that it required that the server match a hash
> of the URL, rather than the real URL.  It's an extra layer of
> indirection and complexity.  It doesn't improve security.

To be more precise, OAuth 1.0 required that the server match a
normalised form of the URL. You're absolutely correct that it doesn't
improve security [over matching the URL], but it *is* more secure than
either not proving that the token bearer provided the URL in the first
place or having the client and server match potentially different
versions of the URL.
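
To make that concrete, here is a rough Python sketch (the function
name is mine, not the spec's) of the kind of base string URI
construction OAuth 1.0 describes: lowercase the scheme and host, drop
default ports, keep the path, and leave the query string for the
separate parameter normalisation step.

    from urllib.parse import urlsplit

    def base_string_uri(url):
        # Approximates OAuth 1.0's base string URI rules (RFC 5849,
        # section 3.4.1.2): lowercase scheme and host, drop default
        # ports, keep the path, set the query string aside for the
        # separate parameter normalisation. A sketch, not a complete
        # implementation.
        parts = urlsplit(url)
        scheme = parts.scheme.lower()
        host = (parts.hostname or '').lower()
        default_port = {'http': 80, 'https': 443}.get(scheme)
        if parts.port and parts.port != default_port:
            host = '%s:%d' % (host, parts.port)
        return '%s://%s%s' % (scheme, host, parts.path or '/')

    # Both ends should arrive at 'http://example.com/r%20v/X' here,
    # whatever raw string each of them happened to start from:
    # base_string_uri('HTTP://Example.com:80/r%20v/X?id=123')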

This is a problem of leaky abstractions: if HTTP were used in such a
way that the client unequivocally asserted "This: {x} is the
unabridged HTTP URL that I am requesting", and {x} were presented
untouched to the service handling the request, then we wouldn't have
to worry about normalisation.

As it stands, getting at the raw request URL is relatively difficult
in many environments that handle HTTP requests, and more difficult
still to obtain from HTTP client libraries, since the actual request
URI is often constructed in a private method at the last moment
before the request is sent.
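
On the server side, for instance, a WSGI application never receives
the request URL as a single string at all; it has to be reconstructed
from pieces, roughly following the PEP 333 recipe. The sketch below
is illustrative, not any particular library's code:

    def reconstructed_url(environ):
        # Rebuild the request URL from a WSGI environ. Note that
        # PATH_INFO has already been percent-decoded by the time the
        # application sees it, so the result is not guaranteed to match
        # the client's original bytes -- which is exactly why both ends
        # need an agreed normalisation.
        scheme = environ.get('wsgi.url_scheme', 'http')
        host = environ.get('HTTP_HOST') or environ.get('SERVER_NAME', '')
        url = '%s://%s%s%s' % (scheme, host,
                               environ.get('SCRIPT_NAME', ''),
                               environ.get('PATH_INFO', ''))
        if environ.get('QUERY_STRING'):
            url += '?' + environ['QUERY_STRING']
        return url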

Which is all to say that it is indeed complex, but much of that
complexity is a result of HTTP libraries trying to hide complexity
from users. I'd echo Roy's assertion that as library support improves,
approaches to URL normalisation will become hidden behind the same
layers of abstraction as constructing query strings and request URIs
are today.

b.
