Brian,

>> Consider a generic search spider tool that you point at
>> http://calendar.serviceprovider.com/calendar/get. It can do its job with no
>> knowledge about what "calendar.get" means -- but it still needs to know (as
>> it spiders along) when it is safe to expose the token.
> I'm a bit confused by this example.
>
> James, can you explain what you mean by "generic search spider tool"?

A tool that builds a search index. You point it at a URI; it fetches the
content; indexes it; follows any links in the content to more content;
indexes that; and continues.

The tool understands HTTP; it knows how to find links in common media types
(<a href=...>, <link ...>, etc); but it doesn't have much API-specific
knowledge (it doesn't know or care whether it is indexing a calendar, a
personal blog, a social graph, a doc repository, all of the above, etc).

If some of the content requires user consent to access (ie returns
WWW-Authenticate: Token user-uri="..."), the tool performs an OAuth flow and
continues.

The tool needs some rule so it doesn't try to index the whole Internet. For
example: index at most 500 pages; download no more than 10MB; finish in
5 min; only follow links to a depth of 3; stay within example.com. This rule
does not necessarily have anything to do with any security boundaries.

The crucial features of the tool are that it knows enough about HTTP and
data formats to follow redirects & links, but it doesn't have
service-specific knowledge to understand service-specific scopes (eg
"calendar.get") or the boundaries of specific APIs.

There are lots of tools in this category. It matches the architecture of the
web. Other examples of such tools might be:
* a backup tool -- point it at your atom feeds and it copies the content
  (and the linked stylesheets, scripts, images...)
* perhaps cURL -- do anything on the web
* a web browser

I hope this clears some confusion.

--
James Manger

_______________________________________________
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth
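[Purely as an illustration of the crawl-limit rules described above (index at
most N pages, follow links only to a limited depth, stay within one host),
here is a minimal sketch. The names `crawl`, `fetch`, and `LINK_RE` are
hypothetical; the fetcher is pluggable, and real HTTP handling -- including
performing an OAuth flow on a 401 with WWW-Authenticate -- is elided.]

```python
# Minimal sketch of a generic spider with crawl-limit rules.
# The limits are generic (pages, depth, host) and have nothing to do
# with service-specific scopes such as "calendar.get".
from urllib.parse import urljoin, urlparse
import re

LINK_RE = re.compile(r'href="([^"]+)"')  # crude link finder for HTML

def crawl(start_uri, fetch, max_pages=500, max_depth=3):
    """Index pages reachable from start_uri, staying within its host.

    fetch(uri) returns the page content, or None if it is unavailable.
    A real fetcher would speak HTTP and run an OAuth flow on a 401
    response carrying WWW-Authenticate: Token user-uri="...".
    """
    host = urlparse(start_uri).netloc
    index = {}                       # uri -> content
    queue = [(start_uri, 0)]         # (uri, depth) pairs, breadth-first
    while queue and len(index) < max_pages:
        uri, depth = queue.pop(0)
        if uri in index or urlparse(uri).netloc != host:
            continue                 # already seen, or outside the boundary
        content = fetch(uri)
        if content is None:
            continue
        index[uri] = content
        if depth < max_depth:
            for link in LINK_RE.findall(content):
                queue.append((urljoin(uri, link), depth + 1))
    return index
```

The fetcher can be any callable, so the boundary rules are easy to exercise
with an in-memory dict of pages (e.g. `crawl("http://example.com/",
pages.get)`): links to another host are skipped without ever being fetched.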