On Fri, Sep 12, 2014 at 6:07 PM, Trevor Saunders
<trev.saund...@gmail.com> wrote:
>  Do we really want all servers to have to authenticate themselves?

On the level of DV, yes, I think. (I.e. the user has a good reason to
believe that the [top-level] page actually comes from the host named
in the location bar.)

>  In
>  most cases they probably should, but I suspect there are cases where
>  you want to run a server, but have plausible deniability.  I haven't
>  gone looking for legal precedent, but it seems to me cryptographically
>  signing material makes it much harder to reasonably believe a denial.

It seems to me this concern would carry more weight if you had
actually found precedent of someone successfully repudiating what
they allegedly served on the grounds of the absence of authenticated
https.

(In general, the way things work is that the absence of cryptographic
evidence doesn't create enough doubt. Whenever there is a scandal over
a famous person's SMSs, those SMSs haven't been cryptographically
signed...)

>> Is it really the right call for the Web to let people get the
>> performance characteristics without making them do the right thing
>> with authenticity (and, therefore, integrity and confidentiality)?
>>
>> On the face of things, it seems to me we should be supporting HTTP/2
>> only with https URLs even if one buys Theodore T'so's reasoning about
>> anonymous ephemeral Diffie–Hellman.
>>
>> The combination of
>> https://twitter.com/sleevi_/status/509954820300472320 and
>> http://arstechnica.com/tech-policy/2014/09/why-comcasts-javascript-ad-injections-threaten-security-net-neutrality/
>> is pretty alarming.
>
> I agree that's bad, but I tend to believe anonymous ephemeral
> Diffie–Hellman is good enough to deal with the Comcasts of the world,

I agree that anonymous ephemeral Diffie–Hellman as the baseline would
probably reduce ISP MITMing by making it more costly. My point is that
with https://tools.ietf.org/html/draft-ietf-httpbis-http2-encryption-00
, the baseline isn't anonymous ephemeral Diffie–Hellman but
unencrypted HTTP 1.1. If a major American ISP has the capacity to
inject JS into HTTP 1.1 for all users, it definitely has the capacity
to strip a header from HTTP 1.1 (so that the upgrade to HTTP/2 never
takes place) *and* inject JS for all users. It would
have a performance impact on those connections (the delta between HTTP
1.1 and HTTP/2), but it seems that you get to remain a major American
ISP even if you are widely perceived as providing slow connections...
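To recap why anonymous ephemeral Diffie–Hellman raises the cost of
passive interception without stopping an active attacker, here's a toy
sketch. (The group parameters are deliberately tiny and insecure so
the example stays readable; real deployments would use e.g. the RFC
3526 groups.)

```python
import secrets

p = 0xFFFFFFFB  # toy 32-bit prime; NOT secure, illustration only
g = 5

# Each side picks a fresh (ephemeral) secret per connection and sends
# only g^secret mod p. No certificates are involved, which is what
# makes the exchange "anonymous": a passive eavesdropper can't recover
# the shared key, but an active MITM can simply run the exchange with
# each side separately.
a = secrets.randbelow(p - 2) + 1  # client's ephemeral secret
b = secrets.randbelow(p - 2) + 1  # server's ephemeral secret

A = pow(g, a, p)  # client -> server
B = pow(g, b, p)  # server -> client

shared_client = pow(B, a, p)
shared_server = pow(A, b, p)
assert shared_client == shared_server
```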

(Note that ad injection can happen at the edge, where the logic of
having to operate at Internet-exchange traffic volumes doesn't apply.
Making a copy of all traffic at the edge is harder, since the copy has
to be moved somewhere from the edge. However, if the edge makes sure
the connections never upgrade in order to keep doing HTTP 1.1 ad
injection, then the connection remains unupgraded at all hops,
including the hops that are suitable for moving a copy elsewhere.)
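To make the stripping attack concrete, here's a minimal sketch of what
an on-path box would do to server response headers. (Which header
names the -encryption upgrade actually relies on is an assumption on
my part; Alt-Svc and the HTTP/1.1 Upgrade mechanism are used here for
illustration.)

```python
def strip_upgrade_advertisement(headers: dict) -> dict:
    """Drop any header that could advertise an opportunistic upgrade
    to encrypted HTTP/2, keeping the connection on cleartext HTTP 1.1
    at every subsequent hop. Header names are illustrative."""
    blocked = {"alt-svc", "upgrade", "http2-settings"}
    return {k: v for k, v in headers.items() if k.lower() not in blocked}

response = {
    "Content-Type": "text/html",
    "Alt-Svc": 'h2=":443"',
}
cleaned = strip_upgrade_advertisement(response)
```

The point is that this is strictly less work than the JS injection the
ISP is already doing: deleting a header is cheaper than rewriting a
body.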

On Fri, Sep 12, 2014 at 7:06 PM, Martin Thomson <m...@mozilla.com> wrote:
> The view that encryption is expensive is a prevailing meme, and it’s 
> certainly true that some sites have reasons not to want the cost of TLS, but 
> the costs are tiny, and getting smaller 
> (https://www.imperialviolet.org/2011/02/06/stillinexpensive.html).  I will 
> concede that certain outliers will exist where this marginal cost remains 
> significant (Netflix, for example), but I don’t think that’s generally 
> applicable.  As the above post shows, it’s not that costly (even less on 
> modern hardware).  And HTTP/2 and TLS 1.3 will remove a lot of the 
> performance concerns.

Yeah, I think the best feature of
https://tools.ietf.org/html/draft-ietf-httpbis-http2-encryption-00 is
that anyone who deploys it loses the argument that they can't deploy
https due to TLS being too slow (since they already deployed TLS--just
not with publicly trusted certs).

> The current consensus view in the IETF (at least) is that all or nothing 
> approach has not done enough to materially improve security.

It's worth noting that the historical data comes from a situation
where you have two alternatives: on the one hand unencrypted and
unauthenticated, and on the other hand encrypted and authenticated,
where the latter is always slower (maybe not slower enough to truly
technically matter, but slower nonetheless, so that anyone who ignores
the magnitude of the difference can always make a knee-jerk decision
not to use the "slower" thing).

What the Chrome folks suggest for HTTP/2 would give rise to a
situation where your alternatives are still, on the one hand,
unencrypted and unauthenticated and, on the other hand, encrypted and
authenticated, *but* the latter is *faster*. So the performance
argument is reversed compared to the historical data. What if the IETF
consensus is based on an attribution error, and the historical data is
actually attributable to the speed difference (not to its magnitude
but to the perception that there's a difference) rather than to the
hardship of proper certs?

You mess up that reversal of the speed argument if you let
unauthenticated be as fast as authenticated.

> One reason that you missed for the -encryption draft is the problem with 
> content migration.  A great many sites have a lot of content with http:// 
> origins that can’t easily be rewritten.

Do you mean third-party embedded content that would turn into mixed
content? First-party legacy content is easy to "fix" with HSTS.
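For illustration, the client-side effect of HSTS can be sketched like
this: once the host has sent Strict-Transport-Security, the browser
rewrites http:// URLs for that host to https:// before any request
leaves the machine, so legacy first-party http:// links keep working
without edits to the content. (The hsts_hosts set is a stand-in for
the browser's HSTS store; explicit-port remapping is omitted.)

```python
from urllib.parse import urlsplit, urlunsplit

# Hosts that previously sent a Strict-Transport-Security header
# (hypothetical stand-in for the browser's persistent HSTS store).
hsts_hosts = {"example.com"}

def apply_hsts(url: str) -> str:
    """Rewrite http:// to https:// for known-HSTS hosts, modeling
    the browser-side upgrade. Explicit ports are not handled."""
    parts = urlsplit(url)
    if parts.scheme == "http" and parts.hostname in hsts_hosts:
        return urlunsplit(("https",) + tuple(parts)[1:])
    return url
```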

> And the restrictions on the Referer header field also mean that some 
> resources can’t be served over HTTPS (their URL shortener is apparently the 
> last hold-out for http:// at Twitter).  There are options in -encryption for 
> authentication that can be resistant to some active attacks.

In addition to what Anne says, it might be worthwhile to stop sending
Referer in the cross-origin http case. The sites that break if you do
so are few enough that it might just be feasible.
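As a sketch of the rule I'm suggesting (the helper name and the exact
origin comparison are my own illustrative assumptions, not a spec):

```python
from urllib.parse import urlsplit

def send_referer(referring_url: str, target_url: str) -> bool:
    """Return whether a Referer header should be sent: omit it on
    cross-origin requests to http:// targets, keep it otherwise.
    Illustrative model only."""
    ref, tgt = urlsplit(referring_url), urlsplit(target_url)
    same_origin = (ref.scheme, ref.hostname, ref.port) == \
                  (tgt.scheme, tgt.hostname, tgt.port)
    if tgt.scheme == "http" and not same_origin:
        return False
    return True
```

Under this rule, an http:// URL shortener no longer receives Referer
from other origins at all, so there's nothing left to lose by moving
it to https.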

-- 
Henri Sivonen
hsivo...@hsivonen.fi
https://hsivonen.fi/
_______________________________________________
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform
