Intent to ship: RC4 disabled by default in Firefox 44

2015-09-01 Thread Richard Barnes
For a while now, we have been progressively disabling the known-insecure
RC4 cipher [0].  The security team has been discussing with the other
browser vendors when to turn off RC4 entirely, and there seems to be
agreement to take that action in late January / early February 2016,
following the release schedules of the various browsers.  For Firefox, that
means version 44, currently scheduled for release on Jan 26.

More details below.


# Current status

Since Firefox 37, RC4 has been partly disabled.  It still works
in Beta and Release, but in Nightly and Aurora, it is allowed only for a
static whitelist of hosts [1][2].  Note that the whitelist is not
systematic; it was mainly built from compatibility bugs.

RC4 support is controlled by three preferences:

* security.tls.unrestricted_rc4_fallback - Allows use of RC4 with no
restrictions
* security.tls.insecure_fallback_hosts.use_static_list - Allow RC4 for
hosts on the static whitelist
* security.tls.insecure_fallback_hosts - A list of hosts for which RC4 is
allowed (empty by default)
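
For concreteness, here is a minimal user.js sketch of what a manual
override might look like once the Firefox 44 defaults land: RC4 stays off
globally, but fallback is allowed for one legacy host.  The pref names are
the ones listed above; the host value is purely an illustrative
placeholder.

```
// Hedged sketch of a user-level override after the Firefox 44 defaults flip:
// keep general RC4 fallback and the static whitelist disabled, and allow
// RC4 fallback only for a single legacy host (placeholder value).
user_pref("security.tls.unrestricted_rc4_fallback", false);
user_pref("security.tls.insecure_fallback_hosts.use_static_list", false);
user_pref("security.tls.insecure_fallback_hosts", "legacy.example.com");
```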


# Proposal

The proposed plan is to gradually reduce RC4 support by making the default
values of these preferences more restrictive:

* 42/ASAP: Disable whitelist in Nightly/Aurora; no change in Beta/Release
* 43: Disable unrestricted fallback in Beta/Release (thus allowing RC4 only
for whitelisted hosts)
* 44: Disable all RC4 prefs by default, in all releases

That is, as of Firefox 44, RC4 will be entirely disabled unless a user
explicitly enables it through one of the prefs.


# Compatibility impact

Disabling RC4 will mean that Firefox will no longer connect to servers that
require RC4.  The data we have indicate that while there are still a small
number of such servers, Firefox users encounter them at very low rates.

Telemetry indicates that in the Beta and Release populations, which have no
restrictions on RC4 usage, RC4 is used in around 0.08% of connections on
Release and around 0.05% on Beta [3][4].  For Nightly and Aurora, which are
restricted to the whitelist, the figure is more like 0.025% [5].  These
numbers are small enough that the histogram viewer on telemetry.mozilla.org
won't show them (that's why the below references are to my own telemetry
timeline tool, rather than telemetry.mozilla.org).

That said, there is a small but measurable population of servers out there
that require RC4.  Scans by the Mozilla QA team find that with current
Aurora (whitelist enabled), around 0.41% of their test set requires RC4
(820 sites out of 211k).  Disabling the whitelist breaks only a further 26
sites, for a total of around 0.4% of sites.  I have heard rumors of a
higher prevalence of RC4 among enterprise sites, but have no data to
support this.

Users can still enable RC4 in any case by changing the above prefs, either
by turning on RC4 in general or by adding specific hosts to the
"insecure_fallback_hosts" whitelist.  The security and UX teams are
discussing possibilities for UI that would automate whitelisting of sites
for users.

[0] https://tools.ietf.org/html/rfc7465
[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1128227
[2]
https://dxr.mozilla.org/mozilla-central/source/security/manager/ssl/IntolerantFallbackList.inc
[3]
https://ipv.sx/telemetry/general-v2.html?channels=release&measure=SSL_SYMMETRIC_CIPHER_FULL&target=1
[4]
https://ipv.sx/telemetry/general-v2.html?channels=beta&measure=SSL_SYMMETRIC_CIPHER_FULL&target=1
[5]
https://ipv.sx/telemetry/general-v2.html?channels=nightly%20aurora&measure=SSL_SYMMETRIC_CIPHER_FULL&target=1


Re: Intent to ship: RC4 disabled by default in Firefox 44

2015-09-01 Thread Richard Barnes
Speaking of other browsers, the corresponding Chromium thread is here:

https://groups.google.com/a/chromium.org/forum/#!msg/security-dev/kVfCywocUO8/vgi_rQuhKgAJ



Re: Intent to ship: RC4 disabled by default in Firefox 44

2015-09-01 Thread Richard Barnes
And from Microsoft:

http://blogs.windows.com/msedgedev/2015/09/01/ending-support-for-the-rc4-cipher-in-microsoft-edge-and-internet-explorer-11/



Re: Intent to ship: RC4 disabled by default in Firefox 44

2015-09-11 Thread Richard Barnes
Hearing no objections, let's consider this the plan of record.

Thanks,
--Richard



Re: Intent to ship: Directory picking and directory drag-and-drop

2015-09-21 Thread Richard Barnes
On Mon, Sep 21, 2015 at 6:58 PM, Jonathan Watt  wrote:

> On 21/09/2015 19:57, Eric Rescorla wrote:
>
>> On Mon, Sep 21, 2015 at 11:23 AM, Jonas Sicking  wrote:
>>
>> Note that this, similarly to clipboard integration, is already exposed
>>> to the web through flash. So the main goal of this feature is to
>>> enable developers to migrate off of flash and instead use Gecko.
>>>
>>>
>> I'm not sure that this is the right standard. The reason that we are
>> removing
>> Flash is that people are sad about some things in Flash. So I think we
>> need
>> to replicate enough of Flash to get people to stop using it, but that
>> doesn't
>> mean we need to have it be bug-for-bug compatible with every feature Flash
>> has, including features we think are bad.
>>
>
> I don't think directory picking is bad - there are many sites with
> legitimate uses. I think it's right that we need to think about the
> security implications though, and members of the security team have been
> looped in to consider these issues.


Who have you been talking to on the security team?  I haven't heard any
discussion of this in our security engineering meetings.  And I share EKR's
concerns here.

Thanks,
--Richard





Intent to implement and ship: FIDO U2F API

2015-12-01 Thread Richard Barnes
The FIDO Alliance has been developing standards for hardware-based
authentication of users by websites [1].  Their work is getting significant
traction, so the Mozilla Foundation has decided to join the FIDO Alliance.
Work has begun in the W3C to create open standards using FIDO as a starting
point. We are proposing to implement the FIDO U2F API in Firefox in its
current form and then track the evolving W3C standard.

Background: The FIDO Alliance has been developing a standard for
hardware-based user authentication known as “Universal Two-Factor” or U2F
[2].  This standard allows a website to verify that a user is in possession
of a specific device by having the device sign a challenge with a private
key that is held on the hardware device.  The browser’s role is mainly (1)
to route messages between the website and the token, and (2) to add the
origin of the website to the message signed by the token (so that the
signature is bound to the site).

Several major websites now support U2F for authentication, including Google
[3], Dropbox [4], and GitHub [5].  Axel Nennker has filed a Bugzilla bug
for U2F support in Gecko [6].  The W3C has begun the process of forming a
“WebAuthentication” working group that will work on a standard for enhanced
authentication using FIDO as a starting point [7].

Proposed: To implement the high-level U2F API described in the FIDO JS API
specification, with support for the USB HID token interface.
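
As a rough sketch of the shape of that high-level API: argument order and
field names vary between revisions of the FIDO JS spec, and the appId,
challenge, and keyHandle values below are illustrative placeholders, so
treat this as indicative only.

```
var appId = "https://example.com";
u2f.sign(appId,
         "challenge-from-server",                  // server-supplied challenge
         [{version: "U2F_V2", keyHandle: "..."}],  // previously registered keys
         function (response) {
           if (response.errorCode) {               // numeric error codes
             console.log("U2F error " + response.errorCode);
             return;
           }
           // send response.keyHandle, response.signatureData, and
           // response.clientData back to the server for verification
         });
```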

Please send comments on this proposal to the list no later than Monday,
December 14, 2015.

-

Personally, I have some reservations about implementing this, but I still
think it’s worth doing, given the clear need for something to augment
passwords.

It’s unfortunate that the initial FIDO standards were developed in a closed
group, but there is good momentum building toward making FIDO more open.  I
have some specific concerns about the U2F API itself, but they’re
relatively minor.  For example, the whole system is highly vertically
integrated, so if we want to change any part of it (e.g., to use a curve
other than P-256 for signatures), we’ll need to build a whole new API.  But
these are issues that can be addressed in the W3C process.

We will continue to work on making standards for secure authentication more
open.  In the meantime, U2F is what’s here now, and there’s demonstrated
developer interest, so it makes sense for us to work on implementing it.

Thanks,
--Richard

[1] https://fidoalliance.org/
[2] https://fidoalliance.org/specifications/download/
[3] https://support.google.com/accounts/answer/6103523?hl=en
[4] https://blogs.dropbox.com/dropbox/2015/08/u2f-security-keys/
[5]
https://github.com/blog/2071-github-supports-universal-2nd-factor-authentication
[6] https://bugzilla.mozilla.org/show_bug.cgi?id=1065729
[7] http://w3c.github.io/websec/web-authentication-charter


Re: Intent to implement and ship: FIDO U2F API

2015-12-01 Thread Richard Barnes
It's my understanding that U2F qua U2F is considered pretty much baked by
the developer community, and there's already code written to it.  But these
concerns will be great for the W3C group and the successor API.  I've got a
similar list started related to crypto and future-proofing.


On Tue, Dec 1, 2015 at 8:29 PM, Jonas Sicking  wrote:

> Any chance that the API can be made a little more JS friendly? First
> thing that stands out is the use of success/error callbacks rather
> than the use of Promises.
>
> Also the use of numeric codes, rather than string values, is a pattern
> that the web has generally moved away from.
>
> / Jonas


Re: Intent to implement and ship: FIDO U2F API

2015-12-02 Thread Richard Barnes
On Wed, Dec 2, 2015 at 12:25 AM,  wrote:

> On Tuesday, December 1, 2015 at 6:04:30 PM UTC-8, Jonas Sicking wrote:
> > Oh well. Bummer.
> >
> > / Jonas
>
> If it cheers you up any, the 2.0 API that replaces the U2F API uses
> promises - http://www.w3.org/Submission/2015/SUBM-fido-web-api-20151120/
>
> Richard, it would help if you could clarify - are you proposing that
> Firefox implement the 'old and deprecated' U2F API [1], or the 'fresh and
> new and hoping to be standards track' W3C member submission API [2].
>
> I originally wanted to reply with 'good news' that Chrome only shipped
> this for google.com, and only for HTTPS, and that we were committed to
> the W3C member submission as the path forward, but as I was working to back
> up a citation to this, I found out that we submarine-launched the API in
> Chrome 40 [3], for all HTTP and HTTPS origins, without an Intent to
> Implement / Intent to Ship.
>
> So, speaking solely on my behalf and not that of my employer, sorry that
> Chrome put Firefox in this position of "old and busted" and "new hotness",
> with "damned either way" as the result. I'm trying to find out more about
> this, as well as Chrome and Chromium's future commitments regarding this
> API.
>
> That said, knowing full well that the FIDO Alliance intends the W3C member
> submission to be the path forward, could you provide greater clarity:
> 1) What it is you intend to implement?
>

My initial intent was to propose implementing [1], then implementing [2]
when it's ready.  After all, there's a lot in common, and as you say, the
W3C version will be much nicer.



> 2) If you intend to implement [1], whether or not you'll unship that
> if/as/when [2] progresses?
>

I think we would treat this just like we treat other early-stage things
that get shipped, gradually turning it off when the real thing shows up.

I don't remember what the current conventional wisdom about prefixing is,
but I would be open to shipping with a prefix if people thought that would
ease pain in the eventual transition.

--Richard



> [1]
> https://fidoalliance.org/specs/fido-u2f-v1.0-nfc-bt-amendment-20150514/fido-u2f-javascript-api.html
> [2] http://www.w3.org/Submission/2015/SUBM-fido-web-api-20151120/
> [3]
> https://chromium.googlesource.com/chromium/src/+/d60fcd7caafa7046da693fe2c3206ab5cf20%5E%21/#F9


Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Richard Barnes
Hey Daniel,

Thanks for the heads-up.  This is a useful thing to keep in mind as we work
through the SHA-1 deprecation.

To be honest, this seems like a net positive to me, since it gives users a
clear incentive to uninstall this sort of software.

--Richard

On Mon, Jan 4, 2016 at 3:19 AM, Daniel Holbert  wrote:

> Heads-up, from a user-complaint/ support / "keep an eye out for this"
> perspective:
>  * Starting January 1st 2016 (a few days ago), Firefox rejects
> recently-issued SSL certs that use the (obsolete) SHA1 hash algorithm.[1]
>
>  * For users who unknowingly have a local SSL proxy on their machine
> from spyware/adware/antivirus (stuff like superfish), this may cause
> *all* HTTPS pages to fail in Firefox, if their spyware uses SHA1 in its
> autogenerated certificates.  (Every cert that gets sent to Firefox will
> use SHA1 and will have an issued date of "just now", which is after
> January 1 2016; hence, the cert is untrusted, even if the spyware put
> its root in our root store.)
>
>  * I'm not sure what action we should (or can) take about this, but for
> now we should be on the lookout for this, and perhaps consider writing a
> support article about it if we haven't already. (Not sure there's much
> help we can offer, since removing spyware correctly/completely can be
> tricky and varies on a case by case basis.)
>
> (Context: I received a family-friend Firefox-support phone call today from
> someone who had this exact problem.  Every HTTPS site was broken for her in
> Firefox, since January 1st.  IE worked as expected (that is, it happily
> accepts the spyware's SHA1 certs, for now at least).  I wasn't able to
> remotely figure out what the piece of spyware was or how to remove it --
> but the rejected certs reported their issuer as being "Digital Marketing
> Research App" (instead of e.g. Digicert or Verisign).  Googling didn't
> turn up anything useful, unfortunately; so I suspect this is "niche"
> spyware, or perhaps the name is dynamically generated.)
>
> Anyway -- I have a feeling this will be somewhat-widespread problem,
> among users who have spyware (and perhaps crufty "secure browsing"
> antivirus tools) installed.
>
> ~Daniel
>
> [1]
>
> https://blog.mozilla.org/security/2014/09/23/phasing-out-certificates-with-sha-1-based-signature-algorithms/


Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Richard Barnes
On Mon, Jan 4, 2016 at 12:31 PM, Bobby Holley  wrote:

> On Mon, Jan 4, 2016 at 9:11 AM, Richard Barnes wrote:
>
>> Hey Daniel,
>>
>> Thanks for the heads-up.  This is a useful thing to keep in mind as we
>> work
>> through the SHA-1 deprecation.
>>
>> To be honest, this seems like a net positive to me, since it gives users a
>> clear incentive to uninstall this sort of software.
>>
>
> By "this sort of software" do you mean "Firefox"? Because that's what 95%
> of our users experiencing this are going to do absent anything clever on
> our end.
>
> We clearly need to determine the scale of the problem to determine how
> much time it's worth investing into this. But I think we should assume that
> an affected user is a lost user in this case.
>

I was being a bit glib because I think in a lot of cases, it won't be just
Firefox that's affected -- all of the user's HTTPS will quit working,
across all browsers.

I agree that it would be good to get more data here.  I think Adam is on
the right track.

--Richard




Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Richard Barnes
First a bit of good news: The overall trend line for SHA-1 errors is not
spiking (yet).  Bin 6 of SSL_CERT_VERIFICATION_ERRORS corresponds to
ERROR_CERT_SIGNATURE_ALGORITHM_DISABLED, which is what you get when you
reject a bad SHA-1 cert.

https://ipv.sx/telemetry/general-v2.html?channels=beta%20release&measure=SSL_CERT_VERIFICATION_ERRORS&target=6

Now for the bad news: Telemetry is actually useless for the specific case
we're talking about here.  Telemetry is submitted over HTTPS (about:config
/ toolkit.telemetry.server), so measurements from affected clients will
never reach the server.

So we can't get any measurements unless we revert the SHA-1 intolerance.
Given this, I'm sort of inclined to do that, collect some data, then maybe
re-enable it in 45 or 46.  What do others think?

--Richard




Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Richard Barnes
In any case, the pin check doesn't matter.  The certificate verification
will have failed well before the pin checks are done.

On Mon, Jan 4, 2016 at 4:14 PM, David Keeler  wrote:

> > { "aus5.mozilla.org", true, true, true, 7, &kPinset_mozilla },
>
> Just for clarification and future reference, the second "true" means this
> entry is in test mode, so it's not actually enforced by default.
>
> On Mon, Jan 4, 2016 at 1:08 PM, Dave Townsend wrote:
>
> > aus5 (the server the app updater checks) is still pinned:
> >
> >
> https://dxr.mozilla.org/mozilla-central/source/security/manager/ssl/StaticHPKPins.h#739
> >
> > On Mon, Jan 4, 2016 at 12:54 PM, Robert Strong wrote:
> > > On Mon, Jan 4, 2016 at 12:46 PM, Jesper Kristensen <
> > > moznewsgro...@something.to.remove.jesperkristensen.dk> wrote:
> > >
> > >> On 04-01-2016 at 19:45, Daniel Holbert wrote:
> > >>
> > >>> On 01/04/2016 10:33 AM, Josh Matthews wrote:
> > >>>
> >  Wouldn't the SSL cert failures also prevent submitting the telemetry
> >  payload to Mozilla's servers?
> > 
> > >>>
> > >>> Hmm... actually, I'll bet the cert errors will prevent Firefox
> updates,
> > >>> for that matter! (I'm assuming the update-check is performed over
> > HTTPS.)
> > >>>
> > >>
> > >> If I remember correctly, update checks are pinned to a specific CA, so
> > >> updates for users with software that MITM AUS would already be broken?
> > >
> > > That was removed awhile ago in favor of using mar signing as an exploit
> > > mitigation.


Re: Proposed W3C Charter: Web Authentication Working Group

2016-01-08 Thread Richard Barnes
You might note that the charter already says "Dependencies exist on the
Credential Management API
<https://w3c.github.io/webappsec-credential-management/>..."  :)

The credentials API defines a framework that allows for multiple types of
credential.  So I think the concept is that this WG is likely to define a
new credential type, as is done in the FIDO input:
http://www.w3.org/Submission/2015/SUBM-fido-web-api-20151120/
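
As a hedged sketch of that framework shape, using the credential type
already in the Credential Management draft (a WebAuthn-defined type would
presumably plug into the same surface):

```
// Ask the browser for a stored credential, then dispatch on its type.
// A future WebAuthn credential type would be requested through the same
// navigator.credentials.get()/store() entry points.
navigator.credentials.get({password: true}).then(function (cred) {
  if (cred && cred.type === "password") {
    // use cred to sign the user in
  }
});
```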

On Thu, Jan 7, 2016 at 3:52 PM, Jonas Sicking  wrote:

> What is the relationship between this WG and the spec draft at
>
> http://www.w3.org/TR/credential-management-1/
>
> Seems like there's potential for integration between the two?
>
> / Jonas
>
> On Thu, Jan 7, 2016 at 10:14 AM, Richard Barnes wrote:
> > Obviously, given the earlier FIDO thread here, I think this is good work
> > to support.
> >
> > I think the charter is in pretty good shape.  The only comment I have is
> > that it talks about "attestations" without defining what is being
> > attested.
> > This could be addressed with the following change:
> >
> > OLD:  "Attestation and signature formats defined for interoperability."
> > NEW: "Formats for signed data and verifiable attestations of a signer's
> > properties."
> >
> > On Wed, Jan 6, 2016 at 9:09 PM, L. David Baron wrote:
> >
> >> The W3C is proposing a charter for:
> >>
> >>   Web Authentication Working Group
> >>   http://www.w3.org/2015/12/web-authentication-charter.html
> >>
> >> https://lists.w3.org/Archives/Public/public-new-work/2015Dec/0010.html
> >>
> >> Mozilla has the opportunity to send comments or objections through
> >> January 25.
> >>
> >> Please reply to this thread if you think there's something we should
> >> say as part of this charter review.
> >>
> >> (It seems likely that we want to support it given that Richard is
> >> involved, and one of the proposed co-chairs.)
> >>
> >> -David
> >>
> >> --
> >> 𝄞   L. David Baron http://dbaron.org/   𝄂
> >> 𝄢   Mozilla  https://www.mozilla.org/   𝄂
> >>  Before I built a wall I'd ask to know
> >>  What I was walling in or walling out,
> >>  And to whom I was like to give offense.
> >>- Robert Frost, Mending Wall (1914)
> >>


Re: Proposed W3C Charter: Hardware Security Working Group

2016-03-01 Thread Richard Barnes
Mozilla should oppose the formation of this working group.  The charter
fails to specify concrete deliverables, and many of the potential
deliverables listed have been opposed several times by browser vendors,
e.g., because hardware assets exposed to JS can be used as super-cookies.

If anything is to be done here, it should be done in a community group or
other forum until they have a story for what exactly they will be
developing and how it fits with the web security model.

On Mon, Feb 29, 2016 at 8:34 PM, L. David Baron  wrote:

> The W3C is proposing a charter for:
>
>   Hardware Security Working Group
>   https://www.w3.org/2015/hasec/2015-hasec-charter.html
>   https://lists.w3.org/Archives/Public/public-new-work/2016Feb/0009.html
>
> Mozilla has the opportunity to send comments or objections through
> Friday, April 1.
>
> Please reply to this thread if you think there's something we should
> say as part of this charter review.
>
> (My understanding is that there is some concern that this work could
> create supercookie-like features, which would be bad.)
>
> -David
>
> --
> 𝄞   L. David Baron http://dbaron.org/   𝄂
> 𝄢   Mozilla  https://www.mozilla.org/   𝄂
>  Before I built a wall I'd ask to know
>  What I was walling in or walling out,
>  And to whom I was like to give offense.
>- Robert Frost, Mending Wall (1914)


Re: SharedArrayBuffer and Atomics will ride the trains behind a pref

2016-03-03 Thread Richard Barnes
Another good reason for blocking this for now is that it lets JavaScript
circumvent the 5usec granularity of performance.now() and do things like
stealing private keys.

https://www.w3.org/TR/hr-time/#privacy-security
http://iss.oy.ne.ro/SpyInTheSandbox.pdf
https://bugzilla.mozilla.org/show_bug.cgi?id=1252035#c9

We must not turn this on by default in any branch other than Nightly until
we can assure that the 5usec boundary will be maintained.
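
For anyone who hasn't seen the attack shape, here is a minimal sketch of
why shared memory defeats the clamp; "worker.js" is an illustrative
filename and the worker body is shown in comments.

```
// A worker spinning on shared memory becomes a clock far finer than 5usec.
// worker.js would contain, roughly:
//   onmessage = function (e) {
//     var counter = new Int32Array(e.data);
//     while (true) Atomics.add(counter, 0, 1);
//   };
var sab = new SharedArrayBuffer(4);
var counter = new Int32Array(sab);
new Worker("worker.js").postMessage(sab);

var t0 = Atomics.load(counter, 0);
/* ... memory access being timed ... */
var t1 = Atomics.load(counter, 0);  // t1 - t0 is a high-resolution delta
```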

--Richard


On Fri, Jan 15, 2016 at 12:10 AM, Lars Hansen  wrote:

> It's not enabled by default because the API is probably not fully baked
> yet; until the spec reaches Stage 3 at TC39 we should expect things to be
> fluid.  I don't expect that milestone to be reached until this summer.
>
> We've discussed enabling by default on Aurora, DevEd, and Beta once we
> reach Stage 2 at TC39, but I don't own that decision, can't guarantee it,
> and might even argue that it would be better to wait a couple of months
> after reaching Stage 2, which is when the spec gets serious attention from
> the committee.
>
> Google has what I understand to be a compatible implementation of the
> current spec, also available behind a pref (actually behind two of them
> last I heard).
>
> --lars
>
>
> On Thu, Jan 14, 2016 at 10:24 PM, Robert O'Callahan wrote:
>
> > Sounds good to me too. What's blocking us from enabling by default?
> >
> > Rob


Re: Intent to implement: W3C WebAppSec credentialmanagement API

2016-03-11 Thread Richard Barnes
On Thursday, March 10, 2016 at 11:27:34 PM UTC-5, Martin Thomson wrote:
> On Fri, Mar 11, 2016 at 5:56 AM, Axel Nennker  wrote:
> > no password generation help by the UA
> 
> I agree with MattN here, not doing this eliminates much of the
> advantage of having a password manager.  Or do you have a plan to rely
> on sites doing that with CredentialContainer.store()?  That doesn't
> sound optimal to me.

I think the idea would be something like:

```
var pass = /* generate a long random password */;
// PasswordCredential also takes an account identifier; the id here is illustrative
var cred = new PasswordCredential({id: "username@example.com", password: pass});
navigator.credentials.store(cred);
```

So the API, as an imperative interface to the password manager, doesn't do the
work for you, but (ISTM) it makes doing so more appealing, since you have more
assurance that the user will never have to see the password.

That does raise the question, however, of how such a credential differs from, 
say:

* A cookie
* A random nonce in localStorage/IDB
* A non-extractable WebCrypto key

By which I mean that if a website wants to verify that it is loaded in the same 
browser as before, it already has a variety of ways to do so, some of which 
offer better anti-theft properties than these Credential objects.  Presumably 
the fact that these are not being used means that the site wants some 
indication that it has the right *user*, not just the right browser.  In which 
case, generating a long random password is not so useful.
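
(To make the comparison concrete, here is a rough sketch of two of those
alternatives; the names are illustrative:)

```
// (a) Random nonce in localStorage: identifies the browser, but is
// readable by any script that can reach storage, hence stealable.
if (!localStorage.deviceNonce) {
  localStorage.deviceNonce =
    Array.from(crypto.getRandomValues(new Uint8Array(16))).join("-");
}

// (b) Non-extractable WebCrypto key: the private key can be used but
// never read out, which is the anti-theft property a stored password lacks.
crypto.subtle.generateKey(
  {name: "ECDSA", namedCurve: "P-256"},
  false,                // extractable = false
  ["sign", "verify"]
).then(function (keyPair) {
  // persist keyPair.privateKey in IndexedDB; sign server challenges with it
});
```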

--Richard


Re: Intent to ship: Treat cookies set over non-secure HTTP as session cookies

2016-04-18 Thread Richard Barnes
On Fri, Apr 15, 2016 at 5:45 PM, Matthew N.  wrote:

> On 2016-04-15 7:47 AM, Tantek Çelik wrote:
>
>> What steps can we take in this direction WITHOUT breaking web compat?
>>
>>
>> E.g. since one of the issues raised is that *every* time a user
>> enters/submits a password over HTTP (not secure), it opens them to
>> being sniffed etc., thus it's good to discourage the frequency.
>>
>> Some STRAW PROPOSALS that I expect others here (and UX folks) to
>> easily improve on:
>>
>> 1. Warning (perhaps similar to the invalid red glow) on password
>> inputs in forms with HTTP "action"
>>
>
> We are making progress towards this and Aislinn Grigas from UX worked on a
> design for something like this:
> https://bugzilla.mozilla.org/attachment.cgi?id=8678150
>
> We already started developer-specific warnings in the web console and in
> the address bar of Nightly + Developer Edition:
> https://hacks.mozilla.org/2016/01/login-forms-over-https-please/
>
> There are some dependencies to fix before doing user-facing warnings which
> we're currently working on. You can follow along in the bug:
> https://bugzilla.mozilla.org/show_bug.cgi?id=1217162
>
>> 2. Warning (similarly) on HTTP-auth password dialogs
>
> This is https://bugzilla.mozilla.org/show_bug.cgi?id=1185145 which I
> haven't seen a design for yet but should be less risky to implement than
> the warning on in-page password fields.  It is in the Firefox
> privacy/security team backlog.
>

Could we just disable HTTP auth for connections not protected with TLS?  At
least Basic auth is manifestly insecure over an insecure transport.  I
don't have any usage statistics, but I suspect it's pretty low compared to
form-based auth.
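
(For anyone who hasn't looked recently, a quick sketch of why: the Basic
scheme just base64-encodes the credentials into every request.  Values
here are illustrative.)

```
// The Authorization header for Basic auth is trivially reversible:
var header = "Basic " + btoa("alice:hunter2");
// -> "Basic YWxpY2U6aHVudGVyMg=="; atob() recovers "alice:hunter2",
// so any on-path observer of plain HTTP sees the password verbatim.
```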

--Richard


> Meta bug related to dealing with insecure login forms:
> https://bugzilla.mozilla.org/show_bug.cgi?id=1217142
>
> Thanks,
> Matthew N.
>


Intent to ship: Restrict geolocation.watchPosition to secure contexts

2016-04-21 Thread Richard Barnes
This is clearly a powerful feature, so it's a natural candidate for
restriction.  Chromium is restricting all of navigator.geolocation as of 50:

https://codereview.chromium.org/1530403002/

Our telemetry shows that only ~0.1% of the usage of watchPosition() is in
non-secure contexts.

http://mzl.la/1VEBbZq

That's low enough that we should go ahead and turn it off.

I have filed Bug 1266494 to track this issue.


Re: Intent to (sort of) unship SSLKEYLOGFILE logging

2016-04-26 Thread Richard Barnes
Keeping it in Nightly / Developer Edition seems like about the right
compromise to me.  I guess there's some marginal security in turning off
this capability in release browsers (though I have difficulty precisely
articulating the threat model where it makes sense).  But if we're going to
disable it at all, we should keep it around for developer-focused builds.

--Richard

On Tue, Apr 26, 2016 at 5:44 AM, Martin Thomson  wrote:

> On Tue, Apr 26, 2016 at 6:08 PM, Jonas Sicking  wrote:
> > Limiting this to aurora builds might make the most sense here since
> > that's what we're pushing as the build that developers should use.
>
> I'm OK with that; that's why I asked here.
>
> https://bugzilla.mozilla.org/show_bug.cgi?id=1188657


Intent to restrict to secure contexts: navigator.geolocation

2016-10-21 Thread Richard Barnes
The geolocation API allows web pages to request the user's geolocation,
drawing from things like GPS on mobile, and doing WiFi / IP based
geolocation on desktop.

Due to the privacy risks associated with this functionality, I would like
to propose that we restrict this functionality to secure contexts [1].

Our telemetry for geolocation is a little rough, but we can derive some
upper bounds.  According to telemetry from Firefox 49, the geolocation
permissions prompt has been shown around 4.6M times [2], on about 3B page
loads [3].  Around 21% of these requests were (1) from "http:" origins, and
(2) granted by the user.  So the average rate of permissions being granted
to non-secure origins per pageload is 4.6M * 21% / 3B = 0.0319%.
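
(Spelling out that arithmetic with the rounded inputs above; the small
difference from 0.0319% is just rounding:)

```
var grantedNonSecure = 4.6e6 * 0.21;       // ~9.7e5 grants from http: origins
var ratePerLoad = grantedNonSecure / 3e9;  // ~3.2e-4
console.log((ratePerLoad * 100).toFixed(4) + "%");  // "0.0322%"
```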

Access to geolocation from non-secure contexts is already disabled in
Chrome [4] and WebKit [5].

Please send any comments on this proposal by Friday, October 28.

Relevant bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1072859

[1] https://www.w3.org/TR/secure-contexts/
[2] https://mzl.la/2eeoWm9
[3] https://mzl.la/2eoiIAw
[4] https://codereview.chromium.org/1530403002/
[5] https://trac.webkit.org/changeset/200686


Re: Intent to restrict to secure contexts: navigator.geolocation

2016-10-22 Thread Richard Barnes
On Fri, Oct 21, 2016 at 8:59 PM, Chris Peterson wrote:

> On 10/21/2016 3:11 PM, Tantek Çelik wrote:
>
>>> Does this mean that we'd be breaking one in 5 geolocation requests as a
>>> result of this?  That seems super high.  :(
>>>
>> Agreed. For example, my understanding is that this will break
>> http://www.nextbus.com/ (and thus http://www.nextmuni.com/ ) location
>> awareness (useful for us SF folks), which is kind of essential for
>> having it tell you transit stops near you. -t
>>
>
> Indeed, the geolocation feature on nextbus.com is broken in Chrome. (The
> site shows a geolocation error message on first use.)
>
> Next Bus already has an HTTPS version of their site, but it is not the
> default and has some mixed-content warnings. For a site that uses
> geolocation as a core part of its service, I'm surprised they have let it
> stay broken in Chrome for six months. Chrome removed insecure geolocation
> in April 2016 and announced its deprecation in November 2015.


This is actually a bigger point than the telemetry one: The sites we
would break with this change have already been broken for six months in
Chrome and for four months in WebKit.  This is not something where we
should be standing on principle and bravely being different from other
browsers; in fact quite the opposite.

--Richard


Re: Intent to restrict to secure contexts: navigator.geolocation

2016-10-22 Thread Richard Barnes
On Fri, Oct 21, 2016 at 5:56 PM, Ehsan Akhgari wrote:

> Does this mean that we'd be breaking one in 5 geolocation requests as a
> result of this?  That seems super high.  :(
>

That's why I included the additional context.  Any feature we disable is
going to break 100% of pageloads that use that feature.  You need to take
into account how many pageloads that actually is.



> Since the proposal in the bug is adding [SecureContext] to
> Navigator.geolocation, have we also collected telemetry around which
> properties and methods are accessed?  Since another kind of breakage we
> may encounter is code like |navigator.geolocation.getCurrentPosition()|
> throwing an exception and breaking other parts of site scripts...
>

I'm not picky about how exactly we turn this off, as long as the
functionality goes away.  Chrome and Safari both immediately call the error
handler with the same error as if the user had denied permission.  We could
do that too, it would just be a little more code.
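
(For illustration, a sketch of what a page on an http: origin would observe
under each option; the handlers here are hypothetical.)

    // Chrome/Safari behavior: the API surface stays, the request fails.
    navigator.geolocation.getCurrentPosition(
      function (pos) { console.log(pos.coords.latitude); },
      function (err) {
        // err.code === err.PERMISSION_DENIED (1), as if the user said no
        console.log("denied:", err.code, err.message);
      }
    );
    // [SecureContext] behavior: navigator.geolocation is undefined, so the
    // call above throws a TypeError and can break unrelated site script.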

--Richard


>
> > Access to geolocation from non-secure contexts is already disabled in
> > Chrome [4] and WebKit [5].
> >
> > Please send any comments on this proposal by Friday, October 28.
> >
> > Relevant bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1072859
> >
> > [1] https://www.w3.org/TR/secure-contexts/
> > [2] https://mzl.la/2eeoWm9
> > [3] https://mzl.la/2eoiIAw
> > [4] https://codereview.chromium.org/1530403002/
> > [5] https://trac.webkit.org/changeset/200686
>
>


Re: Intent to restrict to secure contexts: navigator.geolocation

2016-10-24 Thread Richard Barnes
On Mon, Oct 24, 2016 at 6:29 PM, Adam Roach  wrote:

> I'm hearing general agreement that we think turning this off is the right
> thing to do; that maintaining compatibility with Chrome's behavior is
> important (since that's what existing code will presumably be tested
> against); and -- as bz points out -- we don't want to throw an exception
> here for spec compliance purposes. I propose that we move forward with a
> plan to immediately deny permission in non-secure contexts. Kan-Ru's
> proposal that we put this behind a pref seems like a good one -- that way,
> if we discover that something unexpected happens in deployment, it's a very
> simple fix to go back to our current behavior.
>

This plan sounds fine to me.  Thanks for summarizing, Adam.



> I would be hesitant to over-analyze additional complications, such as
> https-everywhere or user education on this topic. We are, after all, simply
> coming into alignment with the rest of the web ecosystem here.
>

+1

--Richard



>
> /a
>
>
> On 10/22/16 12:05, Ehsan Akhgari wrote:
>
>> On 2016-10-22 10:16 AM, Boris Zbarsky wrote:
>>
>>> On 10/22/16 9:38 AM, Richard Barnes wrote:
>>>
>>>> I'm not picky about how exactly we turn this off, as long as the
>>>> functionality goes away.  Chrome and Safari both immediately call the
>>>> error
>>>> handler with the same error as if the user had denied permission.  We
>>>> could
>>>> do that too, it would just be a little more code.
>>>>
>>> Uh...  What does the spec say to do?
>>>
>> It seems like the geolocation spec just says the failure callback needs
>> to be called when permission is denied, with the PERMISSION_DENIED
>> code, but doesn't mention anything about non-secure contexts.  The
>> permissions spec explicitly says that geolocation *is* allowed in
>> non-secure contexts <https://w3c.github.io/permissions/#geolocation>.
>> The most relevant thing I can find is
>> <https://w3c.github.io/webappsec-secure-contexts/#legacy-example>, which
>> is an implementation consideration.  But as far as I can tell, this is
>> not spec'ed.
>>
>>> Your intent, and the whole "sites that would break are already broken"
>>> thing sounded like we were going to match Chrome and Safari behavior; if
>>> that was not the plan you really needed to explicitly say so!
>>>
>> Yes, indeed.  It seems that making Navigator.geolocation [SecureContext]
>> is incompatible with their implementation.
>>
>>> We certainly should not be shipping anything that will change behavior
>>> here to something _different_ from what Chrome and Safari are shipping,
>>> assuming they are shipping compatible things.  Again, what does the spec
>>> say to do?
>>>
>>> -Boris
>>
>
>
> --
> Adam Roach
> Principal Engineer, Mozilla
>
>


Intent to ship navigator.sendBeacon

2014-04-16 Thread Richard Barnes

Primary eng emails

rbar...@mozilla.com, eh...@mozilla.com

Spec
http://www.w3.org/TR/beacon/


*Summary*

Allows pages to send a "beacon" HTTP request.  Beacons are allowed a 
limited subset of HTTP (only a few content types), and the JS cannot 
receive the content of the response.  However, beacon requests will 
survive after the page is unloaded, removing the need for synchronous 
XHRs in onunload handlers.
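
(A sketch of the usage pattern this replaces; the endpoint is hypothetical.)

    // Today: a synchronous XHR in the unload path blocks the navigation.
    window.addEventListener("unload", function () {
      var xhr = new XMLHttpRequest();
      xhr.open("POST", "/analytics", false);  // false = synchronous
      xhr.send("event=leave");
    });

    // With sendBeacon: queue-and-forget; delivery survives the unload.
    window.addEventListener("unload", function () {
      navigator.sendBeacon("/analytics", "event=leave");
    });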


The specification is currently under development in W3C, but has been 
substantially stable for a while.

http://www.w3.org/TR/beacon/
https://dvcs.w3.org/hg/webperf/raw-file/tip/specs/Beacon/Overview.html

It landed behind a runtime flag in mozilla:
https://bugzilla.mozilla.org/show_bug.cgi?id=936340

The flag is: beacon.enabled

It is turned on by default in Nightly.
https://bugzilla.mozilla.org/show_bug.cgi?id=990220


*WebKit:*

We are not aware of implementation effort in WebKit.


*Blink:*

The Chromium team has announced their intent to implement and opened a bug.

https://groups.google.com/a/chromium.org/forum/#!msg/blink-dev/Vdi7F7Mk_rM/L5D9HjNmt8YJ
https://code.google.com/p/chromium/issues/detail?id=360603





Re: Intent to ship navigator.sendBeacon

2014-04-16 Thread Richard Barnes

On Apr 16, 2014, at 10:52 AM, Benjamin Smedberg  wrote:

> On 4/16/2014 9:30 AM, Richard Barnes wrote:
>> 
>> Allows pages to send a "beacon" HTTP request.  Beacons are allowed a limited 
>> subset of HTTP (only a few content types), and the JS cannot receive the 
>> content of the response.  However, beacon requests will survive after the 
>> page is unloaded, removing the need for synchronous XHRs in onunload 
>> handlers.
> 
> Are beacons primarily meant as tracking devices, or is it also meant as a way 
> to persist unsaved page state when the user navigates?
> 
> I can't imagine that there is any reasonable way to expose UI prefs 
> specifically about beacons, but should we disable beacons by default if the 
> user has do-not-track enabled? Or will we leave a hidden pref so that 
> privacy-sensitive extensions could disable beacon functionality if they 
> wished?
> 
> --BDS
> 

I think the answer is that beacons are good for whenever you want to push state 
to the server and don't care about getting results back.  So both tracking (as 
the name implies) and persisting state.  I expect tracking to be the initial 
use case, but I wouldn't be surprised to see it get picked up for other cases 
within the category above, since the API is a lot simpler than XHR.  

As far as preferences, the bug to enable it by default leaves the preference 
there, so it could be disabled by, say, an add-on.

--Richard



Re: Intent to ship navigator.sendBeacon

2014-04-16 Thread Richard Barnes

On Apr 16, 2014, at 1:49 PM, Gavin Sharp  wrote:

> On Wed, Apr 16, 2014 at 8:56 AM, Ehsan Akhgari  
> wrote:
>>> Are beacons primarily meant as tracking devices, or is it also meant as
>>> a way to persist unsaved page state when the user navigates?
> 
>> Beacons do not enable any new ways of tracking which is not already
>> possible.
> 
> That's not an answer to bsmedberg's question. The question is whether
> there are expected to be enough non-"tracking" use cases for
> sendBeacon in practice, such that it would become problematic to
> conflate "disable sendBeacon" with "disable tracking".

I don't know about "problematic", but ISTM that it might be useless.  If people 
disable sendBeacon in an effort to avoid tracking, then the trackers can always 
just test and polyfill with XHR.  If you really want "disable tracking", you're 
going to have to do a lot more, and probably break a lot of the web.
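
(Concretely, the trivial test-and-polyfill would look something like this
sketch.)

    function beacon(url, data) {
      // Use sendBeacon when present and accepted...
      if (navigator.sendBeacon && navigator.sendBeacon(url, data)) {
        return;
      }
      // ...otherwise fall back to a synchronous XHR, as before.
      var xhr = new XMLHttpRequest();
      xhr.open("POST", url, false);
      xhr.send(data);
    }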

--Richard


> 
> Gavin





WebCrypto for http:// origins

2014-09-11 Thread Richard Barnes
Hey all,

Sorry for being late to the party here.  I now subscribe to dev.platform :)

On the issue of whether WebCrypto should be restricted to secure origins: In 
discussions I've had with folks around Mozilla, we have not seen sufficient 
security risks to motivate cutting off the potential benefits of exposing 
crypto utilities to non-secure origins.

As a top-level point, we are in total agreement with the Chrome team that we 
need more encryption in the web.  We should be taking advantage of more 
opportunities to add cryptographic protections to web applications.  So our 
default position is to provide web developers more tools for doing 
cryptography, when they can provide even incremental benefit.

Most notably, even over non-secure origins, application-layer encryption can 
provide resistance to passive adversaries.  Given that a bunch of the pervasive 
monitoring threats we've been talking about are purely passive, that's a 
non-trivial win.  If the encryption keys are made non-extractable, you're even 
protected against active attackers stealing them later (as long as the first 
load is clean).  And that's not to mention that there are entirely 
non-security-sensitive use cases for faster hashing using crypto.subtle.digest().
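
(Two quick sketches of what I mean, for the record:)

    // Fast hashing, with no security assumptions at all:
    var data = new TextEncoder().encode("some large payload");
    crypto.subtle.digest("SHA-256", data).then(function (digest) {
      console.log(new Uint8Array(digest).length);  // 32-byte result
    });

    // A non-extractable key: even script injected later can't export it.
    crypto.subtle.generateKey(
      { name: "AES-GCM", length: 256 },
      false,  // extractable = false
      ["encrypt", "decrypt"]
    ).then(function (key) { /* usable for encrypt/decrypt, not export */ });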

No, WebCrypto on an http:// origin is not a replacement for TLS.  Yes, you can 
still be subverted by an active attacker.  The bath-water is dirty, but there's 
still a baby in it.

--Richard



Re: WebCrypto for http:// origins

2014-09-11 Thread Richard Barnes

On Sep 11, 2014, at 9:08 AM, Anne van Kesteren  wrote:

> On Thu, Sep 11, 2014 at 5:56 PM, Richard Barnes  wrote:
>> Most notably, even over non-secure origins, application-layer encryption can 
>> provide resistance to passive adversaries.
> 
> See https://twitter.com/sleevi_/status/509723775349182464 for a long
> thread on Google's security people not being particularly convinced by
> that line of reasoning.

Reasonable people often disagree in their cost/benefit evaluations.

As Adam explains much more eloquently, the Google security team has had an 
"all-or-nothing" attitude on security in several contexts.  For example, in the 
context of HTTP/2, Mozilla and others have been working to make it possible to 
send http-schemed requests over TLS, because we think it will result in more of 
the web getting some protection.  Google have been less sanguine about this 
idea, because they worry that some sites will opt for a lower security level 
instead of full-on HTTPS.

So allowing WebCrypto for non-secure origins is consistent with the "something 
is better than nothing" approach we've taken in other places, and Chrome's 
prohibition is consistent with their approach.  As Adam points out, in the 
post-Snowden world, there are a lot more people who are willing to accept lots 
of things getting OK protection, vs. fewer things getting high-grade protection.

--Richard


Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-09-15 Thread Richard Barnes

On Sep 15, 2014, at 5:11 AM, Henri Sivonen  wrote:

> On Mon, Sep 15, 2014 at 11:24 AM, Daniel Stenberg  wrote:
>> On Mon, 15 Sep 2014, Henri Sivonen wrote:
>>> What the Chrome folks suggest for HTTP/2 would give rise to a situation
>>> where your alternatives are still one one hand unencrypted and
>>> unauthenticated and on the other hand encrypted and authenticated *but* the
>>> latter is *faster*.
>> 
>>> You mess up that reversal of the speed argument if you let unauthenticated
>>> be as fast as authenticated.
>> 
>> In my view that is a very depressing argument. That's favouring *not*
>> improving something just to make sure the other option runs faster in
>> comparision. Shouldn't we strive to make the user experience better for all
>> users, even those accessing HTTP sites?
> 
> I think the primary way for making the experience better for users
> currently accessing http sites should be getting the sites to switch
> to https so that subsequently people accessing those sites would be
> accessing https sites. That way, the user experience not only benefits
> from HTTP/2 performance but also from the absence of ISP-injected ads
> or other MITMing.

"Just turn on HTTPS" is not as trivial as you seem to think.  For example, 
mixed content blocking means that you can't upgrade until all of your external 
dependencies have too.

--Richard



>> In a world with millions and billions of printers, fridges, TVs, settop
>> boxes, elevators, nannycams or whatever all using embedded web servers - the
>> amount of certificate handling for all those devices to run and use fully
>> authenticated HTTPS is enough to make a large number of those just not
>> go there.
> 
> It seems like a very bad idea not to have authenticated security for
> devices that provide access to privacy-sensitive data (nannycams,
> fridges, DVRs) or that allow intruders to effect unwanted
> physical-world behaviors (printers, elevators).
> 
> For devices like this that are exposed to the public network, I think
> it would be worthwhile to make it feasible for dynamic DNS providers
> to run a publicly trusted sub-CA that's constrained to issuing certs
> only to host under their domain (i.e. not allowed to sign all names on
> the net).
> 
> For devices that aren't exposed to the public network, maybe we should
> make the TOFU interstitial for self-signed certs different for RFC1918
> IP addresses or at least 192.168.*.*. (Explain that if you are on your
> home network and accessing an appliance for the first time, it's OK
> and expected to create an exception to pin that particular public key
> for that IP address. However, if you are on a hotel or coffee shop
> network, don't.)
> 
> -- 
> Henri Sivonen
> hsivo...@hsivonen.fi
> https://hsivonen.fi/



Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-09-21 Thread Richard Barnes
Pretty sure that what he's referring to is called DANE.  It lets a domain 
holder assert a certificate or key pair, using DNSSEC to bind it to the domain 
instead of PKIX (or in addition to PKIX).

https://tools.ietf.org/html/rfc6698
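
(For reference, a TLSA record binding a key to a name looks roughly like
this; the digest is a placeholder.  The "3 1 1" means DANE-EE / SPKI /
SHA-256, i.e., "this is the server's key, full stop".)

    _443._tcp.www.example.com. IN TLSA 3 1 1 (
        0d6fce3d9e8c2f42a1b7...placeholder-SHA-256-of-the-SPKI... )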



On Sep 21, 2014, at 8:01 AM, Anne van Kesteren  wrote:

> On Sun, Sep 21, 2014 at 1:14 PM, Aryeh Gregor  wrote:
>> What happened to serving certs over DNSSEC?  If browsers supported
>> that well, it seems it has enough deployment on TLDs and registrars to
>> be usable to a large fraction of sites.
> 
> DNSSEC does not help with authentication of domains and establishing a
> secure communication channel as far as I know. Is there a particular
> proposal you are referring to?
> 
> 
> -- 
> https://annevankesteren.nl/



Re: Deprecate geolocation and getUserMedia() for unauthenticated origins

2014-09-26 Thread Richard Barnes
Speaking as someone who (1) chaired the IETF working group on geolocation and 
privacy for several years, and (2) now manages PKI and crypto for Mozilla -- 
this is nonsense as stated.  It is not our job to break the HTTP-schemed web to 
force everyone to HTTPS.

Users and web sites have been using geolocation on unauthenticated origins for 
several years now without major implications.  The most common uses involve 
one-shot access to location for things like content customization.  It's no 
more dangerous than me typing my address into a form.

I could agree with Henri's suggestion in the bug that we should limit 
persistent permissions to authenticated origins, as we do with gUM.  But in 
neither case have I heard any coherent rationale for disabling the features 
entirely, beyond "Nobody should use HTTP anymore", which is clearly a 
non-starter.

--Richard 


On Sep 26, 2014, at 3:58 PM, Anne van Kesteren  wrote:

> Exposing geolocation on unauthenticated origins was a mistake. Copying
> that for getUserMedia() is too. I suggest that to protect our users we
> make some noise about deprecating this practice. And that in that
> message we convey we plan to disable both on unauthenticated origins
> once 2015 is over.
> 
> More immediately we should make it impossible to make persistent
> grants for these features on unauthenticated origins.
> 
> I can reach out to Google (and Apple & Microsoft I suppose, though I
> haven't seen much from them on the pro-TLS front) to see if they would
> be on board with this and help us spread the message.
> 
> I filed
> 
>  https://bugzilla.mozilla.org/show_bug.cgi?id=1072859
> 
> for geolocation.
> 
> 
> -- 
> https://annevankesteren.nl/



Re: Deprecate geolocation and getUserMedia() for unauthenticated origins

2014-09-27 Thread Richard Barnes

On Sep 27, 2014, at 3:02 AM, Anne van Kesteren  wrote:

> On Fri, Sep 26, 2014 at 11:06 PM, Richard Barnes  wrote:
>> It is not our job to break the HTTP-schemed web to force everyone to HTTPS.
> 
> It is for features where it matters for end users.
> 
> 
>> Users and web sites have been using geolocation on unauthenticated origins 
>> for several years now without major implications.
> 
> Citation needed.
> 
> 
>> It's no more dangerous than me typing my address into a form.
> 
> Those forms should also be behind TLS. But obviously forcing that is
> far less practical than taking steps with geolocation.

Are you making an argument more subtle than "everything should be HTTPS, so we 
should make HTTP less functional"?  

I don't disagree that things should be HTTPS, but breaking the HTTP-schemed web 
is not the way to get there.  That's like Verizon trying to get people to use 
their favorite video streaming site by slowing down all the others.

Since the major barrier to HTTPS deployment is the difficulty of deploying and 
maintaining it, the better strategy is to make HTTPS simpler to deploy.  Some 
of the hosting providers have been making good progress on this front.

--Richard


> 
> 
> -- 
> https://annevankesteren.nl/



Re: Deprecate geolocation and getUserMedia() for unauthenticated origins

2014-09-28 Thread Richard Barnes

On Sep 28, 2014, at 6:26 AM, Anne van Kesteren  wrote:

> On Sat, Sep 27, 2014 at 10:10 PM, Richard Barnes  wrote:
>> Are you making an argument more subtle than "everything should be HTTPS, so 
>> we should make HTTP less functional"?
> 
> I'm not sure where you see me making that argument in this thread. I
> simply recommended we move to require TLS for privacy-sensitive APIs.
> And it still seems within the realm of possibility to do that for
> geolocation and getUserMedia().

You say "privacy-sensitive API" as if it were a boolean variable.  But there's 
a whole continuum of risk here.

Arguably any API that can collect information from the user or transmit it on 
the wire is privacy-sensitive.  Should onmouseover and XHR be limited to secure 
origins?  After all, you can collect biometrics from keyboard and mouse events, 
and use them to identify users with fairly high fidelity [1].  If you continue 
down this line of thinking, you end up saying that HTTP-schemed sites can't 
have any meaningful user interaction.  This is what I mean by "breaking the 
HTTP web".  

Yes, we have to draw lines.  Service workers and gUM are a good example of a 
line that can be drawn: things that allow persistent, long-lived 
capabilities should only be granted for secure origins.  There, especially in 
the case of service workers, you end up with a "bell cannot be un-rung" 
scenario.  That could make sense for geolocation, in the sense that Henri 
raised.

I haven't heard a similar argument for restricting geolocation altogether.


>> I don't disagree that things should be HTTPS, but breaking the HTTP-schemed 
>> web is not the way to get there.  That's like Verizon trying to get people 
>> to use their favorite video streaming site by slowing down all the others.
> 
> No, it's not at all like that. It's about tightening the requirements
> for getting access to privacy-sensitive user data. All this is saying
> is that if you want this user data, you need to deploy TLS on your
> site. And that can be done for free although it is unfortunately still
> a bit of a hassle. But guides have been created, communities have
> collected pointers, and gradually it will spread through conferences
> how to get this done.

It's pretty glib to call something "a bit of a hassle" that can trip up even 
such well-resourced organizations as Apple [2].

--Richard

[1] <http://www.darpa.mil/OpenCatalog/AA.html>
[2] <http://www.macrumors.com/2014/05/25/apple-software-update-invalid/>


Re: Deprecate geolocation and getUserMedia() for unauthenticated origins

2014-10-02 Thread Richard Barnes

On Sep 30, 2014, at 5:36 PM, Ehsan Akhgari  wrote:

> On 2014-09-30, 4:29 AM, Henri Sivonen wrote:
>>> More immediately we should make it impossible to make persistent
>>> grants for these features on unauthenticated origins.
>> 
>> This I agree with when it comes to privacy-sensitive API: Granting a
>> persistent permission to an http: origin amounts to granting a
>> persistent permission to everyone who in the future has a chance of
>> performing an active MITM attack on you.
> 
> I also think that we should definitely stop persisting the geolocation 
> permission grant for non-authenticated origins.  I'm not really sure if web 
> compat allows us to remove support for the API completely (although 
> admittedly I don't have data on this.)

Either way, we should collect some data before we take action.


> 



Re: USB security keys

2014-10-21 Thread Richard Barnes

> On Oct 21, 2014, at 4:08 PM, Robert O'Callahan  wrote:
> 
> http://googleonlinesecurity.blogspot.co.nz/2014/10/strengthening-2-step-verification-with.html
> We should support this.

Maybe I'm just jaded, but given that we're currently in the process of phasing 
out custom APIs for one specialized hardware platform, I'm not super 
enthusiastic about adding support for another one.

There's a conversation going on between some folks in the platform security and 
FxOS security teams working on an overall strategy for secure hardware, so that 
we don't have to cut fresh code every time someone comes up with a new identity 
scheme.  We will hopefully have something baked enough to share around soon.

Note that that blog post glosses over a couple of important details of the 
Chrome implementation.  First, it's non-native; it's a bundled extension, like 
Flash.  And second, it's only enabled for google.com, so it's not really a 
web-facing feature.

--Richard



> Rob



Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-11-12 Thread Richard Barnes

> On Nov 12, 2014, at 4:35 AM, Anne van Kesteren  wrote:
> 
> On Mon, Sep 15, 2014 at 7:56 PM, Adam Roach  wrote:
>> The whole line of argumentation that web browsers and servers should be
>> taking advantage of opportunistic encryption is explicitly informed by
>> what's actually "happening elsewhere." Because what's *actually* happening
>> is an overly-broad dragnet of personal information by a wide variety of both
>> private and governmental agencies -- activities that would be prohibitively
>> expensive in the face of opportunistic encryption.
> 
> ISPs are doing it already it turns out. Governments getting to ISPs
> has already happened. I think continuing to support opportunistic
> encryption in Firefox and the IETF is harmful to our mission.

You're missing Adam's point.  From the attacker's perspective, opportunistic 
sessions are indistinguishable from fully authenticated TLS sessions, which 
is exactly what makes a broad passive dragnet prohibitively expensive.


>> Google's laser focus on preventing active attackers to the exclusion of any
>> solution that thwarts passive attacks is a prime example of insisting on a
>> perfect solution, resulting instead in substantial deployments of nothing.
>> They're naïvely hoping that finding just the right carrot will somehow
>> result in mass adoption of an approach  that people have demonstrated, with
>> fourteen years of experience, significant reluctance to deploy universally.
> 
> Where are you getting your data from?
> 
> https://plus.google.com/+IlyaGrigorik/posts/7VSuQ66qA3C shows a very
> different view of what's happening.

Be careful how you count.  Ilya's stats are equivalent to the Firefox 
HTTP_TRANSACTION_IS_SSL metric [1], which counts things like search box 
background queries; in particular, it greatly over-samples Google.

A more realistic number is HTTP_PAGELOAD_IS_SSL [2], for which HTTPS adoption is 
still around 30%.  That's consistent with other measures of how many sites out 
there support HTTPS.

--Richard

[1] 
http://telemetry.mozilla.org/#filter=release%2F32%2FHTTP_TRANSACTION_IS_SSL&aggregates=multiselect-all!Submissions&evoOver=Builds&locked=true&sanitize=true&renderhistogram=Graph

[2] 
http://telemetry.mozilla.org/#filter=release%2F32%2FHTTP_PAGELOAD_IS_SSL&aggregates=multiselect-all!Submissions&evoOver=Builds&locked=true&sanitize=true&renderhistogram=Graph



> 
> 
> -- 
> https://annevankesteren.nl/



Intent to deprecate: Insecure HTTP

2015-04-13 Thread Richard Barnes
There's pretty broad agreement that HTTPS is the way forward for the web.
In recent months, there have been statements from IETF [1], IAB [2], W3C
[3], and even the US Government [4] calling for universal use of
encryption, which in the case of the web means HTTPS.

In order to encourage web developers to move from HTTP to HTTPS, I would
like to propose establishing a deprecation plan for HTTP without security.
Broadly speaking, this plan would entail  limiting new features to secure
contexts, followed by gradually removing legacy features from insecure
contexts.  Having an overall program for HTTP deprecation makes a clear
statement to the web community that the time for plaintext is over -- it
tells the world that the new web uses HTTPS, so if you want to use new
things, you need to provide security.  Martin Thomson and I drafted a
one-page outline of the plan with a few more considerations here:

https://docs.google.com/document/d/1IGYl_rxnqEvzmdAP9AJQYY2i2Uy_sW-cg9QI9ICe-ww/edit?usp=sharing

Some earlier threads on this list [5] and elsewhere [6] have discussed
deprecating insecure HTTP for "powerful features".  We think it would be a
simpler and clearer statement to avoid the discussion of which features are
"powerful" and focus on moving all features to HTTPS, powerful or not.

The goal of this thread is to determine whether there is support in the
Mozilla community for a plan of this general form.  Developing a precise
plan will require coordination with the broader web community (other
browsers, web sites, etc.), and will probably happen in the W3C.

Thanks,
--Richard

[1] https://tools.ietf.org/html/rfc7258
[2]
https://www.iab.org/2014/11/14/iab-statement-on-internet-confidentiality/
[3] https://w3ctag.github.io/web-https/
[4] https://https.cio.gov/
[5]
https://groups.google.com/d/topic/mozilla.dev.platform/vavZdN4tX44/discussion
[6]
https://groups.google.com/a/chromium.org/d/topic/blink-dev/2LXKVWYkOus/discussion


Re: Intent to deprecate: Insecure HTTP

2015-04-13 Thread Richard Barnes
On Mon, Apr 13, 2015 at 3:00 PM, Frederik Braun  wrote:

> On 13.04.2015 20:52, david.a.p.ll...@gmail.com wrote:
> >
> >> 2) Protected by subresource integrity from a secure host
> >>
> >> This would allow website operators to securely serve static assets from
> non-HTTPS servers without MITM risk, and without breaking transparent
> caching proxies.
> >
> > Is that a complicated word for SHA512 HASH? :)  You could envisage a new
> http URL pattern http://video.vp9?
>
> I suppose Subresource Integrity would be http://www.w3.org/TR/SRI/ -
>
> But, note that this will not give you extra security UI (or less
> warnings): Browsers will still disable scripts served over HTTP on an
> HTTPS page - even if the integrity matches.
>
> This is because HTTPS promises integrity, authenticity and
> confidentiality. SRI only provides the former.
>

I agree that we should probably not allow insecure HTTP resource to be
looped in through SRI.

There are several issues with this idea, but the one that sticks out for me
is the risk of leakage from HTTPS through these http-schemed resource
loads.  For example, the fact that you're loading certain images might
reveal which Wikipedia page you're reading.
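
(For reference, SRI as specced looks like this; the digest is a placeholder.
The idea above would relax the src to http://, which is what I'm arguing
against.)

    <script src="https://cdn.example.net/lib.js"
            integrity="sha384-PLACEHOLDERbase64digestPLACEHOLDER"
            crossorigin="anonymous"></script>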

--Richard


>


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Richard Barnes
On Mon, Apr 13, 2015 at 5:11 PM,  wrote:

> One limiting factor is that Firefox doesn't treat form data the same on
> HTTPS sites.
>
> Examples:
>
>
> http://stackoverflow.com/questions/14420624/how-to-keep-changed-form-content-when-leaving-and-going-back-to-https-page-wor
>
>
> http://stackoverflow.com/questions/10511581/why-are-html-forms-sometimes-cleared-when-clicking-on-the-browser-back-button
>
> After losing a few forum posts or wiki edits to this bug in Firefox, you
> quickly insist on using unsecured HTTP as often as possible.
>

Interesting observation.  ISTM that that's a bug in HTTPS.  At least I
don't see an obvious security reason for the behavior to be that way.

More generally: I expect that this process will turn up bugs in HTTPS
behavior, either "actual" bugs in terms of implementation errors, or
"logical" bugs where the intended behavior does not meet the expectations
or needs of websites.  So we should be open to adapting our HTTPS behavior
some (within the bounds of the security requirements) in order to
facilitate this transition.

--Richard



>


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Richard Barnes
On Mon, Apr 13, 2015 at 7:03 PM, Martin Thomson  wrote:

> On Mon, Apr 13, 2015 at 3:53 PM, Eugene 
> wrote:
> > In addition to APIs, I'd like to propose prohibiting caching any
> resources loaded over insecure HTTP, regardless of Cache-Control header, in
> Phase 2.N.
>
> This has some negative consequences (if only for performance).  I'd
> like to see changes like this properly coordinated.  I'd rather just
> treat "caching" as one of the features for Phase 2.N.
>

That seems sensible.

I was about to propose a lifetime limit on caching (say a few hours?) to
limit the persistence scope of MitM, i.e., require periodic re-infection.
There may be ways to circumvent this (e.g., the MitM's code sending cache
priming requests), but it seems incrementally better.
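
(Illustratively, with a hypothetical three-hour cap on http:// responses:)

    Cache-Control: max-age=2592000    <- what the response asks for (30 days)
    effective = min(2592000, 10800) = 10800 seconds (3 hours)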

--Richard



>


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Richard Barnes
On Mon, Apr 13, 2015 at 9:43 PM,  wrote:

> On Monday, April 13, 2015 at 8:57:41 PM UTC-4, northrupt...@gmail.com
> wrote:
> >
> > * Less scary warnings about self-signed certificates (i.e. treat
> HTTPS+selfsigned like we do with HTTP now, and treat HTTP like we do with
> HTTPS+selfsigned now); the fact that self-signed HTTPS is treated as less
> secure than HTTP is - to put this as politely and gently as possible - a
> pile of bovine manure
>
> This feature (i.e. opportunistic encryption) was implemented in Firefox
> 37, but unfortunately an implementation bug made HTTPS insecure too. But I
> guess Mozilla will fix it and make this feature available in a future
> release.
>
> > * Support for a decentralized (blockchain-based, ala Namecoin?)
> certificate authority
> >
> > Basically, the current CA system is - again, to put this as gently and
> politely as possible - fucking broken.  Anything that forces the world to
> rely on it exclusively is not a solution, but is instead just going to make
> the problem worse.
>
> I don't think the current CA system is broken. The domain name
> registration is also centralized, but almost every website has a hostname,
> rather than using IP address, and few people complain about this.
>

I would also note that Mozilla is contributing heavily to Let's Encrypt,
which is about as close to a decentralized CA as we can get with current
technology.

If people have ideas for decentralized CAs, I would be interested in
listening, and possibly adding support in the long run.  But unfortunately,
the state of the art isn't quite there yet.

--Richard




>


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Richard Barnes
On Mon, Apr 13, 2015 at 11:26 PM,  wrote:

> > * Less scary warnings about self-signed certificates (i.e. treat
> HTTPS+selfsigned like we do with HTTP now, and treat HTTP like we do with
> HTTPS+selfsigned now); the fact that self-signed HTTPS is treated as less
> secure than HTTP is - to put this as politely and gently as possible - a
> pile of bovine manure
>
> I am against this. Both are insecure and should be treated as such. How is
> your browser supposed to know that gmail.com is intended to serve a
> self-signed cert? It's not, and it cannot possibly know it in the general
> case. Thus it must be treated as insecure.
>

This is a good point.  This is exactly why the opportunistic security
feature in Firefox 37 enables encryption without certificate checks for
*http* resources.
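
(As I recall the mechanism, the cleartext server opts in by advertising an
alternate service in its response, roughly:)

    HTTP/1.1 200 OK
    Alt-Svc: h2=":443"; ma=86400

The browser can then re-route http:// requests for that host over TLS on
port 443, without certificate validation and without changing the URL or
the origin.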

--Richard



> > * Support for a decentralized (blockchain-based, ala Namecoin?)
> certificate authority
>
> No. Namecoin has so many other problems that it is not feasible.
>
> > Basically, the current CA system is - again, to put this as gently and
> politely as possible - fucking broken.  Anything that forces the world to
> rely on it exclusively is not a solution, but is instead just going to make
> the problem worse.
>
> Agree that it's broken. The fact that any CA can issue a cert for any
> domain is stupid, always was and always will be. It's now starting to bite
> us.
>
> However, HTTPS and the CA system don't have to be tied together. Let's
> ditch the immediately insecure plain HTTP, then add ways to authenticate
> trusted certs in HTTPS by means other than our current CA system. The two
> problems are orthogonal, and trying to solve both at once will just leave
> us exactly where we are: trying to argue for a fundamentally different
> system.
>


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Richard Barnes
On Mon, Apr 13, 2015 at 10:10 PM, Karl Dubost  wrote:

>
> On 14 Apr 2015, at 10:43, imfasterthanneutr...@gmail.com wrote:
> > I don't think the current CA system is broken.
>
> The current CA system creates issues for certain categories of population.
> It is broken in some ways.
>
> > The domain name registration is also centralized, but almost every
> website has a hostname, rather than using IP address, and few people
> complain about this.
>
> Two points:
>
> 1. You do not need to register a domain name to have a Web site (IP
> address)
> 2. You do not need to register a domain name to run a local blah.test.site
>
> Both are still working and not deprecated in browsers ^_^
>
> Now, the fact that you have to rent your domain name ($$$), and that all
> URIs are tied to it as permanent identifiers in the fabric of information
> over time, has strong social consequences. But that's another debate than
> the one of this thread on deprecating HTTP in favor of HTTPS.
>

This is a fair point, and we should probably figure out a way to
accommodate these.  My inclination is to mostly punt this to manual
configuration (e.g., installing a new trusted cert/override), since we're
not talking about generally available public service on the Internet.  But
if there are more elegant solutions that don't reduce security, I would be
interested to hear them.



> I would love to see this discussion happening in Whistler too.
>

Agreed.  That sounds like an excellent opportunity to hammer out details
here, assuming we can agree on overall  direction in the meantime.

--Richard



>
> --
> Karl Dubost, Mozilla
> http://www.la-grange.net/karl/moz
>
>


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Richard Barnes
On Tue, Apr 14, 2015 at 3:55 AM, Yoav Weiss  wrote:

> On Tue, Apr 14, 2015 at 8:22 AM, Anne van Kesteren 
> wrote:
>
> > On Tue, Apr 14, 2015 at 7:52 AM, Yoav Weiss  wrote:
> > > Limiting new features does absolutely nothing in that aspect.
> >
> > Hyperbole much? CTO of the New York Times cited HTTP/2 and Service
> > Workers as a reason to start deploying HTTPS:
> >
> >   http://open.blogs.nytimes.com/2014/11/13/embracing-https/
>
>
> I stand corrected. So it's the 8th reason out of 9, right before technical
> debt.
>
> I'm not saying using new features is not an incentive, and I'm definitely
> not saying HTTP2 and SW should have been enabled on HTTP.
> But, when done without any real security or deployment issues that mandate
> it, you're subjecting new features to significant adoption friction that is
> unrelated to the feature itself, in order to apply some indirect pressure
> on businesses to do the right thing.
>

Please note that there is no inherent security reason to limit HTTP/2 to be
used only over TLS (as there is for SW), at least not any more than the
security reasons for carrying HTTP/1.1 over TLS.  They're semantically
equivalent; HTTP/2 is just faster.  So if you're OK with limiting HTTP/2 to
TLS, you've sort of already bought into the strategy we're proposing here.



> You're inflicting developer pain without any real justification. A sort of
> collective punishment, if you will.
>
> If you want to apply pressure, apply it where it makes the most impact with
> the least cost. Limiting new features to HTTPS is not the place, IMO.
>

I would note that these options are not mutually exclusive :)  We can apply
pressure with feature availability at the same time that we work on the
ecosystem problems.  In fact, I had a call with some advertising folks last
week about how to get the ad industry upgraded to HTTPS.

--Richard



>
>
> >
> > (And anecdotally, I find it easier to convince developers to deploy
> > HTTPS on the basis of some feature needing it than on merit. And it
> > makes sense, if they need their service to do X, they'll go through
> > the extra trouble to do Y to get to X.)
> >
> >
> Don't convince the developers. Convince the business. Drive users away to
> secure services by displaying warnings, etc.
> Anecdotally on my end, I saw small Web sites that care very little about
> security, move to HTTPS over night after Google added HTTPS as a (weak)
> ranking signal
> <
> http://googlewebmastercentral.blogspot.fr/2014/08/https-as-ranking-signal.html
> >.
> (reason #4 in that NYT article)
>


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Richard Barnes
On Tue, Apr 14, 2015 at 8:32 AM, Eric Shepherd 
wrote:

> Joshua Cranmer 🐧 wrote:
>
>> If you actually go to read the details of the proposal rather than
>> relying only on the headline, you'd find that there is an intent to
>> actually let you continue to use http for, e.g., localhost. The exact
>> boundary between "secure" HTTP and "insecure" HTTP is being actively
>> discussed in other forums.
>>
> My main concern with the notion of phasing out unsecured HTTP is that
> doing so will cripple or eliminate Internet access by older devices that
> aren't generally capable of handling encryption and decryption on such a
> massive scale in real time.
>
> While it may sound silly, those of us who are into classic computers and
> making them do fun new things use HTTP to connect 10 MHz (or even 1 MHz)
> machines to the Internet. These machines can't handle the demands of SSL.
> So this is a step toward making their Internet connections go away.
>
> This may not be enough of a reason to save HTTP, but it's something I
> wanted to point out.


As the owner of a Mac SE/30 with a 100Mb Ethernet card, I sympathize.
However, consider it part of the challenge!  :)  There are definitely TLS
stacks that work on some pretty small devices.

--Richard



>
>
> --
>
> Eric Shepherd
> Senior Technical Writer
> Mozilla 
> Blog: http://www.bitstampede.com/
> Twitter: http://twitter.com/sheppy
>
>


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Richard Barnes
On Tue, Apr 14, 2015 at 9:57 AM,  wrote:

> I'm curious as to what would happen with things that cannot have TLS
> certificates: routers and similar web-configurable-only devices (like small
> PBX-like devices, etc).
>
> They don't have a proper domain, and may grab an IP via radvd (or dhcp on
> IPv4), so there's no certificate to be had.
>
> They'd have to use self-signed, which seems to be treated pretty badly
> (warning message, etc).
>
> Would we be getting rid of the self-signed warning when visiting a website?
>

Well, no. :)

Note that the primary difference between opportunistic security (which is
HTTP) and HTTPS is authentication.  We should think about what sorts of
expectations people have for these devices, and to what degree those
expectations can be met.

Since you bring up IPv6, there might be some possibility that devices could
authenticate their IP addresses automatially, using cryptographically
generated addresses and self-signed certificates using the same public key.
http://en.wikipedia.org/wiki/Cryptographically_Generated_Address

--Richard




>


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Richard Barnes
On Mon, Apr 13, 2015 at 7:13 PM, Karl Dubost  wrote:

> Richard,
>
> On 13 Apr 2015, at 23:57, Richard Barnes wrote:
> > There's pretty broad agreement that HTTPS is the way forward for the web.
>
> Yes, but that doesn't make deprecation of HTTP a consensus.
>
> > In order to encourage web developers to move from HTTP to HTTPS, I would
> > like to propose establishing a deprecation plan for HTTP without
> security.
>
> This is not encouragement. This is called forcing. ^_^ Just that we are
> using the right terms for the right thing.
>

If so, then it's about the most gentle forcing we could do.  If your web
page works today over HTTP, it will continue working for a long time,
O(years) probably, until we get around to removing features you care about.

The idea of this proposal is to start communicating to web site operators
that in the *long* run, HTTP will no longer be viable, while giving them
time to transition.



> In the document
> >
> https://docs.google.com/document/d/1IGYl_rxnqEvzmdAP9AJQYY2i2Uy_sW-cg9QI9ICe-ww/edit?usp=sharing
>
> You say:
> Phase 3: Essentially all of the web is HTTPS.
>
> I understand this is the last hypothetical step, but it sounds a bit like
> "let's move the Web to XML". It didn't work out very well.
>

The difference is that the lack of XML never enabled anything like the
Great Cannon, whereas the persistence of plaintext HTTP does.
https://citizenlab.org/2015/04/chinas-great-cannon/



> I would love to have a more secure Web, but this can not happen without a
> few careful consideration.
>
> * A mandatory third party for certificates is a no-go. It creates a
> system of authority and power, an additional layer of hierarchy which
> deeply modifies the ability for anyone to publish and might in some
> circumstances increase the security risk.
>
> * If we have to rely on certificates, their cost must be zero, for the
> simple reason that not everyone is living in a rich industrialized country.
>

There are already multiple sources of free publicly-trusted certificates,
with more on the way.
https://www.startssl.com/
https://buy.wosign.com/free/
https://blog.cloudflare.com/introducing-universal-ssl/
https://letsencrypt.org/



> * Setup and publication through HTTPS should be as easy as HTTP. The Web
> brought a publishing power to any individuals. Imagine cases where you need
> to create a local network, web developing on your computer, hacking a
> server for your school, community, etc. If it relies on a heavy process, it
> will not happen.
>

I agree that we should work on this, and Let's Encrypt is making a big push
in this direction.  However, we're not that far off today.   Most hosting
platforms already allow HTTPS with only a few more clicks.  If you're
running your own server, there's lots of documentation, including
documentation provided by Mozilla:

https://mozilla.github.io/server-side-tls/ssl-config-generator/?1
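
(The generator's output drops into an ordinary server block; a minimal
nginx sketch, with hypothetical hostname and paths:)

    server {
        listen 443 ssl;
        server_name example.net;
        ssl_certificate     /etc/ssl/example.net.crt;
        ssl_certificate_key /etc/ssl/example.net.key;
        # ...plus the protocol/cipher lines the generator emits
    }
    server {
        listen 80;
        return 301 https://$host$request_uri;  # retire cleartext
    }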

In other words, this is a gradual plan, and while you've raised some
important things to work on, they shouldn't block us getting started.

--Richard




>
>
> So instead of a plan based on technical features, I would love to see a:
> "Let's move to a secure Web. What are the user scenarios, we need to solve
> to achieve that."
>
> These user scenarios are economical, social, etc.
>
>
> my 2 cents.
> So yes, but not the way it is introduced and plan now.
>
>
> --
> Karl Dubost, Mozilla
> http://www.la-grange.net/karl/moz
>
>


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Richard Barnes
On Tue, Apr 14, 2015 at 5:59 PM,  wrote:

> On Tuesday, April 14, 2015 at 5:39:24 AM UTC-7, Gervase Markham wrote:
> > On 14/04/15 01:57, northrupt...@gmail.com wrote:
> > > * Less scary warnings about self-signed certificates (i.e. treat
> > > HTTPS+selfsigned like we do with HTTP now, and treat HTTP like we do
> > > with HTTPS+selfsigned now); the fact that self-signed HTTPS is
> > > treated as less secure than HTTP is - to put this as politely and
> > > gently as possible - a pile of bovine manure
> >
> > http://gerv.net/security/self-signed-certs/ , section 3.
>
> That whole article is just additional shovelfuls of bovine manure slopped
> onto the existing heap.
>
> The article assumes that when folks connect to something via SSH and
> something changes - causing MITM-attack warnings and a refusal to connect -
> folks default to just removing the existing entry in ~/.ssh/known_hosts
> without actually questioning anything.  This conveniently ignores the fact
> that - when people do this - it's because they already know there's been a
> change (usually due to a server replacement); most folks (that I've
> encountered at least) *will* stop and think before editing their
> known_hosts if it's an unexpected change.
>
> "The first important thing to note about this model is that key changes
> are an expected part of life."
>
> Only if they've been communicated first.  In the vast majority of SSH
> deployments, a host key will exist at least as long as the host does (if
> not longer).  If one is going to criticize SSH's model, one should, you
> know, actually freaking understand it first.
>
> "You can't provide [Joe Public] with a string of hex characters and expect
> it to read it over the phone to his bank."
>
> Sure you can.  Joe Public *already* has to do this with social security
> numbers, credit card numbers, checking/savings account numbers, etc. on a
> pretty routine basis, whether it's over the phone, over the Internet, by
> mail, in person, or what have you.  What makes an SSH fingerprint any
> different?  The fact that now you have the letters A through F to read?
> Please.
>
> "Everyone can [install a custom root certificate] manually or the IT
> department can use the Client Customizability Kit (CCK) to make a custom
> Firefox. "
>
> I've used the CCK in the past for Firefox customizations in enterprise
> settings.  It's a royal pain in the ass, and is not nearly as viable a
> solution as the article suggests (and the alternate suggestion of "oh just
> use the broken, arbitrarily-trusted CA system for your internal certs!" is
> a hilarious joke at best; the author of the article would do better as a
> comedian than as a serious authority when it comes to security best
> practices).
>
> A better solution might be to do this on a client workstation level, but
> it's still a pain and usually not worth the trouble for smaller enterprises
> v. just sticking to the self-signed cert.
>
> The article, meanwhile, also assumes (in the section before the one you've
> cited) that the CA system is immune to being compromised while DNS is
> vulnerable.  Anyone with a number of brain cells greater than or equal to
> one should know better than to take that assumption at face value.
>
> >
> > But also, Firefox is implementing opportunistic encryption, which AIUI
> > gives you a lot of what you want here.
> >
> > Gerv
>
> Then that needs to happen first.  Otherwise, this whole discussion is
> moot, since absolutely nobody in their right mind would want to be
> shoehorned into our current broken CA system without at least *some*
> alternative.
>

OE shipped in Firefox 37.  It's currently turned off pending a bugfix, but
it will be back soon.

--Richard



>


Re: Intent to deprecate: Insecure HTTP

2015-04-16 Thread Richard Barnes
On Wed, Apr 15, 2015 at 9:13 PM, Karl Dubost  wrote:

> As Robert is saying:
>
> On 16 Apr 2015, at 00:29, Robert Kaiser wrote:
> > I think we need to think very hard about what reasons people have to
> still not use TLS and how we can help them to do so.
>
> Definitely.
> The resistance in this thread is NOT about "people against security", but
> 1. we want to be able to choose
> 2. if we choose safe, we want that choice to be easy to activate.
>

Please see McManus's argument for why putting all the choice in webmasters'
hands is not really the best option for today's web.



> # Drifting
>
> Socially, eavesdropping is part of our daily life. We go to a café, we are
> having a discussion and people around you may listen what you are saying.
> You read a book in the train, a newspaper and people might see what you are
> reading.
>
> We adjust the type of discussions depending on the context. The café is
> too "dangerous", too "privacy invasive" and we decide to go to a safer
> environment, sometimes a safer environment is not necessarily being hidden
> (encryption), but being more public. As I said contexts.
>
> (Note above my usage of the word safe and not secure)
>

Of course, in the café, you can evaluate who has access to your
communication -- you can look around and see.  When you load a web page,
your session traverses, on average, four different entities [1], any of
whom can subvert your communications.  The user has no visibility in to
this path, not least because it often can't be predicted in advance.
You're in the UK, talking to a server in Lebanon.  Does your path traverse
France?  Possibly!  (Probably! [2])

The idea that the user can evaluate the trustworthiness of every ISP
between his computer and a web server seems pretty outlandish.  Maybe in
some limited development or enterprise environments, but certainly not for
the general web.


# Back to the topic
>
> It's important for the user to understand the weaknesses and the strength
> of the environment so they can make a choice. You could almost imagine that
> you do not care to be plain text until a moment where you activate a secure
> mode. (change of place in the cafe)
>
> Also we need to think in terms of P2P communications, not only
> broadcaster-consumers (1-to-many). If the Web becomes something which is
> harder and harder to start hacking on and communicating with your peers,
> then we reinforce the power of big hierarchical structures and we change
> the balance that Web brought over the publishing/media industry. We should
> always strive for bringing the tools that empower individual people with
> their ideas and expressions.
>
> Security is part of it. But security doesn't necessarily equate to safer.
> It's just a tool that can be used in some circumstances.
>
> Do we want to deprecate HTTP? Or do we want to make it more obvious when
> the connection is not secure? These are two very different things.
>

http://i.imgur.com/c7NJRa2.gif

--Richard

[1] http://bgp.potaroo.net/as6447/
[2]
http://www.lemonde.fr/pixels/article/2015/04/16/les-deputes-approuvent-un-systeme-de-surveillance-du-trafic-sur-internet_4616652_4408996.html



>
> --
> Karl Dubost, Mozilla
> http://www.la-grange.net/karl/moz
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-16 Thread Richard Barnes
Hey Katelyn,

Thanks for bringing up these considerations.

On Thu, Apr 16, 2015 at 5:35 AM, Katelyn Gadd  wrote:

> I expressed my opinion on this subject at length on the Chrome lists
> when they made a similar proposal. I'll summarize it here, though,
> since I feel the same way about FF deprecating non-encrypted HTTP:
>
> I think HTTPS-everywhere is a great ideal if we can achieve it, but in
> the vast majority of discussions it feels like people are
> underestimating the difficulties involved in deploying HTTPS in
> production. In particular, I think this puts a significant group of
> people at risk and they don't necessarily have the ability to advocate
> for themselves in environments like this. Low-income internet users,
> teenagers, and people in less-developed nations are more likely to be
> dependent on inexpensive-or-free services to put content up on the
> internet. In the best case they might have a server of their own they
> can configure for HTTPS (given sufficient public documentation & time)
> but the task of getting a certificate is a huge hurdle. I've acquired
> personal certificates in the past through the normal paid CA pipeline
> and the experience was bad enough as someone who lives in Silicon
> Valley and can afford a certificate.
>

Let me try to disentangle two threads here:

1. "Zero-rated" services [1].  These are services where the carrier agrees
not to charge the user for data to access certain web sites.  Obviously,
these can play an important role for carriers in developing economies
especially.  HTTPS can have an impact here, since it prevents the carrier
from seeing anything beyond the hostname that the user is connecting to.  I
would observe, however, that (1) most zero-rating is done on a hostname
basis anyway, and (2) even if more granularity is desired, there are
solutions for this that don't involve DPI, e.g., having the zero-rated site
send a ping to the carrier in JS.
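
As a concrete sketch of option (2) -- with the carrier endpoint and payload
shape invented purely for illustration (only the browser APIs are real) --
a zero-rated site could do something like this in TypeScript:

    // Hypothetical reporting hook for a zero-rated site: instead of the
    // carrier inspecting (encrypted) traffic with DPI, the site itself
    // tells an agreed-upon carrier endpoint what was loaded.
    window.addEventListener("load", () => {
      const payload = JSON.stringify({
        site: location.hostname,
        path: location.pathname,
      });
      // sendBeacon queues a small async POST that survives navigation
      navigator.sendBeacon("https://zero-rating.carrier.example/ping", payload);
    });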

2. Requirement for free/inexpensive/hobbyist services to get certificates.
Examples of free certificate services have been given several times in this
thread.  Several hosting platforms already offer HTTPS helper tools.  And
overall, I think the trend is toward greater usability.  So the situation
is pretty OK (not great) today, and it's getting better.  If we think of
this HTTP deprecation plan not as something we're doing today, but as
something we'll be doing over the next few years, it seems like HTTP
deprecation and improved HTTPS deployability can develop together.



> There are some steps being taken to reduce the difficulty here, and I
> think that's a good start. StartSSL offers free certs, and that's
> wonderful (aside from the fact that their OCSP servers broke and took
> down a portion of the web the other day...) and if letsencrypt ships
> it seems like it could be a comprehensive solution to the problem. If
> unencrypted HTTP is deprecated it *must* be not only simple for
> individuals to acquire a certificate, but it shouldn't require them to
> interact with western governments/business regulations, and it
> shouldn't require them to compromise anonymity. Anonymity is an
> important feature of web services and especially important for
> disadvantaged people. Unencrypted pages mean that visitors are
> potentially at risk and their sites can be MITMd, but a MITM is at
> least not going to expose their real name or real identity and put
> them at risk from attack. Past security breaches at various internet
> services & businesses suggest that if an individual has to provide
> identifying information to a CA - even if it is promised to be kept
> private - they are putting themselves at risk. Letsencrypt seems to
> avoid this requirement so I look forward to it launching in the near
> future.
>

I'm not sure what the state of the art with StartCom is, but when I got a
certificate from WoSign the other day [2], they didn't require any
identification besides an email address.  As far as I know, Let's Encrypt
will require about the same level of information.  There's certainly
nothing about the standards or norms for the web PKI that requires CAs to
collect identifying information about applicants for certificates.



> I also think there are potential negative consequences to deprecating
> HTTP if the process of rolling out HTTPS is prohibitively difficult
> for amateur/independent developers: In practice it may force many of
> them to move to hosting their content on third-party servers/services
> that provide HTTPS, which puts them at risk of having their content
> tampered with or pulled by the service provider. In this scenario I'm
> not sure we've won anything because we've made the site look secure
> when in fact we've simplified the task of altering site content
> without the author or visitor's knowledge.
>

Maybe I'm coming from a position of privilege here, but the difficulty of
setting up HTTPS seems exaggerated to me.  Mozilla already provides an
HTTPS config generator [3], and I kn

Re: Intent to deprecate: Insecure HTTP

2015-04-16 Thread Richard Barnes
On Thu, Apr 16, 2015 at 8:16 AM,  wrote:

> > > I think that you should avoid making this an exercise in marketing
> Mozilla's "Let's Encrypt" initiative.
> >
> > Perhaps that's why Richard took the time to make a comprehensive list of
> > all known sources of free certs, rather than just mentioning LE?
>
> Yeah, that's what I thought when I first posted here.  Now I'm not so
> sure.  You do not seem interested in hearing about any other technical
> possibilities other than Let's Encrypt, which you seem to have already
> chosen.
>

I hope it's clear that I and others have brought up Let's Encrypt only as
an example of how it's becoming easier to get a certificate -- along with
other offerings from folks like StartCom and WoSign.



> For example:
> - You say "there is only secure/not secure".  Traditionally, we have
> things like defense in depth, and multiple levels of different sources of
> authentication.  I am hearing: "You will either have a Let's Encrypt
> certificate or you don't".  Heck, let's get rid of EV certificate
> validation too while we are at it: we don't want to have to do special
> vetting for banking and medical websites, because that doesn't fit in with
> Let's Encrypt's business model.
>

The focus of this thread is moving the web toward a basic level of
security.  The fact of HTTPS today is that DV is the minimum acceptable
standard.   Additional levels above HTTPS+DV are great, but they're gravy
on top of having protection against network attackers.  Opportunistic
security is also a fine idea, but it's no HTTPS.  And of course none of this
has to do with Let's Encrypt.

> - You don't want to hear about non-centralized security models.  DANE
> provides me with control over certificate pinning for people visiting my
> websites.  You seem to be saying: Mozilla's CA will have full control over
> all websites.  I'm not sure why you'd want that level of responsibility.
> If you don't like DANE, explain why, and propose something else that is
> non-centralized and not under Mozilla's control.
>

Whether or not DANE is supported is not germane to this thread, unless you
think a lack of DANE support is a blocker to broader HTTPS adoption.

(I look forward to your explanation of how a strict hierarchy like the DNS
is not "centralized".)



> - Personally, I think that the move away from http:// is a good idea, and
> the opportunistic encryption features are an excellent start.  I am not
> clear why you think that we *technically* need to go beyond this.  Other
> than to force people to use a centralized identity system.  Which is?
> Hmm... Let's Encrypt.
>
>
> I *really* hope I am misunderstanding this thread...  I don't think of
> Mozilla as a company that would try to do this.
>

As I hope is apparent by now from the above and from Adam's response, this
thread has nothing to do with promoting LE.  It's all about promoting
HTTPS, whether your cert comes from LE, from another CA, or from DANE.

--Richard



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-23 Thread Richard Barnes
On Tue, Apr 21, 2015 at 9:56 AM, Mike Hoye  wrote:

> On 2015-04-21 6:43 AM, skuldw...@gmail.com wrote:
>
>> I know, not that well explained and over simplified. But the concept is
>> hopefully clear, but in case it's not...
>>
> For what it's worth, a lot of really smart people have been thinking about
> this problem for a while and there aren't a lot of easy buckets left on
> this court. Even if we had the option of starting with a clean slate it's
> not clear how much better we could do, and scrubbing the internet's
> security posture down to the metal and starting over isn't really an
> option. We have to work to improve the internet as we find it,
> imperfections and tradeoffs and all.
>
> Just to add to this discussion, one point made to me in private was that
> HTTPS-everywhere defangs the network-level malware-prevention tools a lot
> of corporate/enterprise networks use. My reply was that those same
> companies have tools available to preinstall certificates in browsers they
> deploy internally - most (all?) networking-hardware companies will sell you
> tools to MITM your own employees - which would be an acceptable solution in
> those environments where that's considered an acceptable solution, and not
> a thing to block on.
>

Yeah, I agree this is an issue, but not a blocker.  It's already a problem
for the ~65% of web transactions that are encrypted, and people are
already thinking about how to manage these enterprise roots better /
improve user visibility.

--Richard



>
> - mhoye
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-24 Thread richard . barnes
On Thursday, April 23, 2015 at 11:47:14 PM UTC-4, voracity wrote:
> Just out of curiosity, is there an equivalent of:
> 
> python -m SimpleHTTPServer
> 
> in the TLS world currently, or is any progress being made towards that?

openssl req -new -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem
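# ^ generate a 2048-bit RSA key and a self-signed cert (add -nodes to skip the passphrase prompt)
# v serve files from the current directory over TLS on port 8000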
openssl s_server -accept 8000 -key key.pem -cert cert.pem -HTTP

Not quite as simple, but not far off.  With the above, you can get
https://localhost:8000 up, as long as you're willing to click through a
certificate warning.
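
For anyone who would rather stay in a scripting runtime, here is a minimal
sketch of the same thing on Node in TypeScript, reusing the key.pem and
cert.pem generated above -- strictly a local toy, with no MIME types or
path sanitization:

    import * as fs from "fs";
    import * as https from "https";
    import * as path from "path";

    // If key.pem was created with a passphrase (the openssl default above),
    // either add `passphrase: "..."` here or regenerate the key with -nodes.
    const options = {
      key: fs.readFileSync("key.pem"),
      cert: fs.readFileSync("cert.pem"),
    };

    https
      .createServer(options, (req, res) => {
        // Resolve the request against the current directory, like s_server -HTTP
        const file = path.join(
          process.cwd(),
          req.url === "/" ? "index.html" : (req.url ?? "/")
        );
        fs.readFile(file, (err, data) => {
          res.statusCode = err ? 404 : 200;
          res.end(err ? "Not found\n" : data);
        });
      })
      .listen(8000, () => console.log("https://localhost:8000"));
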
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-30 Thread Richard Barnes
Hey all,

Thanks a lot for the really robust discussion here.  There have been
several important points raised here:

1. People are more comfortable with requiring HTTPS for new features than
requiring it for features that are currently accessible to non-HTTPS
origins.  Removing or limiting features that are currently available will
require discussion of trade-offs between security and compatibility.

2. Enabling HTTPS can still be a challenge for some website operators.

3. HTTP caching is an important feature for constrained networks.  Content
served over HTTPS cannot be cached by shared network intermediaries.

4. There will still be a need for the browser to be able to connect to
things like home routers, which often don’t have certificates.

5. It may be productive to take some interim steps, such as placing
limitations on cookies stored by non-HTTPS sites.

It seems to me that these are important considerations to keep in mind as
we move more of the web to HTTPS, but they don’t need to be blocking on a
gradual deprecation of non-secure HTTP.  (More detailed comments are
below.)  So I’m concluding that there’s rough consensus here behind the
idea of limiting features to secure contexts as a way to move the web
toward HTTPS.   I’ve posted a summary of our plans going forward on the
Mozilla security blog [1].

Thanks
--Richard

[1]
https://blog.mozilla.org/security/2015/04/30/deprecating-non-secure-http/

Some more detailed thoughts:

1. Obviously, lots of caution will be necessary if and when we start
removing features from non-secure contexts.  However, based on the
discussions of things like limiting cookies in this thread, and other
discussions in the “powerful features” threads, it seems that there’s still
some interest in trying to find features where the degree of breakage is
small enough to be compensated by the security benefit.  So it makes sense
to keep the removal or limitation of existing features on the overall
roadmap, with the caveat that we will need to calibrate the
breakage/security trade-offs before taking action.

2. While enabling HTTPS inherently involves more work than enabling
non-secure HTTP, there’s a lot of work going on to make it easier, ranging
from Cloudflare’s Universal SSL to Let’s Encrypt.  Speaking practically,
this non-secure HTTP deprecation process won’t be causing problems for
existing non-secure websites for some time, so there’s time for these
efforts to make progress before the pressure to use HTTPS really sets in.

3. Caching and performance are important, but so is user privacy.  It is
possible to do secure caching, but it will need to be carefully engineered
to avoid leaking more information than necessary.  (I think Martin Thomson
and Patrick McManus have done some initial work here.)  As with the prior
point, the fact that this non-secure HTTP deprecation will happen gradually
means that we have time to evaluate the requirements here and develop any
technology that might be necessary.

4. This seems like a problem that can be solved by the home router vendors
if they want to solve it.  For example, Vendor X could provision routers
with names like “router-123.vendorx.com” and certificates for those names,
and print the router name on the side of the router (just like WPA keys
today).  Also, interfaces to these sorts of devices don’t typically use a
lot of advanced web features, so may not be impacted by this deprecation
plan for a long time (if ever).

5. We can take these interim steps *and* work toward deprecation.


On Mon, Apr 13, 2015 at 7:57 AM, Richard Barnes  wrote:

> There's pretty broad agreement that HTTPS is the way forward for the web.
> In recent months, there have been statements from IETF [1], IAB [2], W3C
> [3], and even the US Government [4] calling for universal use of
> encryption, which in the case of the web means HTTPS.
>
> In order to encourage web developers to move from HTTP to HTTPS, I would
> like to propose establishing a deprecation plan for HTTP without security.
> Broadly speaking, this plan would entail  limiting new features to secure
> contexts, followed by gradually removing legacy features from insecure
> contexts.  Having an overall program for HTTP deprecation makes a clear
> statement to the web community that the time for plaintext is over -- it
> tells the world that the new web uses HTTPS, so if you want to use new
> things, you need to provide security.  Martin Thomson and I drafted a
> one-page outline of the plan with a few more considerations here:
>
>
> https://docs.google.com/document/d/1IGYl_rxnqEvzmdAP9AJQYY2i2Uy_sW-cg9QI9ICe-ww/edit?usp=sharing
>
> Some earlier threads on this list [5] and elsewhere [6] have discussed
> deprecating insecure HTTP for "powerful features".  We think it would be a
> simpler and clearer statement to avoid the discussion of which features are
> "powerful" and focus on moving all features to HTTPS, powerful or not.
>
> The goal of t

Re: Intent to deprecate: Insecure HTTP

2015-05-01 Thread Richard Barnes
On Thu, Apr 30, 2015 at 9:50 PM,  wrote:

> > 1. Setting a date after which all new features will be available only to
> > secure websites
>
> I propose the date to be one year after Let's Encrypt is launched, which
> is about mid-2016.
>

I was hoping for something a little sooner, given that we're talking about
*future* stuff.  But I'm open to discussing it.



> By the way, I hope Mozilla's own official website (Mozilla.org) will
> move to HTTPS-only as soon as possible. Currently www.mozilla.org forces
> HTTPS, but many mozilla.org subdomains do not, such as
> http://people.mozilla.org/, http://release.mozilla.org/, and
> http://website-archive.mozilla.org. It would be great if *.Mozilla.org
> could be added to browsers' built-in HSTS list.
>

100% agree.  There's already a bunch of Mozilla domains on the HSTS preload
list, but they should all be there.
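
For reference, preloading builds on the header sites already send: a host
opts in by serving the Strict-Transport-Security header with the
includeSubDomains and preload directives over HTTPS (the max-age value
below is illustrative; the preload list has its own submission criteria).
A minimal sketch in TypeScript on Node:

    import * as fs from "fs";
    import * as https from "https";

    https
      .createServer(
        { key: fs.readFileSync("key.pem"), cert: fs.readFileSync("cert.pem") },
        (req, res) => {
          // Both includeSubDomains and preload are required for preloading
          res.setHeader(
            "Strict-Transport-Security",
            "max-age=31536000; includeSubDomains; preload"
          );
          res.end("hello over HTTPS\n");
        }
      )
      .listen(443); // port 443 typically needs elevated privileges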

--Richard

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-05-01 Thread Richard Barnes
On Fri, May 1, 2015 at 10:13 AM,  wrote:

> Here we go again. Listen up, guys. There are vast numbers of legacy sites
> without the technical or financial means to convert to https:,


Of course I agree that we should not be brushing aside the little guys.
But from where I sit, I'm seeing lots of evidence that deploying HTTPS is
getting a lot easier (Universal SSL, Mozilla TLS Config generator, etc.),
and no actual owners of small sites saying that they have seriously looked
at deploying HTTPS and found that they could not.  Do you know any that you
could get to chime in here?


> nor are many serving material that fundamentally needs to be encrypted.


Please keep in mind that "needs to be encrypted" is a very tough question
to get right.  Who would have thought that Baidu's analytics JS needed to
be encrypted until GitHub got DDoS'ed?  Who would have thought that you
needed to encrypt your ads until Comcast started replacing them?

A big part of the motivation for having HTTPS be the default is that
historically we have gotten decisions about what needs to be encrypted
wrong over and over again.  Using HTTPS by default avoids having to take
the risk of getting it wrong.

--Richard



> While I've long been a proponent of opportunistic crypto -- particularly
> by leveraging self-signed certs which I know you all despise with a
> vengeance -- moves to turn http: sites generally into pariahs are a display
> of technological arrogance par excellence, *unless* you intend to also
> provide funding and personnel to handle the conversions for legacy sites
> that do not have the financial or time resources to make the necessary
> initial and ongoing changes for themselves. There is crypto-reality and
> crypto-religion. And what I mostly see here is the latter, with concern for
> the little guys brushed under the carpet as usual. For shame.
>





>
> --Lauren--
> Lauren Weinstein (lau...@vortex.com): http://www.vortex.com/lauren
> Founder:
>  - Network Neutrality Squad: http://www.nnsquad.org
>  - PRIVACY Forum: http://www.vortex.com/privacy-info
> Co-Founder: People For Internet Responsibility:
> http://www.pfir.org/pfir-info
> Member: ACM Committee on Computers and Public Policy
> Lauren's Blog: http://lauren.vortex.com
> Google+: http://google.com/+LaurenWeinstein
> Twitter: http://twitter.com/laurenweinstein
> Tel: +1 (818) 225-2800 / Skype: vortex.com
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-05-01 Thread Richard Barnes
On Fri, May 1, 2015 at 12:40 PM, Eric Shepherd 
wrote:

> Martin Thomson wrote:
>
>> There are two aspects to this: the software, and the content.
>>
>> If software cannot be updated, that's a problem in its own right.  The
>> idea that you could release your server onto the Internet to fend for
>> itself for 20 years was a dream of the 90s that has taken a while to
>> die.  Just as you have to feed it electricity and packets, you have to
>> maintain software too.
>>
> In my case, the situation is that I have classic computers running 1-10
> megahertz processors, for which encrypting and decrypting SSL is not a
> plausible option.


Have you tried?  I have distinct memories of running Netscape Navigator on
an SE/30, which according to Wikipedia had a 16MHz processor.  It seems
like without having to run the UI, you could run an HTTPS server that did
OK.

--Richard


> These computers have a burgeoning "retro" fanbase trying to push them to
> do new and interesting things, and a lot of that involves writing software
> that works over the Web using standard protocols. These efforts cannot be
> sustained in an HTTPS-only world.
>
> This has personal meaning to me as a long-time member of the
> retrocomputing community, and as the author of software that runs on these
> machines, including multiple programs that use HTTP to do so. If things
> start requiring HTTPS, our ability to continue to innovate and try to push
> these machines to do more and more things previously unheard of starts to
> come to an end. I don't like that notion very much.
>
> Is it a niche case? Sure. But it's not one to be dismissed outright
> without at least having its voice heard, so here I am, representing our
> little crowd.
>
> I'm not trying to stir up trouble or be a pain in the ass. Just pointing
> out that there honestly, truly are valid use cases for straight-up HTTP,
> even if they're rare.
>
> (FWIW, I concede that the "not everything needs encryption" position is a
> little overstated, but I also think that there really is stuff that doesn't
> need encrypting, even if it's a tiny fraction of the Web's traffic).
>
> --
>
> Eric Shepherd
> Senior Technical Writer
> Mozilla 
> Blog: http://www.bitstampede.com/
> Twitter: http://twitter.com/sheppy
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-05-01 Thread Richard Barnes
On Fri, May 1, 2015 at 11:30 AM, Martin Thomson  wrote:

> On Fri, May 1, 2015 at 11:25 AM, Chris Hofmann 
> wrote:
> > Is there a wiki page or some other comprehensive reference that defines
> > the issues and arguments around this central question?
>
> Richard was - I think - in the process of assembling an FAQ that
> covered this and other issues.  This is definitely FAQ territory.
>

Yup, working on it.  Hopefully have a first draft up today.

--Richard


>
> Joe also provided this link up-thread:
>
> https://cdt.org/files/2015/02/CDT-comments-on-the-use-of-encryption-and-anonymity-in-digital-communcations.pdf
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


State synchronization - use cases?

2015-06-26 Thread Richard Barnes
Hey dev.platform folks,

Some of us in the security engineering group have been chatting with cloud
services about making an improved way to maintain state in the browser.
Our use cases are things like:

- Revoked certificates (OneCRL)
- HSTS / HPKP preloads

We're trying to get an idea of how big a data set we might want to
maintain, so I wanted to see if anyone else had use cases that might
benefit from such a mechanism.  The critical properties for your data set
to be suitable are:

1. You want every browser to have the same set of data
2. The data change relatively slowly (we are aiming for ~24hr deliveries)
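
To make the shape of such a mechanism concrete, here is a purely
illustrative TypeScript sketch; the endpoint, record shape, and versioning
scheme are all invented here, not a design:

    interface Snapshot {
      version: number;
      records: { id: string; payload: string }[];
    }

    // Poll for a newer snapshot of a named data set roughly once a day;
    // a real client would verify a content signature before applying it.
    async function syncDataSet(
      name: string,
      lastVersion: number
    ): Promise<Snapshot | null> {
      const resp = await fetch(
        `https://state.example.org/v1/${name}?since=${lastVersion}`
      );
      if (!resp.ok) return null; // keep the current data on failure
      const snap: Snapshot = await resp.json();
      return snap.version > lastVersion ? snap : null;
    }

    // e.g. setInterval(() => syncDataSet("onecrl", current), 24 * 3600 * 1000);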

If anyone has use cases in addition to the above, please let me know.

Thanks a lot,
--Richard
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: State synchronization - use cases?

2015-06-26 Thread Richard Barnes
Yes; that is what we currently use for OneCRL.  The idea here is to
make something that's more generic, in order to more easily support
pushing new types of data.

That said, I suppose we could envision moving the add-on blocklist to
this service if it happens.  But that might not be a priority, because
it already exists.

Sent from my iPhone.  Please excuse brevity.

> On Jun 26, 2015, at 10:56, Dave Townsend  wrote:
>
> The blocklist service also downloads about once a day
>
> On Fri, Jun 26, 2015 at 10:49 AM, Anne van Kesteren 
> wrote:
>
>> On Fri, Jun 26, 2015 at 10:38 AM, Richard Barnes 
>> wrote:
>>> If anyone has use cases in addition to the above, please let me know.
>>
>> Public suffix? Getting that updated more frequently would be good.
>> Especially now sites like GitHub can use it to silo user data.
>>
>>
>> --
>> https://annevankesteren.nl/
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Secure contexts required for new web platform features

2015-06-30 Thread Richard Barnes
As a next step toward deprecating non-secure HTTP [1], we are making the
following two changes to how we develop new web platform features,
effective immediately:

First, when we work on developing specifications for new web platform
features, we will make sure that these specifications require secure
contexts [2].

Second, when we implement new web platform features, they will be enabled
only on secure contexts.  Exceptions can be granted, but will need to be
justified as part of the Intent to Implement [3] and Intent to Ship process.

[1]
https://blog.mozilla.org/security/2015/04/30/deprecating-non-secure-http/
[2] http://www.w3.org/TR/powerful-features/
[3] https://wiki.mozilla.org/WebAPI/ExposureGuidelines#Intent_to_Implement
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Busy indicator API

2015-07-13 Thread Richard Barnes
On Sun, Jul 5, 2015 at 5:11 PM, Anne van Kesteren  wrote:

> A while back there have been some requests from developers (seconded
> by those working on GitHub) to have an API to indicate whether a site
> is busy with one thing or another (e.g. networking).
>
> They'd like to use this to avoid having to create their own UI. In
> Firefox this could manifest itself by the spinner that replaces the
> favicon when loading a tab.
>
> Is there a reason we shouldn't expose a hook for this?
>

Obligatory: Will this be restricted to secure contexts?



>
> --
> https://annevankesteren.nl/
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: On the future of <keygen> and application/x-x509-*-cert MIME handling

2015-07-30 Thread Richard Barnes
On Thu, Jul 30, 2015 at 6:33 AM, Anne van Kesteren  wrote:

> On Thu, Jul 30, 2015 at 12:28 PM, Teoli
>  wrote:
> > Do you think it is already worth flagging it as deprecated in the MDN
> > documentation, as Google plans to remove it too?
>
> Yeah, seems worth a note at least given that Microsoft doesn't ship it
> either (nor plans to ever). I'll probably get the HTML Standard
> updated too in due course.
>

+1



>
>
> --
> https://annevankesteren.nl/
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: On the future of <keygen> and application/x-x509-*-cert MIME handling

2015-07-30 Thread Richard Barnes
On Thu, Jul 30, 2015 at 6:53 AM, Hubert Kario  wrote:

> On Wednesday 29 July 2015 16:35:41 David Keeler wrote:
> > [cc'd to dev-security for visibility. This discussion is intended to
> > happen on dev-platform; please reply to that list.]
> >
> > Ryan Sleevi recently announced the pre-intention to deprecate and
> > eventually remove support for the <keygen> element and special-case
> > handling of the application/x-x509-*-cert MIME types from the blink
> > platform (i.e. Chrome).
> >
> > Rather than reiterate his detailed analysis, I'll refer to the post here:
> >
> >
> > https://groups.google.com/a/chromium.org/d/msg/blink-dev/pX5NbX0Xack/kmHsyMGJZAMJ
>
> 
> Well, Gmail doesn't support S/MIME or GPG/MIME, so obviously <keygen> is
> useless and should be removed.
> 
>
> > Much, if not all, of that reasoning applies to gecko as well.
> > Furthermore, it would be a considerable architectural improvement if
> > gecko were to remove these features (particularly with respect to e10s).
> > Additionally, if they were removed from blink, the compatibility impact
> > of removing them from gecko would be lessened.
> >
> > I therefore propose we follow suit and begin the process of deprecating
> > and removing these features. The intention of this post is to begin a
> > discussion to determine the feasibility of doing so.
>
> because pushing people to use Internet Explorer^W^W Spartan^W Edge in
> enterprise networks is a good plan to continue losing market share for
> Mozilla products! /s
>
> lack of easy, cross-application certificate deployment is the _reason_ for
> low
> rates of deployment of client certificates, but where they are deployed,
> they
> are _critical_
>

<keygen> doesn't help you with cross-application deployment.  After all, IE
doesn't support it.



> you really suggest I should tell regular people to copy-paste CSRs, keep
> their private keys safe, and be able to pair keys to certs, when even
> programmers and system administrators have problems with current
> certificate deployments? (user certs vs web server certs)
>

The point has been made a couple of times that you can pretty effectively
polyfill <keygen> with either WebCrypto or JS crypto libraries.  You can do
the whole key generation and enrollment process that way, and the only
manual step is to download the cert and import it into the browser.  Do it
with JS, and you can support far more browsers than <keygen> supports today.
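
To make the polyfill claim concrete, here is a minimal sketch of the
generate-and-enroll flow on top of WebCrypto, in TypeScript.  The /enroll
endpoint and its JSON payload are hypothetical, and a production version
would wrap the public key in a proper CSR (via an ASN.1/PKI library) the
way <keygen>'s SPKAC blob served that purpose:

    // Sketch of a <keygen>-style enrollment flow on top of WebCrypto.
    async function enroll(challenge: string): Promise<string> {
      const keyPair = await crypto.subtle.generateKey(
        {
          name: "RSASSA-PKCS1-v1_5",
          modulusLength: 2048,
          publicExponent: new Uint8Array([0x01, 0x00, 0x01]), // 65537
          hash: "SHA-256",
        },
        false, // private key stays non-extractable in the browser
        ["sign", "verify"]
      );

      // Public keys are always exportable; send the DER-encoded SPKI to the CA
      const spki = await crypto.subtle.exportKey("spki", keyPair.publicKey);
      const publicKey = btoa(String.fromCharCode(...Array.from(new Uint8Array(spki))));

      const resp = await fetch("/enroll", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ publicKey, challenge }),
      });
      // The remaining manual step: the user imports the returned certificate
      return (await resp.json()).certificatePem;
    }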

--Richard


> suggesting removal of such a feature because it is not often used is like
> suggesting removal of the mains valve because it is not used often
>
> And I say it as a former sysadmin, not Red Hat employee.
> --
> Regards,
> Hubert Kario
>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform