a howto for a pref experiment would be awesome..
On Wed, May 24, 2017 at 9:03 PM, Eric Rescorla wrote:
> What's the state of pref experiments? I thought they were not yet ready.
>
> -Ekr
>
>
> On Thu, May 25, 2017 at 7:15 AM, Benjamin Smedberg
> wrote:
>
> > Is there a particular reason this is
Hi All, FYI:
Soon we'll be launching a nightly based pref-flip shield study to confirm
the feasibility of doing DNS over HTTPS (DoH). If all goes well the study
will launch Monday (and if not, probably the following Monday). It will run
<= 1 week. If you're running nightly and you want to see if y
most 24 hrs but that data is limited to name, dns type, a timestamp, a
response code, and the CDN node that served it.
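For context, a DoH request body is just a binary DNS query (the application/dns-message format). A minimal sketch of building one in Python — the helper name and defaults are my own for illustration, not anything from the study:

```python
import struct

def build_dns_query(name, qtype=1, qid=0):
    """Encode a minimal DNS query (QTYPE 1 = A record, QCLASS 1 = IN)."""
    # 12-byte header: ID, flags (just the RD bit), QDCOUNT=1, other counts 0
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, 1)

body = build_dns_query("example.com")  # would be the POST body to a DoH resolver
```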
On Sun, Mar 18, 2018 at 11:51 PM, Dave Townsend
wrote:
> On Sat, Mar 17, 2018 at 3:51 AM Patrick McManus
> wrote:
>
>> DoH is an open standard and for this test
The objective here is a net improvement for privacy and integrity. It is
indeed a point of view with Nightly acting as an opinionated User Agent on
behalf of its users. IMO we can't be afraid of pursuing experiments that
help develop those ideas even when they move past traditional modes.
Tradition
\o/ !!
On Friday, March 23, 2018, Valentin Gosu wrote:
> Hello everyone,
>
> I would like to announce that with the landing of bug 1447194, all nsIURI
> implementations in Gecko are now threadsafe, as well as immutable. As a
> consequence, you no longer have to clone a URI when you pass it arou
imo, you really need to add a pref to cover this (I'm not saying make it
opt-in, just preffable). It will break something somewhere and at least
you can tell that poor person they can have compat back via config.
It also has a very small possibility of breaking enterprises or something
we would d
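As an aside for readers unfamiliar with the pattern: an immutable object is inherently safe to share across threads, since no caller can change it under you — "mutation" means building a new object. A Python analogy (not Gecko code; SimpleURI is a hypothetical stand-in):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class SimpleURI:
    scheme: str
    host: str
    path: str = "/"

    @property
    def spec(self):
        return f"{self.scheme}://{self.host}{self.path}"

# "Mutating" produces a new object; the original stays safe to share.
a = SimpleURI("https", "example.org")
b = replace(a, path="/index.html")
```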
no - generally we don't do origin based telemetry for privacy reasons
On Wed, Sep 9, 2015 at 9:51 PM, Karl Dubost wrote:
> Hi,
>
> Do we have a way to evaluate the number of domain names (not HTTP
> requests) which are communicating with Firefox using HTTP/2?
>
> Question triggered by the recent
FYI
Bug: 366559
Release: On the trains for 44
Summary: Brotli is a new generation lossless compression format. It's
incompatible with zlib, but anywhere from roughly 20% to 40% denser and
roughly as fast for decompression. HTTP content encodings are negotiated
for on-the-wire transmission and do n
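The negotiation itself is the usual Accept-Encoding dance — the client advertises `br` and the server picks it when it can. A hypothetical server-side sketch (helper names are mine):

```python
# Server preference order; "br" is the Brotli content-coding token.
SUPPORTED = ["br", "gzip"]

def pick_encoding(accept_encoding):
    """Choose a Content-Encoding from the client's Accept-Encoding header."""
    offered = [t.split(";")[0].strip() for t in accept_encoding.split(",")]
    for enc in SUPPORTED:
        if enc in offered:
            return enc
    return "identity"
```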
On Fri, Dec 4, 2015 at 10:56 PM, Eric Rescorla wrote:
>
>
> Color me unconvinced. One of the major difficulties with consumer
> electronics devices
> that are nominally connectable to your computer is that the vendors do a
> bad job
> of making it possible for third party vendors to talk to them.
..you should be able to just use asyncOpen2 - it will do security checks
for you that you may have needed to do outside asyncopen (e.g. csp) and
will reliably deal with things like redirects. :sicking or :ckerschb for
followups.
On Mon, Dec 7, 2015 at 6:00 PM, Philip Chee wrote:
> I came across
On Fri, Jan 8, 2016 at 8:32 PM, Eric Rahm wrote:
> Why is this so cool? Well now you don't need to restart your browser to
> enable logging [1]. You also don't have to set env vars to enable logging
> [2].
>
epic! thank you.
On Fri, Jan 15, 2016 at 10:58 AM, Ehsan Akhgari
wrote:
> Please let me know if you have any questions or concerns.
or cheers.
cheers!
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform
Default should probably be fail push rather than auto cancel.. But +1 to
opting into parallel push explicitly. I've certainly used that on a few
occasions.
But the PSA here may be the most important part..
On Apr 15, 2016 3:37 PM, "Jonas Sicking" wrote:
We could also make the default behavior be
You aren't clear on what level you want to capture the data. The gold
standard to see exactly what is communicated would be wireshark. When https
is used (hopefully all the time) it can automatically decode the traces if
you provide the key material -
https://developer.mozilla.org/en-US/docs/Mozill
I don't think the case for making this change (even to release builds) has
been successfully made yet and the ability to debug and iterate on the
quality of the application network stack is hurt by it.
The Key Log - in release builds - is part of the debugging strategy and is
used fairly commonly
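For what it's worth, the NSS key log format is supported well beyond Firefox: Python's ssl module (3.8+, OpenSSL 1.1.1+) can emit the same file, which Wireshark reads to decrypt captured TLS traffic. A small sketch:

```python
import os
import ssl
import tempfile

# Any TLS connection made with this context appends its secrets to the log,
# in the same format Firefox writes when SSLKEYLOGFILE is set.
keylog = os.path.join(tempfile.mkdtemp(), "sslkeys.log")
ctx = ssl.create_default_context()
ctx.keylog_filename = keylog
```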
Hi All,
Tl;dr: The necko team has for months been chasing a Windows-only top
crasher. It is a shutdown hang - Bug 1158189. The crash stopped happening
on nightly-48 back in March and that ‘fixed state’ has been riding the
normal trains. Last weekend it returned to crashing on aurora-48 but not on
as you note the whiteboard tags are permissionless. That's their killer
property. Keywords as you note are not, that's their critical weakness.
instead of fixing that situation in the "long term" can we please fix that
as a precondition of converting things? Mozilla doesn't need more
centralized s
that's useful thanks. I think the word amnesty implied the death penalty
for existing whiteboard tags.
what it sounds like is you're just offering (for a limited time) to do
conversions on an opt-in basis? That's great.
-P
On Wed, Jun 8, 2016 at 3:11 PM, Emma Humphries wrote:
>
>
> > On Jun 8
I do use it - but LSPs obviously cause the networking team more trouble
than other folks.
I have no objection if the UI just wants to move it to somewhere less
obtrusive though - as long as it's there.
On Mon, Jul 25, 2016 at 6:29 PM, Aaron Klotz wrote:
> On 7/25/2016 12:20 AM, Nicholas Netherco
+1 on MOZ_DIAGNOSTIC_ASSERT - it's been very useful to me as well.
On Thu, Sep 22, 2016 at 6:40 AM, Bobby Holley wrote:
> There's also MOZ_DIAGNOSTIC_ASSERT, which is fatal in pre-release builds
> but not release ones. It can be a good compromise to find bugs in the wild
> when the performance co
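The semantics are roughly "an assert that stays fatal through Nightly and early Beta but compiles out on release". A Python analogy — the flag name here is a stand-in for the real build-time define, not the actual macro:

```python
# Stand-in for the pre-release build flag the real macro keys off of.
EARLY_BETA_OR_EARLIER = True

def diagnostic_assert(cond, msg="diagnostic assertion failed"):
    """Fatal in pre-release configurations, a no-op in release ones."""
    if EARLY_BETA_OR_EARLIER and not cond:
        raise AssertionError(msg)
```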
Hi All -
Generally speaking releasing more information about what's behind the
firewall is an anti-goal. I have the same reaction others in this thread
have - this api is much more information than what is really needed, and
the information it provides is of questionable usefulness anyhow.
The de
On Thu, Feb 28, 2013 at 10:36 AM, Benjamin Smedberg
wrote:
> Cool. Perhaps we should start out with collecting stories/examples:
>
In that spirit:
What I almost always want to do is simply "for the last N days of
variable X show me a CDF (at even just 10 percentile granularity) for
the histogra
On Fri, Mar 22, 2013 at 6:48 PM, Jason Duell wrote:
> On 03/22/2013 11:46 AM, Christian Biesinger wrote:
>
> Thanks for the many years of leadership, Biesi!
Yes, thank you Christian!
Today I noticed some (relatively) new CDF plots of telemetry histogram
data on metrics.mozilla.com. Maybe in the last week or so?
This makes it much easier to determine medians and 90th percentiles -
which is a very common use case for me. If you haven't seen it I
recommend checking it out.
If, d
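The median/90th-percentile readout is just a walk along the histogram's cumulative counts; a rough sketch (the bucket layout here is hypothetical, not the telemetry schema):

```python
def percentile_from_histogram(buckets, p):
    """Approximate the p-th percentile from (bucket_low_edge, count) pairs.

    Returns the low edge of the bucket where the cumulative count
    first reaches p percent of the total.
    """
    total = sum(count for _, count in buckets)
    target = total * p / 100.0
    running = 0
    for edge, count in buckets:
        running += count
        if running >= target:
            return edge
    return buckets[-1][0]
```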
I don't really think there is a controversy here network wise - mostly
applicability is a case of "I know it when I see it", and the emphasis here
on things that are exposed at the webdev level is the right thing.
Sometimes that's markup, sometimes that's header names which can touch on
core prot
On Wed, Jun 26, 2013 at 2:07 PM, Gavin Sharp wrote:
> The scope of the current proposal is what's being debated; I don't think
> there's shared agreement that the scope should be "detectable from web
> script".
>
>
Partially embedded in this discussion is the notion that the open web
requires coo
On Mon, Jul 1, 2013 at 12:43 PM, Anne van Kesteren wrote:
> I'd like to discuss the implications of replacing/morphing Gecko's URL
> parser with/into something that conforms to
> http://url.spec.whatwg.org/
>
I know it's not your motivation, but the lack of thread safety in the
various nsIURIs is
this works great for me.. touching netwerk/protocol/http/nsHttpChannel.cpp
and rebuilding with "mach build binaries" runs in 26 seconds compared to 61
with just "mach build", and I see the same ~35 second savings when doing it
on a total nop build (39 vs 5). awesome.
-P
On Tue, Oct 1, 2013 at 9
I was skeptical of this work - so I need to say now that it is paying
dividends bigger and faster than I thought it could. very nice!
On Wed, Nov 20, 2013 at 3:38 AM, Nicholas Nethercote wrote:
> On September 12, a debug clobber build on my new Linux desktop took
> 12.7 minutes. Just then it t
I'm fine with enforcing a gecko wide coding style as long as it comes with
cross platform tools to act as arbiter.. it is something that needs to be
automated and isn't worth the effort of trying to get everybody on the same
page by best effort.
On Mon, Jan 6, 2014 at 5:41 PM, Ehsan Akhgari wrot
I strongly prefer at least a 100 character per line limit. Technology
marches on.
On Mon, Jan 6, 2014 at 9:23 PM, Karl Tomlinson wrote:
> L. David Baron writes:
>
> > I tend to think that we should either:
> > * stick to 80
> > * require no wrapping, meaning that comments must be one paragrap
Typically I have to choose between
1] 80 columns
2] descriptive and non-abbreviated naming
3] displaying a logic block without scrolling
to me, #1 is the least valuable.
On Tue, Jan 7, 2014 at 4:51 PM, Jim Porter wrote:
> On 01/06/2014 08:23 PM, Karl Tomlinson wrote:
>
>> Yes, those are th
+cc
On Tue, Feb 18, 2014 at 7:56 PM, Neil wrote:
> Where can I find documentation for the new necko cache? So far I've only
> turned up some draft planning documents. In particular, I understand that
> there is a preference to toggle the cache. What does application code have
> to do in order t
I want to highlight why it's also an important change - there is a very real
and important error: your channel content is truncated. It's a bug that
necko doesn't tell you about that right now. So we're going to fix that up.
The download manager is the obvious victim of this right now. It declares
s
an obvious tie in here is the network predictor (formerly 'seer') work Nick
Hurley has been doing. Basically already working on the "what to fetch
next" questions, but not the rendering parts.
On Mon, Aug 11, 2014 at 6:40 PM, Karl Dubost wrote:
>
> Le 12 août 2014 à 07:03, Jonas Sicking a écri
On Mon, Aug 25, 2014 at 3:03 AM, Justin Dolske wrote:
> I think it would make a lot of sense to have an explicit "low bandwidth
> mode" that did stuff like this, instead of trying to address it piecemeal.
> There's all kinds of stuff that can consume bandwidth, and if we think it's
> a real conce
On Fri, Sep 12, 2014 at 1:55 AM, Henri Sivonen wrote:
> tion to https
> that obtaining, provisioning and replacing certificates is too
> expensive.
>
Related concepts are at the core of why I'm going to give Opportunistic
Security a try with http/2. The issues you cite are real issues in
practic
content format negotiation is what Accept is meant to do. Protocol level
negotiation also allows designated intermediaries to potentially transcode
between formats. imo you should add woff2 to the Accept header.
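The server side of that negotiation is straightforward q-value sorting; a hypothetical sketch (helper names are mine):

```python
def parse_accept(header):
    """Return media types from an Accept header, highest q-value first.

    Types with q=0 are excluded (the client refuses them).
    """
    entries = []
    for part in header.split(","):
        fields = part.strip().split(";")
        mtype, q = fields[0].strip(), 1.0
        for field in fields[1:]:
            field = field.strip()
            if field.startswith("q="):
                q = float(field[2:])
        entries.append((mtype, q))
    return [m for m, q in sorted(entries, key=lambda e: -e[1]) if q > 0]
```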
On Tue, Oct 7, 2014 at 9:39 AM, Henri Sivonen wrote:
> On Fri, Oct 3, 2014 at 3:11 A
On Wed, Oct 8, 2014 at 6:10 AM, Gervase Markham wrote:
> On 07/10/14 14:53, Patrick McManus wrote:
> > content format negotiation is what accept is meant to do. Protocol level
> > negotiation also allows designated intermediaries to potentially
> transcode
> > between for
On Wed, Oct 8, 2014 at 11:18 AM, Jonathan Kew wrote:
>
> So the "negotiation" is handled within the browser, on the basis of the
> information provided in the CSS stylesheet, *prior* to sending any request
> for an actual font resource.
>
>
I'm not advocating that we don't do the css bits too. Th
On Wed, Oct 8, 2014 at 11:44 AM, Anne van Kesteren wrote:
> On Wed, Oct 8, 2014 at 5:34 PM, Patrick McManus
> wrote:
> > intermediaries, as I mentioned before, are a big reason. It provides an
> > opt-in opportunity for transcoding where appropriate (and I'm not
> cl
On Wed, Oct 8, 2014 at 12:03 PM, Jonathan Kew wrote:
> Possible in theory, I guess; unlikely in practice. The compression
> algorithm used in WOFF2 is extremely asymmetrical, offering fast decoding
> but at the cost of slow encoding. The intent is that a large library like
> Google Fonts can pre-
> OK. So it can work if every browser that supports the format puts in in
> Accept: as soon as it begins support. That may be true of WebP; I don't
> believe it's true of WOFF. Is it?
>
>
you need to opt-in to the transcoding, yes. But you make it sound like you
can't use woff at all without transc
I use git day to day. I use hg primarily for landing code and "hg bzexport".
On Fri, Oct 31, 2014 at 1:48 AM, Gregory Szorc wrote:
> I
> I'm interested in knowing how people feel about these "hidden hg" tools.
> Is going through a hidden, local hg bridge seamless? Satisfactory? Barely
> tolerabl
I haven't really waded into this iteration of the discussion because there
isn't really new information to talk about. But I know everyone is acting
in good faith so I'll offer my pov again. We're all trying to serve our
users and the Internet - same team :)
OE means ciphertext is the new plaintex
On Thu, Nov 13, 2014 at 11:16 PM, Henri Sivonen
wrote:
> The part that's hard to accept is: Why is the countermeasure
> considered effective for attacks like these, when the level of how
> "active" the MITM needs to be to foil the countermeasure (by
> inhibiting the upgrade by messing with the in
On Wed, Nov 19, 2014 at 1:45 AM, Henri Sivonen wrote:
>
> Does Akamai's logo appearing on the Let's Encrypt announcements change
> Akamai's need for OE? (Seems *really* weird if not.)
>
let's encrypt is awesome - more https is awesome.
The availability of let's encrypt (or something like it) wa
Hi -
On Fri, Nov 21, 2014 at 5:41 AM, Henri Sivonen wrote:
>
> Indeed. Huge thanks to everyone who is making Let's Encrypt happen.
>
> > regulatory compliance,
>
> What's this about?
>
nosslsearch.google.com is an example of the weight of regulatory compliance
in action. Google talks loudly abo
On Fri, Nov 21, 2014 at 10:09 AM, Anne van Kesteren
wrote:
> On Fri, Nov 21, 2014 at 3:53 PM, Patrick McManus
> wrote:
> > in action. Google talks loudly about all https (and has the leading track
> > record), yet there it is. And google isn't special in that regard
Hi Anne,
On Tue, Nov 25, 2014 at 9:13 AM, Anne van Kesteren wrote:
>
> > They are doing this with opportunistic encryption (via the
> > Alternate-Protocol response header) for http:// over QUIC from chrome.
> In
> >
>
> Or are you saying that
> because Google experiments with OE in QUIC, inclu
I have a slight twist in thinking to offer on the topic of persistent
permissions.. part of this falls to the level of spitballing so forgive the
imprecision:
Restricting persistent permissions is essentially about cache poisoning
attacks. The assumptions seem to be that
a] https is not vulnerable
thanks bkelly
On Thu, Mar 26, 2015 at 9:01 AM, Benjamin Kelly wrote:
> Actually, I'm going to steal bug 990804 and see if we can get something
> worked out now. My plan is just to duplicate the STS code with a different
> XPCOM uuid for now.
>
> On Thu, Mar 26, 2015 at 9:29 AM, Benjamin Kelly
:
> On Thu, Mar 26, 2015 at 2:46 AM, Randell Jesup
> wrote:
>
> t. (I even thought
> there was a separate SocketTransportService which was different from
> StreamTransportService.)
>
>
You're right they are different things.
The socket transport service is a single thread that does most of th
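As an analogy only (Python, not Gecko code): a single dedicated thread draining a task queue, which is roughly the shape of the socket transport service's event target:

```python
import queue
import threading

class SingleThreadTarget:
    """One dedicated thread runs every dispatched task, in order."""

    def __init__(self):
        self._q = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        while True:
            task = self._q.get()
            if task is None:  # shutdown sentinel
                break
            task()

    def dispatch(self, task):
        self._q.put(task)

    def shutdown(self):
        self._q.put(None)
        self._thread.join()
```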
2015-03-26 11:00 AM, Kyle Huey wrote:
>
>> On Thu, Mar 26, 2015 at 7:49 AM, Patrick McManus
>> wrote:
>>
>> Is this thread mostly just confusion from these things sounding so much
>>> alike? Or am I confused now?
>>>
>>>
>> Most likel
media uses it by agreement and in an appropriate way to support rtcweb.
On Thu, Mar 26, 2015 at 10:20 AM, Kyle Huey wrote:
> Can we stop exposing the socket transport service's nsIEventTarget outside
> of Necko?
>
> - Kyle
>
>
> On Thu, Mar 26, 2015 at 8:14 AM
it sounds like overbite is using it as intended.
On Fri, Apr 3, 2015 at 2:19 PM, Cameron Kaiser wrote:
> On 3/26/15 8:37 AM, Randell Jesup wrote:
>
>> Can we stop exposing the socket transport service's nsIEventTarget outside
>>> of Necko?
>>>
>>
>> If we move media/mtransport to necko... or mak
On Wed, Apr 15, 2015 at 10:03 AM, wrote:
> rather than let webmasters make their own decisions.
I firmly disagree with your conclusion, but I think you have identified the
central property that is changing.
Traditionally transport security has been a unilateral decision of the
content provid
On Fri, May 1, 2015 at 2:07 PM, wrote:
> Why encrypt (and slow down) EVERYTHING
I think this is largely outdated thinking. You can do TLS fast, and with
low overhead. Even on the biggest and most latency sensitive sites in the
world. https://istlsfastyet.com
> when most web content isn't wort
On Fri, May 22, 2015 at 4:11 PM, Eric Rescorla wrote:
> I think it's generally valuable to have a trace level for all
> networking-type things.
>
> Having some separate mechanism seems like the more complicated thing.
>
+1 - I actually wasn't aware of this debug+1 mechanism and now that I am I
On Tue, Jul 21, 2015 at 5:01 PM, Honza Bambas wrote:
> The main offenders here are:
> - synchronous "on-*-request" global notifications
>
I believe this is mostly what :sicking refers to when he talks about
[1] https://etherpad.mozilla.org/BetterNeckoSecurityHooks
and I agree that would be usef
On a high level - try is a great tool and we want to make tools available
to people when they are helpful to them. That includes parallelism, which
is an important part of efficient bug hunting.
My inclination is that policy and bureaucracy are exactly the wrong
mechanisms to put around a producti