Re: A reminder about MOZ_MUST_USE and [must_use]

2017-01-20 Thread Ted Mielczarek
On Thu, Jan 19, 2017, at 07:00 PM, gsquel...@mozilla.com wrote:
> > I think the point is that it's not obvious that "must check the return
> > value" is a sufficiently-dominant common case for arbitrary return values.
> > FWIW, Rust took the [must_use] rather than [can_ignore] approach too.
> 
> That's unfortunate. But real-world data must trump my idealism in the
> end. :-)

The Rust case is helped by the fact that `Result` is the de facto type
for returning success or error, and it's effectively `must_use`. We
don't have a similar default convention in C++.
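
(To make that concrete, here is a tiny standalone sketch -- plain Rust,
nothing Gecko-specific: `Result` is marked #[must_use] in the standard
library, so silently dropping an error draws an unused_must_use warning
from the compiler.)

    fn parse_flag(input: &str) -> Result<bool, String> {
        match input {
            "true" => Ok(true),
            "false" => Ok(false),
            other => Err(format!("not a boolean: {}", other)),
        }
    }

    fn main() {
        // warning: unused `std::result::Result` which must be used
        parse_flag("maybe");
    }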

-Ted


Re: A reminder about MOZ_MUST_USE and [must_use]

2017-01-20 Thread Ted Mielczarek
On Fri, Jan 20, 2017, at 08:19 AM, Nicolas B. Pierron wrote:
> > The Rust case is helped by the fact that `Result` is the de facto type
> > for returning success or error, and it's effectively `must_use`. We
> > don't have a similar default convention in C++.
> 
> We have
> 
> http://searchfox.org/mozilla-central/rev/30fcf167af036aeddf322de44a2fadd370acfd2f/mfbt/Result.h#173
> 
> we just have to make it the default convention now.

Yes, and this is great, I just meant that in Rust 99% of code that's
returning success/failure is using `Result` because it's in the standard
library, whereas C++ has no equivalent. `mozilla::Result` is great and I
hope we can convert lots of Gecko code to use it, but we have *a lot* of
Gecko code that isn't there yet.

-Ted


Re: Decision owner for Rust usage in Gecko.

2017-02-07 Thread Ted Mielczarek
On Tue, Feb 7, 2017, at 03:50 PM, Johnny Stenback wrote:
> Hey all,
> 
> Over the coming weeks/months/years we'll be adding more and more Rust
> code
> into Gecko. As that work progresses (it's already in full swing in case
> you
> haven't been paying attention) it'll become more and more important that
> we
> collectively help ensure that we're being intentional in how this all
> rolls
> into the code base. We want to ensure we end up with good ergonomics,
> consistency, etc. IOW, we want the best possible end result out of this.
> To
> help make that happen we now have a single decision maker for things
> relating to Rust. That person is Ehsan Akhgari, who is also the module
> owner of the recently created "C++/Rust usage, tools, and style" module
> [1].
> 
> IOW, going forward if you're working on using Rust code in more places,
> doing build system changes around Rust (compiler versions, shared crates,
> locations, etc), please get Ehsan to sign off on such changes.

This is good to hear! We were just discussing some issues around
vendoring in the Quantum/tools meeting today. Specifically we're
planning to try implementing my proposal[1] around reviews for vendored
Rust crates. We think it will probably be best served by having a group
of peers who are comfortable reviewing Rust code, for when we're
vendoring new crates (or there are significant changes to existing
vendored crates).

-Ted

1. https://bugzilla.mozilla.org/show_bug.cgi?id=1322798#c11


FYI: We've forked the Breakpad client code

2017-02-09 Thread Ted Mielczarek
FYI, I landed a patch[1] yesterday that forked the Breakpad client code.
Everything that was in toolkit/crashreporter/google-breakpad/src/client
is now in toolkit/crashreporter/breakpad-client. Google has switched
Chromium to using Crashpad (their new crash reporting library) on
Windows, OS X and Android, so that code is effectively unmaintained in
Breakpad. The Linux client gets some fixes, but they are mostly
ChromeOS-related, so I don't expect us to be missing much. We've
recently had a number of changes that would have required invasive
changes or API weirdness to upstream, and since nobody else is
maintaining that code we might as well just have our own copy to change
as we see fit. This means that you
can now land changes to the client-side exception handling and minidump
writing code with just the normal Mozilla review process, without
upstreaming anything to Breakpad.

We'll still be using the tools (for dump_syms) and processor (for
client-side stackwalking) parts of Breakpad from upstream, since those
are actively maintained (for the time being).

There is a bug open to look into moving us to Crashpad. I don't
know what the benefits would be, aside from using code that's actually
maintained, but it's likely to be a significant amount of work. If I had
to choose I'd be more likely to spend the effort writing a replacement
in Rust instead.

-Ted 

1. https://bugzilla.mozilla.org/show_bug.cgi?id=1336548


Re: FYI: We've forked the Breakpad client code

2017-02-09 Thread Ted Mielczarek
On Thu, Feb 9, 2017, at 02:47 PM, Aaron Klotz wrote:
> This is great news, Ted!
> 
> Are you going to be creating a module for this? Who are the peers?

I don't think a new module is necessary; we've covered the existing
integration code (nsExceptionHandler.cpp etc) under the Toolkit module
for a long time and I think it's been OK. If it becomes a problem we can
certainly reevaluate. There aren't a lot of people that are comfortable
reviewing this code, but that's not exactly a unique situation in Gecko.

-Ted


Re: Doxygen output?

2017-02-21 Thread Ted Mielczarek
We have auto-generated docs using Sphinx on ReadTheDocs[1]. If someone
was motivated, it looks like there does exist code[2] to bridge doxygen
docs into Sphinx, so it should be possible to get those docs into the
existing RTD setup. There are even docs on RTD[3] for how to add new
docs!

-Ted

1. http://gecko.readthedocs.io/en/latest/
2. https://breathe.readthedocs.io/en/latest/
3. http://gecko.readthedocs.io/en/latest/#adding-documentation


On Mon, Feb 20, 2017, at 11:38 AM, Milan Sreckovic wrote:
> Not being kept up to date as far as I know.  My extraction is four years
> out of date (e.g.,
> https://people-mozilla.org/~msreckovic/Extracted/MozillaCentral/html/annotated.html)
> and as you noted, Benoit's page is no longer up.
> 
> The code used to create it is here: 
> https://github.com/bgirard/doxygen-mozilla
> 
> 
> On 20-Feb-17 2:05, Henri Sivonen wrote:
> > Our comments mostly try to follow the Doxygen format, and MDN says
> > that the documentation team has a tool for importing Doxygen-formatted
> > IDL comments into MDN articles.
> >
> > Other than that, is Doxygen output from m-c input being published anywhere?
> >
> > https://people-mozilla.org/~bgirard/doxygen/gfx/ is 404 these days.
> >
> 
> -- 
> - Milan (mi...@mozilla.com)
> 


Re: Should cheddar-generated headers be checked in?

2017-02-22 Thread Ted Mielczarek
On Wed, Feb 22, 2017, at 07:11 AM, Henri Sivonen wrote:
> Looking at mp4parse, the C header is generated:
> https://searchfox.org/mozilla-central/source/media/libstagefright/binding/mp4parse_capi/build.rs
> But also checked in:
> https://searchfox.org/mozilla-central/source/media/libstagefright/binding/include/mp4parse.h
> 
> Is this the best current practice that I should follow with encoding_rs?
> 
> See also:
> https://users.rust-lang.org/t/how-to-retrieve-h-files-from-dependencies-into-top-level-crates-target/9488
> (unanswered at the moment)

I don't think we have a best practice for this currently. We hit the
opposite issue with bindgen, and I've been informed that we need to run
bindgen at build time because the bindings are ABI-specific. Given that
the C API here is under your complete control, it seems like it's
possible to generate a cross-platform header that doesn't have those
issues, so you could certainly check it in. The only question there is
how much hassle it will be for you to maintain a checked-in copy.

Alternately you could just generate it at build time, and we could pass
the path to $(DIST)/include in a special environment variable so you
could put the header in the right place.

-Ted


Re: Should cheddar-generated headers be checked in?

2017-02-23 Thread Ted Mielczarek
On Thu, Feb 23, 2017, at 08:27 AM, Nathan Froyd wrote:
> On Thu, Feb 23, 2017 at 1:25 AM, Henri Sivonen  wrote:
> >> Alternately you could just generate it at build time, and we could pass
> >> the path to $(DIST)/include in a special environment variable so you
> >> could put the header in the right place.
> >
> > So just https://doc.rust-lang.org/std/env/fn.var.html in build.rs? Any
> > naming conventions for the special variable? (I'm inferring from the
> > way you said it that DIST itself isn't being passed to the build.rs
> > process. Right?)
> 
> We already pass MOZ_DIST as $(DIST)/include, fwiw:
> 
> http://dxr.mozilla.org/mozilla-central/source/config/rules.mk#941

n.b.: that shows us passing `MOZ_DIST=$(ABS_DIST)`, so you could use
`MOZ_DIST/include` in a Cargo build script.
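
For illustration, a Cargo build script along those lines could look
like this -- just a sketch, assuming the header has already been
generated under OUT_DIR (the cheddar invocation is elided) and that
MOZ_DIST is present in the environment, as it is for Gecko builds:

    // build.rs
    use std::env;
    use std::fs;
    use std::path::Path;

    fn main() {
        // Cargo always sets OUT_DIR for build scripts.
        let out_dir = env::var("OUT_DIR").expect("OUT_DIR not set");
        let header = Path::new(&out_dir).join("mp4parse.h");

        // Inside a Gecko build, MOZ_DIST points at $(ABS_DIST); copy
        // the generated header into dist/include for C++ callers.
        if let Ok(dist) = env::var("MOZ_DIST") {
            let dest = Path::new(&dist).join("include").join("mp4parse.h");
            fs::copy(&header, &dest)
                .expect("failed to copy header into MOZ_DIST/include");
        }
    }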

-Ted


Re: Should cheddar-generated headers be checked in?

2017-02-23 Thread Ted Mielczarek
On Thu, Feb 23, 2017, at 06:40 AM, Emilio Cobos Álvarez wrote:
> On Thu, Feb 23, 2017 at 08:25:30AM +0200, Henri Sivonen wrote:
> > On Wed, Feb 22, 2017 at 5:49 PM, Ted Mielczarek  wrote:
> > > Given that
> > > the C API here is under your complete control, it seems like it's
> > > possible to generate a cross-platform header
> > 
> > I believe the header is cross-platform, yes.
> > 
> > > Alternately you could just generate it at build time, and we could pass
> > > the path to $(DIST)/include in a special environment variable so you
> > > could put the header in the right place.
> > 
> > So just https://doc.rust-lang.org/std/env/fn.var.html in build.rs? Any
> > naming conventions for the special variable? (I'm inferring from the
> > way you said it that DIST itself isn't being passed to the build.rs
> > process. Right?)
> 
> FWIW, in Stylo we use MOZ_DIST[1], which is passed to the build script,
> not sure if it's stylo only though.
> 
> [1]:
> https://searchfox.org/mozilla-central/rev/b1044cf7c2000c3e75e8181e893236a940c8b6d2/servo/components/style/build_gecko.rs#48

So if you're only concerned about it working in Gecko--there you go! I'm
not aware that there's any better convention for this in Rust in the
general sense. Working with C code in projects built by Cargo seems to
be still fairly new territory. There are plenty of crates that build C
libraries and expose their API, and there are plenty of crates that
expose a C API from Rust code (many of which I'm sure use rusty-cheddar
to generate a header file), but I'm not sure there are strong
conventions for "this crate builds C code that relies on C code from
another crate".

-Ted


Re: Editing vendored crates

2017-02-27 Thread Ted Mielczarek
On Mon, Feb 27, 2017, at 12:32 PM, Henri Sivonen wrote:
> On Mon, Feb 27, 2017 at 7:04 PM, Ralph Giles  wrote:
> > On Mon, Feb 27, 2017 at 4:03 AM, Henri Sivonen  wrote:
> >
> >> I find this level of difficulty (self-inflicted quasi-Tivoization
> >> practically) an unreasonable impediment to practicing trivial Software
> >> Freedom with respect to the vendored crates.
> >
> > I agree we need to fix the ergonomics here, but I don't think you
> > should be so hard on cargo.
> 
> Sorry about the tone. I'm rather frustrated at how hard it is to do
> something that should be absolutely trivial (adding a local diagnostic
> panic!()/println!()).
> 
> > The hash checking is designed to make
> > builds more reproducible, so that unless we make an explicit diversion
> > we know we're building with the same source as every other use of that
> > same package version. This has benefits for security, debugging, and
> > change control.
> 
> We don't seem to need such change control beyond hg logs for e.g. the
> in-tree ICU or Skia, though.

As someone who has maintained a vendored upstream C++ project (Breakpad)
for a decade, I can say that this causes us headaches *all the time*. We
are constantly landing local changes to vendored projects and not
keeping track of them and then either losing patches or dealing with
conflicts or breakage when we update from upstream.

I'm sorry this is causing you pain, and we should figure out a way to
make it less painful, but note that the intention is that things in
`third_party/rust` should be actual third-party code, not things under
active development by Mozilla. We don't currently have a great middle
ground between "mozilla-central is the repository of record for this
crate" and "this crate is vendored from crates.io". We're finding our
way there with Servo, so we might have a better story for things like
encoding-rs when we get that working well. I understand that there are
lots of benefits to developing a crate in a standalone GitHub
repository, and you're certainly not the only one who wants to do that,
but it does make the integration story harder. It's very hard to
support code that has its repository of record somewhere other than
mozilla-central while also having a simple workflow for making changes
to that code in mozilla-central along with other code.

-Ted


Re: Is there a way to improve partial compilation times?

2017-03-09 Thread Ted Mielczarek
On Wed, Mar 8, 2017, at 05:43 PM, Ehsan Akhgari wrote:
> On 2017-03-08 11:31 AM, Simon Sapin wrote:
> > On 08/03/17 15:24, Ehsan Akhgari wrote:
> >> What we did in the Toronto office was walked to people who ran Linux on
> >> their desktop machines and installed the icecream server on their
> >> computer.  I suggest you do the same in London.  There is no need to
> >> wait for dedicated build machines.  ;-)
> > 
> > We’ve just started doing that in the Paris office.
> > 
> > Just a few machines seem to be enough to get to the point of diminishing
> > returns. Does that sound right?
> 
> I doubt it...  At one point I personally managed to saturate 80 or so
> cores across a number of build slaves at the office here.  (My personal
> setup has been broken so unfortunately I have been building like a
> turtle for a while now myself...)

A quick check on my local objdir shows that we have ~1800 source files
that get compiled during the build:
$ find /build/debug-mozilla-central/ -name backend.mk \
    -o -name ipdlsrcs.mk -o -name webidlsrcs.mk \
    | xargs grep CPPSRCS | grep -vF 'CPPSRCS += $(UNIFIED_CPPSRCS)' \
    | cut -f3- -d' ' | tr ' ' '\n' | wc -l
1809

That's the count of actual files that will be passed to the C++
compiler. The build system is very good at parallelizing the compile
tier nowadays, so you should be able to scale the compile tier up to
nearly that many cores. There's still some overhead in the non-compile
tiers, but if you are running `mach build binaries` it shouldn't matter.

-Ted


Re: windows build anti-virus exclusion list?

2017-03-17 Thread Ted Mielczarek
On Fri, Mar 17, 2017, at 01:12 PM, Chris Peterson wrote:
> On 3/17/2017 1:45 AM, Honza Bambas wrote:
> > I have a very similar setup, with even way more exceptions added, but
> > none of them has the desired effect. Unfortunately, the only way to make
> > MsMpEng shut up is to disable run-time protection completely for the
> > time of the build. I think it's a bug in Defender.
> 
> Maybe `mach build` can temporarily disable Defender when building?

You can't programmatically control Windows Defender, even as an
Administrator. This is a security precaution from Microsoft. It's
configured with a special user account. I looked into this recently
because I thought it would be nice if *something* in the build system or
bootstrap could at least let you know if your build directories were not
in the list of exclusions.

Back to the original topic, I recently set up a fresh Windows machine
and I followed the same basic steps (enable performance power mode,
whitelist a bunch of stuff in Windows Defender) and my build seemed
basically CPU-bound[1] during the compile tier. Disabling realtime
protection in Defender made it *slightly* better[2] but didn't have a
large impact on the overall build time (something like 20s out of ~14m
total for a clobber).

Ideally we should have this stuff as part of `mach bootstrap` or similar
so everyone gets their machine configured properly for the fastest
builds possible.

Relatedly, my next step was to figure out how to gather an xperf
profile of the entire build process to see if there were any obvious
speedups left from a system perspective (the resource usage graph shows
the obvious inefficiencies left that are already known: configure + the
non-compile tiers), but UIforETW hung when I tried to use it to do so
and I haven't followed up yet.

-Ted

1. http://people.mozilla.org/~tmielczarek/build-resource-usage.svg
2.
https://people-mozilla.org/~tmielczarek/build-resource-usage-no-defender.svg


Re: windows build anti-virus exclusion list?

2017-03-17 Thread Ted Mielczarek
On Fri, Mar 17, 2017, at 02:20 PM, Ben Kelly wrote:
> On Fri, Mar 17, 2017 at 1:36 PM, Ted Mielczarek  wrote:
> 
> > Back to the original topic, I recently set up a fresh Windows machine
> > and I followed the same basic steps (enable performance power mode,
> > whitelist a bunch of stuff in Windows Defender) and my build seemed
> > basically CPU-bound[1] during the compile tier. Disabling realtime
> > protection in Defender made it *slightly* better[2] but didn't have a
> > large impact on the overall build time (something like 20s out of ~14m
> > total for a clobber).
> >
> 
> The 14min measurement must have been for a partial build.  With defender
> disabled the best I can get is 18min.  This is on one of the new lenovo
> p710 machines with 16 xeon cores.

Nope, full clobber builds: `./mach clobber; time ./mach build`. (I have
the same machine, FWIW.) The svg files I uploaded were from `mach
resource-usage`, which has nice output but not a good way to share the
resulting data externally. I didn't save the actual output of `time`
anywhere, but going back through my IRC logs the first build I did on
the machine took 15:08.01, the second (where all the source files ought
to be in the filesystem cache) took 14:58.24, and then another build I
did with Defender's real-time indexing disabled took 14:27.73. We should
figure out what the difference is between our system configurations;
3-3.5 mins is a good chunk of time to be leaving on the table!
Similarly, I heard from someone (I can't remember who it was) who said
they could do a Linux Firefox build in ~8(?) minutes on the same
hardware. (I will try to track down the source of that number.) That
gives us a fair lower-bound to shoot for, I think.

> I definitely observed periods where it was not CPU bound.  For example,
> at
> the end of the js lib build I observed a single cl.exe process sit for ~2
> minutes while no other work was being done.  I also saw link.exe take a
> long time without parallelism, but i think that's a known issue.

Yeah, I specifically meant "CPU-bound during the compile tier", where we
compile all the C++ code. If you look at the resource usage graphs I
posted it's pretty apparent where that is (the full `mach
resource-usage` HTML page has a nicer breakdown of tiers). The stuff
before and after compile is not as good, and the tail end of compile
gets hung up on some long-pole files, but otherwise it does a pretty
good job of saturating available CPU. I also manually monitored disk and
memory usage during the second build, and didn't see much there. The
disk usage showed ~5% active time, presumably mostly the compiler
generating output, and memory usage seemed to be stable at around 9GB
for most of the build (I didn't watch during libxul linking, I wouldn't
be surprised if it spikes then).

-Ted


Re: Third Party Library Alert Service

2017-03-17 Thread Ted Mielczarek
On Fri, Mar 17, 2017, at 02:40 PM, trit...@mozilla.com wrote:
> On Friday, March 17, 2017 at 1:35:15 PM UTC-5, Sylvestre Ledru wrote:
> > Looks like we are duplicating some contents and efforts with:
> > https://dxr.mozilla.org/mozilla-central/source/tools/rewriting/ThirdPartyPaths.txt
> > Any plan to "merge" them?
> 
> There is now! (Or, well, there will be one.) =)
> 
> If anyone makes use of this file beyond it just serving as a reference,
> or if there is tooling around this file, please tell me about it!

We've discussed having a better way to annotate third-party libraries in
the tree many times before, but never made any real progress. There are
lots of reasons to want that info--ignoring those directories for
automated rewrites (per Sylvestre's link) or some kinds of lint
checking, tracking upstream fixes, making it easier to update our
vendored copies of code in a consistent way across projects, etc. I
wrote up a proposal not long ago[1] that covered some related things, it
hasn't gone anywhere but you might find it interesting. Specifically one
thing I'd love to see us do is something Chromium does--they have "FYI
bots" that will do try builds against the latest versions of their
dependencies, and a bot that will submit patches that can be landed when
those FYI builds are green, so they can easily keep up-to-date when
upstream updates don't break the build.

-Ted

1.
https://docs.google.com/document/d/1yjGTR2io97p-7ztArXl2tFbmCkZItqj9kWUJJYAdekw/edit?usp=sharing


Re: windows build anti-virus exclusion list?

2017-03-17 Thread Ted Mielczarek
On Fri, Mar 17, 2017, at 03:16 PM, Ben Kelly wrote:

> On Fri, Mar 17, 2017 at 2:52 PM, Ben Kelly  wrote:
>> On Fri, Mar 17, 2017 at 2:43 PM, Ted Mielczarek  wrote:
>>> Yeah, I specifically meant "CPU-bound during the compile tier",
>>> where we compile all the C++ code. If you look at the resource usage
>>> graphs I posted it's pretty apparent where that is (the full `mach
>>> resource-usage` HTML page has a nicer breakdown of tiers). The stuff
>>> before and after compile is not as good, and the tail end of compile
>>> gets hung up on some long-pole files, but otherwise it does a pretty
>>> good job of saturating available CPU. I also manually monitored disk
>>> and memory usage during the second build, and didn't see much there.
>>> The disk usage showed ~5% active time, presumably mostly the
>>> compiler generating output, and memory usage seemed to be stable at
>>> around 9GB for most of the build (I didn't watch during libxul
>>> linking, I wouldn't be surprised if it spikes then).
>>
>> That "long pole file" at the end of the js lib is over 10% of my
>> compile time.  That's not very good parallelism in the compile
>> stage IMO.
>
> This is the part of the build I'm talking about:
>
> 15:17.80 Unified_cpp_js_src8.cpp
> 15:17.90 Unified_cpp_js_src38.cpp
> 15:18.33 Unified_cpp_js_src40.cpp
> 15:19.96 Unified_cpp_js_src41.cpp
> 15:21.41 Unified_cpp_js_src9.cpp
> 16:59.13 Interpreter.cpp
> 16:59.15 js_static.lib
> 16:59.99 module.res
> 17:00.04 Creating Resource file: module.res
> 17:00.81 StaticXULComponentsStart.cpp
> 17:00.99 nsDllMain.cpp
>
> For the 1:38 between Unified_cpp_js_src9.cpp and Interpreter.cpp only
> a single cl.exe process is running.  I guess that's closer to 8% of the
> total build time.  Still seems very weird to me.


Yeah, the JS engine uses a lot more complex C++ features than the rest
of the code in our tree, so it takes longer to compile. This is also why
the `FILES_PER_UNIFIED_FILE` setting is lower in js/src than in the rest
of the tree. We do try to build js/src pretty early in the build,
although the exact workings of the compile tier are not something I
currently understand. One thing we could try here would be to hack up
some instrumentation to record the time taken to compile each source
file, which would let us determine if we need to tweak
`FILES_PER_UNIFIED_FILE` lower, at least.
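
As a strawman, the instrumentation could be a small wrapper binary
interposed between the build system and the compiler (e.g. by setting
CXX to point at it, with the real compiler as its first argument).
Nothing like this exists in the tree today; the name and log path below
are made up:

    // compile-timer.rs: time one compiler invocation, append the
    // result to a log, and exit with the compiler's status code.
    use std::env;
    use std::fs::OpenOptions;
    use std::io::Write;
    use std::process::{exit, Command};
    use std::time::Instant;

    fn main() {
        let mut args = env::args().skip(1);
        let compiler = args.next()
            .expect("usage: compile-timer <compiler> [args...]");
        let rest: Vec<String> = args.collect();

        let start = Instant::now();
        let status = Command::new(&compiler)
            .args(&rest)
            .status()
            .expect("failed to spawn compiler");
        let elapsed = start.elapsed();

        // Log "<millis>\t<args>" so the slowest files can be sorted
        // out afterwards.
        let millis = elapsed.as_secs() * 1000
            + u64::from(elapsed.subsec_nanos() / 1_000_000);
        if let Ok(mut log) = OpenOptions::new()
            .create(true)
            .append(true)
            .open("/tmp/compile-times.log")
        {
            let _ = writeln!(log, "{}\t{}", millis, rest.join(" "));
        }

        exit(status.code().unwrap_or(1));
    }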


-Ted




Re: windows build anti-virus exclusion list?

2017-03-17 Thread Ted Mielczarek
On Fri, Mar 17, 2017, at 02:43 PM, Ted Mielczarek wrote:
> Similarly, I heard from someone (I can't remember who it was) that said
> they could do a Linux Firefox build in ~8(?) minutes on the same
> hardware. (I will try to track down the source of that number.) That
> gives us a fair lower-bound to shoot for, I think.

Aha, it was ttaubert, and it was on Twitter:
https://twitter.com/ttaubert/status/838790894937595904

10 minute clobber builds on the same hardware on Linux, so honestly 14
minutes seems very reasonable to me, although obviously making it faster
would be nice.

-Ted


Re: Faster gecko builds with IceCC on Mac and Linux

2017-03-24 Thread Ted Mielczarek
On Fri, Mar 24, 2017, at 12:10 AM, Jeff Muizelaar wrote:
> I have a Ryzen 7 1800X and it does a Windows clobber build in ~20min
> (3 min of that is configure which seems higher than what I've seen on
> other machines). This compares pretty favorably to the Lenovo p710
> machines that people are getting which do 18min clobber builds and
> cost more than twice the price.

Just as a data point, I have one of those Lenovo P710 machines and I get
14-15 minute clobber builds on Windows.

-Ted


FYI: Visual C++ 2017 build system support landing

2017-04-25 Thread Ted Mielczarek
I'm about to land some patches[1] that will allow configure to detect a
Visual C++ 2017 installation. You should be able to launch a
MozillaBuild `start-shell.bat` shell and build without having to have
the Visual C++ environment configured. The only thing that will change
from the current state of affairs is that if you have both VC2015 and
VC2017 installed, and you're building from a start-shell shell (not
start-shell-msvc2015), configure will now default to using VC2017 instead
of VC2015. I've added a new configure option you can use to select the
compiler version you want in this situation, so you can add:

ac_add_options --with-visual-studio-version=2015

to your mozconfig to tell configure to continue to use VC2015 when it
autodetects a compiler for you. Do note that we don't currently build
with VC2017 in CI[2], so it's likely to get accidentally broken until we
get that fixed.

-Ted

1. https://bugzilla.mozilla.org/show_bug.cgi?id=1318143
2. https://bugzilla.mozilla.org/show_bug.cgi?id=1318193


Re: Editing vendored crates take #2

2017-05-02 Thread Ted Mielczarek
On Tue, May 2, 2017, at 02:54 PM, Josh Matthews wrote:
> On 2017-04-28 3:07 PM, Boris Zbarsky wrote:
> > On 4/28/17 1:05 PM, Josh Matthews wrote:
> > 2)  Run "cargo vendor" and watch it fail because of something I never
> > figured out.
> 
> My cargo from April 19 claims that "cargo vendor" isn't a real command. 
> Did you mean `./mach vendor rust` (which did end up deleting the whole 
> directory for me)?

FYI you need to `cargo install cargo-vendor` for `cargo vendor` to work.
`./mach vendor rust` will do that for you if it's not installed.

-Ted


Re: CodeCoverage Monthly Update

2017-05-09 Thread Ted Mielczarek
On Tue, May 9, 2017, at 05:48 AM, Henri Sivonen wrote:
> On Thu, Apr 6, 2017 at 6:26 AM, Kyle Lahnakoski  wrote:
> > * Getting Rust to emit coverage artifacts is important:
> > https://bugzilla.mozilla.org/show_bug.cgi?id=1335518
> 
> Is there a plan to factor "cargo test" of individual vendored crates
> into the coverage of Rust code? For example, for encoding_rs, I was
> thinking of testing mainly the C++ integration as an
> mozilla-central-specific gtest and leaving the testing of the crate
> internals to the crate's standalone "cargo test".

Note that we're not currently running `cargo test` tests for anything:
https://bugzilla.mozilla.org/show_bug.cgi?id=1331022

-Ted


Re: Have you run 'mach bootstrap' lately?

2017-05-12 Thread Ted Mielczarek
On Fri, May 12, 2017, at 10:45 AM, Sylvestre Ledru wrote:
> Would it be possible to add a check like:
> "You haven't updated your local configuration since XX days, please
> consider running
> mach bootstrap ?"

We've had mach produce nag messages like that in the past and they have
been universally disliked, FWIW.

-Ted


Re: Linux builds now default to -O2 instead of -Os

2017-06-01 Thread Ted Mielczarek
On Thu, Jun 1, 2017, at 09:23 PM, Mike Hommey wrote:
> On Thu, Jun 01, 2017 at 09:09:44PM -0400, Nathan Froyd wrote:
> > Could we try to make a bare --enable-optimize --enable-debug build use
> > -Og if it was available?
> 
> It might make sense, but we need to be careful that this will affect the
> debug builds we produce on automation.

There's not much benefit to changing automation, TBH. It's kind of a
pain to debug those builds anyway because we strip them and it requires
manual work to get the debug symbols in a place where you can use them.
If we made such a change for developer builds we could make the
automation builds forcibly `--enable-optimize=whatever`.

-Ted


Re: Profiling nightlies on Mac - what tools are used?

2017-06-20 Thread Ted Mielczarek
On Tue, Jun 20, 2017, at 03:59 AM, Julian Seward wrote:
> I've used VTune on Linux and have some idea what it can and can't do.
> I have not tried it on Mac, but my impression, from the Intel web site, is
> that it is at least available for Mac.

Apparently the version for Mac is just a GUI for viewing results, per[1]
"An optional OS X host interface can be downloaded separately to analyze
data collected on other targets. An OS X collector to profile on OS X is
not currently available."

-Ted

1. https://software.intel.com/en-us/intel-vtune-amplifier-xe


Re: switch to macosx cross-compiled builds on taskcluster on trunk

2017-06-22 Thread Ted Mielczarek
On Thu, Jun 22, 2017, at 01:08 AM, Ralph Giles wrote:
> On Wed, Jun 21, 2017 at 9:47 PM, Randell Jesup  wrote:
> 
> > Does this have an effect on our still using the 10.7 Mac SDK?
> 
> We are still building against the macOS 10.7 SDK, but we can update to
> 10.9 once we've confirmed the transition away from the 10.7 builders.

I uploaded the 10.12 SDK to tooltool a few months ago[1]. We needed to
update the linker we were using before we could switch, but that may
have already happened.

-Ted

1. https://bugzilla.mozilla.org/show_bug.cgi?id=1324892#c5


Re: More Rust code

2017-07-17 Thread Ted Mielczarek
On Wed, Jul 12, 2017, at 07:41 PM, Mike Hommey wrote:
> On Wed, Jul 12, 2017 at 04:06:39PM -0700, Eric Rahm wrote:
> > Interesting points.
> > 
> > - *using breakpad* - was the problem that creating wrappers to access
> >   the c/c++ code was too tedious? Could bindgen help with that? If not,
> >   it would be interesting to gather some details about why it wouldn't
> >   work and file bugs against it.
> > - *pingsender* - was something like https://hyper.rs/ not around when
> >   you were working on it, or is this a case of finding the things you
> >   want can be difficult in rust-land? Either way it might be a good idea
> >   for us to put together a list of "sanctioned" crates for various tasks.
> 
> Note that pingsender is a small self-contained binary. I'm afraid
> writing it in Rust would make it very much larger.

While this is probably true, it's also a best-case scenario for writing
a component in Rust: it doesn't have to call into *any* Gecko C++ code.
We also have zero visibility into failures in helper programs like this,
so the stability gains we get from using Rust instead of C++ are
outsized in this case.

-Ted


Re: More Rust code

2017-07-17 Thread Ted Mielczarek
Nick,

Thanks for kicking off this discussion! I felt like a broken record
talking to people about this in SF. From my perspective Rust is our
single-biggest competitive advantage for shipping Firefox, and every
time we choose C++ over Rust we throw that away. We know the costs of
shipping complicated C++ code: countless hours of engineering time spent
chasing down hard-to-reproduce crashes, exploitable security holes, and 
threading issues. Organizationally we need to get to a point where every
engineer has the tools and training they need to make Rust their first
choice for writing code that ships with Firefox.

On Mon, Jul 10, 2017, at 09:15 PM, Bobby Holley wrote:
> I think this is pretty uncontroversial. The high-level strategic decision
> to bet on Rust has already been made, and the cost of depending on the
> language is already sunk. Now that we're past that point, I haven't heard
> anyone arguing why we shouldn't opt for memory safety when writing new
> standalone code. If there are people out there who disagree and think
> they
> have the arguments/clout/allies to make the case, please speak up.

From my anecdotal experience, I've heard two similar refrains:
1) "I haven't learned Rust well enough to feel confident choosing it for
this code."
2) "I don't want to spend time learning Rust that I could be spending
just writing the code in C++."

I believe that every developer who writes C++ at Mozilla should be
given access to enough Rust training, and enough work hours to keep
learning beyond the training, that we can eliminate case #1. With the
Rust training sessions at prior All-Hands and self-motivated learning, I
think we've pretty well saturated the group of early adopters. These
people are actively writing new Rust code. We need to at least bring the
people who want to learn Rust but don't feel like they've had the time
up to that same place.

For case #2, there will always be people that don't want to learn new
languages, and I'm sympathetic to their perspective. Learning Rust well
does take a large investment of time. I don't know that I would go down
the road of making Rust training mandatory (yet), but we are quickly
going to hit a point where "I don't feel like learning Rust" is not
going to cut it anymore. I would hope that by that point we will have
trained everyone well enough that case #2 no longer exists, but if not
we will have to make harder choices.

 
> The tradeoffs come when the code is less standalone, and we need to weigh
> the integration costs. This gets into questions like whether/how Rust
> code
> should integrate into the cycle collector or into JS object reflection,
> which is very much a technical decision that should be made by experts. I
> have a decent sense of who some of those experts might be, and would like
> to find the most lightweight mechanism for them to steer this ship.

We definitely need to figure out an ergonomic solution for writing core
DOM components in Rust, but I agree that this needs a fair bit of work
to be feasible. Most of the situations I've seen recently were not that
tightly integrated into Gecko.

-Ted


sccache as ccache

2017-07-26 Thread Ted Mielczarek
Yesterday I published sccache 0.2 to crates.io, so you can now `cargo
install sccache` and get the latest version (it'll install to
~/.cargo/bin). If you build Firefox on Linux or OS X you can (and
should) use sccache in place of ccache for local development. It's as
simple as adding this to your mozconfig (assuming sccache is in your
$PATH):

  ac_add_options --with-ccache=sccache

The major benefit you gain over ccache is that sccache can cache Rust
compilation as well, and the amount of Rust code we're adding to Firefox
is growing quickly. (We're on track to enable building Stylo by default
soon, which will add quite a bit of Rust.)

On my several-year-old Linux machine (Intel(R) Core(TM) i7-3770 CPU @
3.40GHz, 32GB, SSD), if I build; clobber; build with sccache enabled the
second (fully-cached) build completes in just over 4 minutes:

  4:11.92 Overall system resources - Wall time: 252s; CPU: 69%; Read
  bytes: 491520; Write bytes: 6626512896; Read time: 60; Write time:
  1674852

sccache still isn't completely straightforward to use on Windows[1] but
I aim to fix that this quarter so that using it there will be just as
simple as on other platforms.

-Ted

1. https://bugzilla.mozilla.org/show_bug.cgi?id=1318370


Re: sccache as ccache

2017-07-26 Thread Ted Mielczarek
On Wed, Jul 26, 2017, at 10:46 AM, Kan-Ru Chen wrote:
> Windows support sounds very exciting! Will it support cache sharing?

Currently sccache supports a few different cache storage backends:
* local disk
* Amazon S3
* Google Cloud Storage
* Redis

However, the cache keys currently wind up with full source paths
included, so it's hard to get cache hits across machines when using
different source directories. There's an sccache issue filed on this
(and a patch sitting in the pull requests, actually), so it might be
possible to make that work.
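
For reference, the backend is selected via environment variables read
when the sccache server starts. From memory (double-check the sccache
README for your version), they look something like:

  SCCACHE_DIR=/path/to/cache       # local disk
  SCCACHE_BUCKET=my-cache-bucket   # Amazon S3
  SCCACHE_REDIS=redis://host:6379  # Redis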

-Ted


Re: sccache as ccache

2017-07-26 Thread Ted Mielczarek
On Wed, Jul 26, 2017, at 12:57 PM, Simon Sapin wrote:
> On 26/07/2017 15:05, Ted Mielczarek wrote:
> >ac_add_options --with-ccache=sccache
> 
> When used together with icecc, this appears to force all jobs to run 
> locally which makes icecc pointless.

We should figure out what's going on here and see if we can fix it. It
would be nice to make this work properly.

> I’ve ended up keeping "classic" ccache for C and C++ code and adding 
> 'export RUSTC_WRAPPER=sccache' to my mach wrapper script in order to use 
> sccache for Rust code. (Having this line with or without 'export' in 
> .mozconfig did not appear to do anything. Can mozconfig set arbitrary 
> environment variables?)

No, mozconfig variable setting is sort of restricted. Bare variable
assignments (with or without export) are evaluated in the context of
configure, but don't survive to the Makefile environment. If you write
it as `mk_add_options export RUSTC_WRAPPER=sccache` that ought to work,
since that will be set as an exported Makefile variable, meaning it will
be set in the environment for commands executed by make.
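
In other words, a mozconfig like this sketch (the variable is just an
example) shows the difference:

  # seen by configure only; does not survive into the make environment
  RUSTC_WRAPPER=sccache

  # exported into the make environment, so build commands see it
  mk_add_options export RUSTC_WRAPPER=sccache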

-Ted


Re: refcounting [WAS: More Rust code]

2017-08-02 Thread Ted Mielczarek
On Wed, Aug 2, 2017, at 08:32 AM, Nathan Froyd wrote:
> On Wed, Aug 2, 2017 at 7:37 AM, Enrico Weigelt, metux IT consult wrote:
> > On 31.07.2017 13:53, smaug wrote:
> >> Reference counting is needed always if both JS and C++ can have a
> >> pointer to the object.
> >
> > Anybody already thought about garbage collection ?
> 
> Reference counting is a garbage collection technique.  See
> https://en.wikipedia.org/wiki/Reference_counting where the
> introductory paragraphs and the first section specifically refer to it
> as a garbage collection technique.  Or consult _The Garbage Collection
> Handbook_ by Jones, Hosking, and Moss, which has an entire chapter
> devoted to reference counting.
> 
> Note also that Gecko's reference counting tends to be cheaper than the
> reference counting assumed in the literature, since many of Gecko's
> reference-counted objects can use non-thread-safe reference counting,
> as said objects are only ever accessed on a single thread.  (Compare
> http://robert.ocallahan.org/2012/06/computer-science-in-beijing.html)
> 
> Changing the garbage collection technique used by our C++ code to
> something other than reference counting would be a large project of
> dubious worth.

Also we tried that once and it didn't work well for various reasons:
https://wiki.mozilla.org/XPCOMGC

-Ted


Re: sccache as ccache

2017-08-02 Thread Ted Mielczarek
On Wed, Aug 2, 2017, at 12:26 PM, Ben Kelly wrote:
> On Wed, Jul 26, 2017 at 9:05 AM, Ted Mielczarek  wrote:
> > Yesterday I published sccache 0.2 to crates.io, so you can now `cargo
> > install sccache` and get the latest version (it'll install to
> > ~/.cargo/bin).
> 
> I tried this on my linux build machine today and got:
> 
> error: failed to run custom build command for `openssl-sys v0.9.15`
<...>

You need to install the `libssl-dev` package (on Ubuntu) or the
equivalent on other distros. Sorry, wish this was clearer!
-Ted



Re: Firefox and clang-cl

2017-08-15 Thread Ted Mielczarek
On Mon, Aug 14, 2017, at 04:36 PM, Ehsan Akhgari wrote:
> > * Performance: We switched from msvc+pgo to clang without pgo and got 
> > comparable perf. We did have to use an order file (/order: flag to 
> > link.exe) to get comparable startup perf.
> That is very interesting!  This is one of the aspects that we have been 
> worried about a lot.  We should probably also think about using /order 
> as well.

It seems plausible that we could use our existing PGO build steps to
capture the proper ordering and then re-link using that as input to
/order. We already instrument the order of access of omni.ja entries
during that step and use that to produce an optimized omni.ja in the
second build pass.

> > *Debuggability: Basically works, see blockers of https://crbug.com/636111 
> > for in-progress work. link.exe can produce pdbs with clang's codeview debug 
> > info.
> Wow, it looks like things have improved quite a bit on this front since 
> the last time I looked at this closely.  Really impressive work!
> > -Z7 and -Zi are aliased to each other in clang-cl, we don't do mspdbsrv)
> I think this should be sufficient for Firefox's needs as well.

We already build all of our non-PGO Windows builds with -Z7 for
compatibility with sccache anyway:
https://dxr.mozilla.org/mozilla-central/rev/b95b1638db48fc3d450b95b98da6bcd2f9326d2f/build/mozconfig.cache#137


-Ted


Re: sccache as ccache

2017-08-18 Thread Ted Mielczarek
On Thu, Aug 17, 2017, at 10:01 PM, Mike Hommey wrote:
> On Wed, Jul 26, 2017 at 09:54:14AM -0400, Alex Gaynor wrote:
> > If you're on macOS, you can also get sccache with `brew install sccache`.
> 
> If you're on macOS and were hitting errors building openvr, this is
> fixed in sccache master.

I've also now published a 0.2.1 release to crates.io, so you can `cargo
install --force sccache` to get a release containing this fix.

-Ted


Re: Implementing a Chrome DevTools Protocol server in Firefox

2017-08-31 Thread Ted Mielczarek
On Wed, Aug 30, 2017, at 08:20 PM, Eric Rescorla wrote:
> I assume this is going to involve TLS (generally this is a requirement
> for
> H2). In Firefox, this is done with NSS. Does Tokio/Hyper cleanly separate
> out the TLS stack so that you can do that?

This was mostly answered in another reply, but just to be clear: yes,
Hyper allows plugging in alternate TLS stacks. This is very commonly
used with the `native-tls` crate[1] by way of `hyper-tls`[2], which uses
the native TLS stack on Windows/macOS, and OpenSSL on Linux.
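
As a rough sketch against the hyper 0.11-era API (exact signatures vary
by version), the connector is just a constructor argument, so an
NSS-backed connector could be slotted in the same way:

    extern crate futures;
    extern crate hyper;
    extern crate hyper_tls;
    extern crate tokio_core;

    use futures::Future;

    fn main() {
        let mut core = tokio_core::reactor::Core::new().unwrap();
        let handle = core.handle();
        // HttpsConnector uses native-tls under the hood: SChannel on
        // Windows, Secure Transport on macOS, OpenSSL on Linux.
        let tls = hyper_tls::HttpsConnector::new(4, &handle).unwrap();
        let client = hyper::Client::configure()
            .connector(tls)
            .build(&handle);

        let uri = "https://example.com".parse().unwrap();
        let work = client
            .get(uri)
            .map(|res| println!("status: {}", res.status()));
        core.run(work).unwrap();
    }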

1. https://github.com/sfackler/rust-native-tls
2. https://github.com/hyperium/hyper-tls


PSA: symbol server now available via https

2017-09-01 Thread Ted Mielczarek
Just an FYI, yesterday the symbol server (symbols.mozilla.org) was moved
to a different server backend[1], and as a result it's now also
available via https. It should continue to work seamlessly at the
existing URL, but you can update your symbol server paths to
https://symbols.mozilla.org/ if you'd like to add end-to-end encryption
to your symbol server requests.

Thanks to peterbe and miles and the other folks for their hard work in
updating this useful piece of infrastructure!

-Ted

1. https://bugzilla.mozilla.org/show_bug.cgi?id=1389205


Re: Firefox 55.* in Windows/Ubuntu - every day CPU 100%/Hangs Up. Please to do something!

2017-09-08 Thread Ted Mielczarek
On Wed, Sep 6, 2017, at 03:45 AM, Alexey Zvyagin wrote:
> Dear developers of Firefox,
> 
> I have 55.* version of Firefox at work and at home
> At work i have Windows 7 OS, at home the Ubuntu 16.04 LTE
> I have my own synced profile. I don't have problem with syncing...
> 
> In both OSes, regularly and every day, after a short time working in
> Firefox (maybe even after 1-5 minutes) I see Firefox at 100% CPU, and
> then Firefox hangs... The only way I have to recover from this is the
> "killall -HUP firefox" command on Ubuntu and "End Task" in Windows.
> 
> In both cases I have turned on the "report health" and "reports to
> Mozilla" options. I know about the about:crashes page and check there
> regularly. The main problem is that these hang cases never show up
> there. When I killed Firefox and restarted it, I didn't see any fresh
> reports in about:crashes. I tried safe mode too. In safe mode Firefox
> has the same buggy behaviour: after some minutes it starts to eat 100%
> of CPU and stops responding... And again about:crashes doesn't have any
> freshly generated reports (about:crashes sometimes has reports there
> which were created by other situations, maybe "segmentation fault"
> style errors).
> 
> I conclude that this "CPU bug" is going unreported to you in many user
> cases! And I conclude that this is a global problem (multiple OSes) not
> related to hardware or plugins.
> 
> I don't know how to report this to you. I love your browser, but your
> latest versions (55.0.2 & 55.0.3) are very, very buggy. This is very
> annoying!

Hi Alexey,

We do have some telemetry for browser hangs. If you load
about:telemetry#chrome-hangs-tab in your browser, do you see any reports
there?

-Ted


Re: Re-visiting the DOM tree depth limit in layout

2017-09-15 Thread Ted Mielczarek
On Thu, Sep 14, 2017, at 02:23 AM, Henri Sivonen wrote:
> Do I understand correctly that this is an address space issue only and
> Windows doesn't actually physically map the pages belonging to the
> stack until they are written to? That is, do I understand correctly
> that there's no obstacle for growing the maximum stack size on 64-bit
> Windows to the kind of numbers that Mac and Linux already have?
> 
> Is there a reason why a larger stack size is OK on 32-bit Linux but
> wouldn't be OK on 32-bit Windows? (Seems kinda weird that both
> defaults would just happen to be exactly perfect even when they are so
> different.)

One notable difference is that by default the 32-bit Linux kernel
provides 3GB of usable address space to programs and reserves 1GB for
the kernel[1], but the 32-bit Windows kernel only provides 2GB of usable
address space, reserving the other 2GB for the kernel[2]. It's possible
to increase that to 3GB by changing a boot parameter, but I doubt that's
a common occurrence. On both operating systems 32-bit applications
running on a 64-bit kernel get access to a full 4GB of address space.

-Ted

1.
https://www.quora.com/Why-do-32-bit-Linux-kernels-only-recognize-3GB-of-RAM
2.
https://msdn.microsoft.com/en-us/library/windows/desktop/aa366912(v=vs.85).aspx


Re: How to get pretty stack trace on Linux?

2017-09-21 Thread Ted Mielczarek
On Thu, Sep 21, 2017, at 08:51 PM, Masayuki Nakano wrote:
> I'd like to get a pretty stack trace which shows method names rather
> than only addresses, with a tryserver build on Linux. However,
> nsTraceRefcnt::WalkTheStack() cannot get method names on Linux, as you
> know.
> 
> The reason I need this is that I have a bug report which depends on the
> environment, and I cannot reproduce it in any of my environments.
> Therefore, I'd like the reporter to log the stack trace when the bug
> occurs, with MOZ_LOG.
> 
> My questions are: is it possible, and if so how, to get a pretty stack
> trace on Linux with MOZ_LOG? And/or do you have a better idea for
> getting similar information to check which path causes a bug?
> 
> If it's impossible, I'll create a tryserver build in which each
> ancestor caller logs the path, though.


Hi Masayuki,

Our test harnesses accomplish this by piping the output of Firefox
through one of the stack fixing scripts in tools/rb[1].
fix_linux_stack.py uses addr2line, which should at least give you
function symbols on Nightly. You could use my GDB symbol server
script[2] to fetch the actual debug symbols from the symbol server if
you want full source line information.
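
If you're curious about the mechanics, the per-frame work that
fix_linux_stack.py does boils down to something like this (a sketch
assuming binutils addr2line is installed; the formatting is made up):

    use std::process::Command;

    /// Resolve a module-relative address in `binary` to
    /// "function (file:line)" via addr2line (-C demangles, -f prints
    /// the function name, -e names the binary to look in).
    fn symbolize(binary: &str, addr: &str) -> String {
        let out = Command::new("addr2line")
            .args(&["-C", "-f", "-e", binary, addr])
            .output()
            .expect("failed to run addr2line");
        // addr2line prints two lines: the function, then file:line.
        let text = String::from_utf8_lossy(&out.stdout);
        let mut lines = text.lines();
        let func = lines.next().unwrap_or("??");
        let loc = lines.next().unwrap_or("??:0");
        format!("{} ({})", func, loc)
    }

    fn main() {
        println!("{}", symbolize("libxul.so", "0x1234"));
    }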

Regards,
-Ted


1. https://dxr.mozilla.org/mozilla-central/source/tools/rb
2. https://gist.github.com/luser/193572147c401c8a965c


Re: MinGW Target on TaskCluster

2017-10-09 Thread Ted Mielczarek
On Mon, Oct 9, 2017, at 01:31 AM, Tom Ritter wrote:
> As part of our work with Tor, we’ve been working on getting a MinGW-based
> build of Windows into TaskCluster. Tor is currently using ESR releases,
> and
> every ESR they have to go through a large amount of work to get the build
> working under MinGW again; by continually building (and testing) that
> build
> we’ll be able to cut weeks to months of effort for them each ESR release.
> (Not breaking the MinGW build is also a necessity if they were ever to
> move
> off ESRs.)

Great work, Tom! I know this was a long slog, but keeping this build
working is going to lift a massive burden from the Tor Browser team.
Thanks for taking on this project and driving it to completion!

-Ted


Re: We need better canaries for JS code

2017-10-19 Thread Ted Mielczarek
On Wed, Oct 18, 2017, at 07:51 AM, Mark Banner wrote:
> Looping in firefox-dev as well, as I think this is an important
> discussion.
> 
> On 18/10/2017 09:28, David Teller wrote:
> >  Hi everyone,
> >
> >Yesterday, Nightly was broken on Linux and MacOS because of a typo in
> > JS code [1]. If I understand correctly, this triggered the usual
> > "undefined is not a function", which was
> >
> > 1/ uncaught during testing, as these things often are;
> Part of the reason it was uncaught, is that there's no automated test 
> coverage for that bit of code. It is a migration from one version to the 
> other, but unless there is an explicit test (and I don't see one) that 
> line won't be hit.

Given this bit, would any of the suggestions in this thread actually
help? If we're not exercising this code in automated tests, would "fail
tests on uncaught exceptions" make any difference?

I'm generally in favor of being stricter about errors in our test suites
etc, but I'm curious about whether we would actually have solved the
problem in question.

-Ted


Re: We need better canaries for JS code

2017-10-19 Thread Ted Mielczarek
On Thu, Oct 19, 2017, at 03:19 PM, Mark Banner wrote:
> The only thing that might help (that has been discussed) is something 
> along the lines of flow - an analyser that could work out that 'spice()' 
> didn't exist, but Dave Townsend mentioned these don't seem to be viable 
> for us.
> 
> Therefore I think the migration code should be having automated tests - 
> it is the only way we can reliably detect these sort of failures at the 
> moment.

That's fair. Note that we do have active code coverage results now,
which shows me that the line containing the bug that started this whole
thread still isn't covered by tests:
https://codecov.io/gh/marco-c/gecko-dev/src/master/browser/components/customizableui/CustomizableUI.jsm#L461

-Ted


Re: Visual Studio 2017 coming soon

2017-10-25 Thread Ted Mielczarek
On Wed, Oct 25, 2017, at 05:48 PM, David Major wrote:
> I'm planning to move production Windows builds to VS2017 (15.4.1) in bug
> 1408789.

Thanks for doing the work on this!


> VS2017 has optimizer improvements that produce faster code. I've seen
> 3-6%
> improvement on Speedometer. There is also increased support for C++14 and
> C++17 language features:
> https://docs.microsoft.com/en-us/cpp/visual-cpp-language-conformance
> 
> These days we tend not to support older VS for too long, so after some
> transition period you can probably expect that VS2017 will be required to
> build locally, ifdefs can be removed, etc. VS2017 Community Edition is a
> free download and it can coexist with previous compilers. Installation
> instructions are at:
> https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Build_Instructions/Windows_Prerequisites#Visual_Studio_2017

Lately we've settled on maintaining support for ~1 release cycle, which
gives us a buffer before we rip out support in case we need to roll back
because we've found a major compiler bug or something like that. It's
easier to justify maintaining support if we have CI to ensure that we're
not constantly breaking things. On the other hand, it's easier to
justify dropping support if VS is the last compiler holding us back from
being able to use new C++ features.

-Ted


Re: How much context does CF_HTML really need?

2017-10-31 Thread Ted Mielczarek
On Tue, Oct 31, 2017, at 05:46 AM, Henri Sivonen wrote:
> (Context: I'm trying to understand the requirements for our
> serializers in case we rewrite them [in Rust].)
> 
> The HTML fragment parsing algorithm can have only one context node.
> The context is never a chain of nodes towards to the root, since such
> a thing wouldn't affect the result per the HTML parsing algorithm.
> 
> However, when the HTML parsing algorithm is in the non-fragment mode,
> some tags get ignored without appropriate parent, so e.g. to represent
> <td> in the non-fragment mode, you need to include <table>, etc. But
> that's about it.
> 
> The Windows CF_HTML clipboard format,
> https://msdn.microsoft.com/en-us/library/windows/desktop/ms649015(v=vs.85).aspx
> , represents fragments by designating them in a full HTML document, so
> what are logically fragments have to work with non-fragment parsing.
> 
> This indicates that when we export a fragment to the clipboard, we
> should serialize its parent if not table-related or reconstruct a full
> table if table-related.
> 
> Yet, it seems that we serialize much more ancestor context.
> 
> Is there a good reason to? For example, does Microsoft office (our old
> bugs suggest that Excel is the pickiest consumer) or other CF_HTML
> consumers on Windows care about more context than the standard HTML
> parsing algorithm? What could consumers possibly do with knowlegde
> about ancestors beyond the parent or the nearest <table>? (I'm ignoring
> SVG and MathML for the moment.)
> 
> OTOH, it seems that we include only some element types in the context
> (https://searchfox.org/mozilla-central/source/dom/base/nsDocumentEncoder.cpp#1540).
> It's unclear to me why. The first revision of the list came from jst
> during the Netscape 6 crunch without an explanation either in Bugzilla
> or code comments. (https://bugzilla.mozilla.org/show_bug.cgi?id=50742)
> 
> Does anyone know why?

I don't know exactly why, but I did try to fix pasting table cells into
Excel a long time ago (someone else eventually fixed it), and it was
definitely tricky and underspecified:
https://bugzilla.mozilla.org/show_bug.cgi?id=137450

Comments on the bug indicate that there are non-table cases where the
context is important, like `<ol>` to ensure you wind up pasting
numbered list items.
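
For reference, CF_HTML wraps what is logically a fragment in a complete
HTML document and marks it with byte offsets, roughly like this (the
offsets below are illustrative placeholders, not computed values):

```
Version:0.9
StartHTML:0000000105
EndHTML:0000000196
StartFragment:0000000141
EndFragment:0000000163
<html><body><ol><!--StartFragment--><li>item two</li><!--EndFragment--></ol></body></html>
```

Note that the `<ol>` sits outside the fragment markers: it's exactly the
kind of context a consumer needs in order to paste a numbered list item
correctly.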

-Ted
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Firefox 55.* in Windows/Ubuntu - every day CPU 100%/Hangs Up. Please to do something!

2017-11-21 Thread Ted Mielczarek
On Tue, Nov 21, 2017, at 06:54 AM, Alexey Zvyagin wrote:
> Hi!
> 
> I made some crashes by hand (crashfirefox.exe) in Windows 7 and in Unix
> through kill -ABRT
> 
> What are the symptoms? At random moments Firefox v56.* pins a single
> core at 100% CPU. In Windows 7 (64bit) & Linux (Ubuntu 16.04 LTS
> 64bit)
> 
> Reports are here:
> 
> Ubuntu OS:
> 
> https://crash-stats.mozilla.com/report/index/0b0e6273-26fb-482e-b033-c91be1171101
> https://crash-stats.mozilla.com/report/index/237ae0e4-6eb2-4c8b-87e8-3c2471171101
> https://crash-stats.mozilla.com/report/index/7ddfad60-8f3e-4495-a05f-5d6d21171110
> https://crash-stats.mozilla.com/report/index/95468eb1-28b2-40f7-8f0c-8a7261171110
> https://crash-stats.mozilla.com/report/index/cd310102-c547-486f-bbd3-0b7791171110
> https://crash-stats.mozilla.com/report/index/6df179b4-721a-4440-97e5-059d21171110
> 
> Windows 7:
> 
> https://crash-stats.mozilla.com/report/index/8e172c11-2367-43b1-98f2-128251171113#allthreads

Hi Alexey,

These crashes all seem to be stuck in sqlite code querying the places
database that contains browsing history. In the Windows crash, you can
see that thread 0 is waiting on a sqlite mutex for the database, and
thread 44 is in the middle of running some sort of sqlite query, so
presumably it's holding the mutex.

-Ted
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Faster gecko builds with IceCC on Mac and Linux

2018-01-16 Thread Ted Mielczarek
On Tue, Jan 16, 2018, at 10:51 AM, Jean-Yves Avenard wrote:
> Sorry for resuming an old thread.
> 
> But I would be interested in knowing how long that same Lenovo P710
> takes to compile *today*…
> 
> In the past 6 months, compilation times have certainly increased
> massively.
> 
> Anyhow, I’ve received yesterday the iMac Pro I ordered early December.
> It’s a 10-core Xeon-W (W-2150B) with 64GB RAM.
> 
> Here are the timings I measured, in comparison with the Mac Pro 2013 I
> have (which until today was the fastest machine I had ever used):
> 
> macOS 10.13.2:
> Mac Pro late 2013 : 13m25s
> iMac Pro : 7m20s
> 
> Windows 10 Fall Creators Update:
> Mac Pro late 2013 : 24m32s (was 16 minutes less than a year ago!)
> iMac Pro : 14m07s (16m10s with Windows Defender going)
> 
> Interestingly, I can almost no longer get any benefit when using
> icecream: with 36 cores it saves 11s, with 52 cores it saves only 50s…
> 
> It’s a very sweet machine indeed.

I just did a couple of clobber builds against the tip of central
(9be7249e74fd) on my P710 running Windows 10 Fall Creators Update
and they took about 22 minutes each. Definitely slower than it
used to be :-/
-Ted


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Ci, Cr, Cc, and Cu are now automatically defined in all chrome scopes

2018-02-02 Thread Ted Mielczarek
On Thu, Feb 1, 2018, at 5:11 PM, Andrew McCreight wrote:
> Bug 767640 just merged to mozilla-central. This patch makes it so that Ci,
> Cr, Cc, and Cu are automatically defined in any chrome scope that has a
> Components object. Rejoice, because you no longer need to add "var Ci =
> Components.interfaces" into every file.
> 
> I have a followup bug, bug 1432992, that removes almost all of the existing
> definitions of these variables.

Nice! I can't believe it took this long for someone to realize this was a good 
idea. :)

-Ted
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: gkrust compilation RAM requirements and 32-bit systems

2018-02-09 Thread Ted Mielczarek
On Fri, Feb 9, 2018, at 4:49 AM, Henri Sivonen wrote:
> Is it expected that Firefox can no longer be built on a 32-bit system?

Yes.
 
> The cross-compilation documentation on MDN seems to predate Rust code
> in Firefox. Is there an up-to-date guide for compiling Firefox for
> ARMv7+NEON (or aarch64 for that matter) GNU/Linux on an x86_64
> GNU/Linux host?

I don't know that there have ever been good docs for this scenario--our 
well-supported cross-compile scenario is Android, which has its own SDK. 
Cross-compiling other things on Linux without a chroot is a giant PITA. jryans 
did write a good blog post[1] last year about building a 32-bit Firefox on 
64-bit Linux, which might be useful. AFAIK Rust is the easy part, since you can 
just `rustup target add armv7-unknown-linux-gnueabihf` or whatever target you 
need. Getting all the -dev packages for various system libraries is the hard 
part.
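
For the Rust side alone, a rough sketch (assuming rustup is installed
and the C/C++ cross-toolchain is already taken care of; the mozconfig
line is illustrative):

```
rustup target add armv7-unknown-linux-gnueabihf
# then point the build at the target, e.g. in your mozconfig:
# ac_add_options --target=arm-linux-gnueabihf
```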

-Ted

1. https://convolv.es/blog/2017/08/25/building-firefox-for-linux-32-bit/
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Debugging Try server builds

2018-02-12 Thread Ted Mielczarek
If you've debugged builds from the try server in the past (or wanted to but 
found the process too difficult) I'd be interested to hear from you. 
Historically this has been a painful process with many manual steps[1], but 
thanks to some work done[2] by Peter Bengtsson on the new symbols.mozilla.org 
service we are now in a position where we should be able to make this much 
simpler. I've filed a bug[3] with some initial details and thoughts but I'd 
like to hear from people who have needed this in the past to make sure that we 
wind up with something that's useful to you.

The base assumption that I did not list in that bug is that we're not likely to 
make every try push upload symbols for every single build it does--that's a lot 
of data and I suspect that most people don't need it, so we'd be wasting 
resources for minimal gain. (We don't upload symbols to the symbol server for 
anything that's not a nightly or release build currently--builds we ship to 
users.) Given that, what would be most useful to you? A flag to pass to `mach 
try` to get symbols uploaded for the builds for that push? A Treeherder action 
that would retroactively upload symbols for one or more builds from a try push? 
I'd love to hear from you!

Thanks,
-Ted

1. https://wiki.mozilla.org/ReleaseEngineering/TryServer#Getting_debug_symbols
2. https://bugzilla.mozilla.org/show_bug.cgi?id=1422096
3. https://bugzilla.mozilla.org/show_bug.cgi?id=1437577
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: Chrome-only WebIDL interfaces no longer require DOM peer review

2018-03-09 Thread Ted Mielczarek
On Thu, Mar 8, 2018, at 7:41 PM, Bobby Holley wrote:
> (C) The API uses complex arguments like promises that XPIDL doesn't handle
> in a nice way.

I think this is an understated point. WebIDL was designed explicitly to allow 
expressing the semantics of JS APIs, where XPIDL is some arbitrary set of 
things designed by folks at Netscape a long time ago. Almost any non-trivial 
API will wind up being worse in XPIDL (and the C++ implementation side is worse 
as well).
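
As a concrete (entirely hypothetical) illustration, a promise-returning
chrome-only API is a one-liner in WebIDL, whereas XPIDL has no
first-class way to express it:

```
[ChromeOnly, Exposed=Window]
interface ThingManager {
  // Resolves with the thing's value; rejects on failure.
  Promise<DOMString> fetchThing(DOMString name);
};
```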

I agree that an XPConnect-alike supporting WebIDL semantics would be a lot of 
work, but I also think that asking developers to implement chrome interfaces 
with XPIDL is pretty lousy.

-Ted
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


FYI: sccache 0.2.6 released, contains fix for frequent hang in 0.2.5

2018-03-13 Thread Ted Mielczarek
Hello,

Yesterday I tagged and released sccache 0.2.6:
https://github.com/mozilla/sccache/releases/tag/0.2.6

This contains a fix for a hang that users were encountering with sccache 0.2.5 
due to the make jobserver support added in that version. If you are using 0.2.5 
you will want to update. If you were holding off on updating because of that 
bug, you should now be able to update without issues.

-Ted
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Reading data needed by MOZ_GTEST_BENCH outside the timed benchmark

2018-03-15 Thread Ted Mielczarek
On Thu, Mar 15, 2018, at 7:22 AM, Henri Sivonen wrote:
> Do we have a way to read the prerequisite data for MOZ_GTEST_BENCH
> outside MOZ_GTEST_BENCH so that the disk IO doesn't get timed?

I don't know that we have any stock way to do this. I can offer three plausible 
solutions:
1) If putting the data in a Rust crate is feasible, use `include_bytes!` in the 
toolkit/library/gtest/rust crate[1], which gets linked into the gtest libxul.
2) Write a `GENERATED_FILES` script that takes the data file and outputs a 
header with a C array of bytes and #include that in the GTest.
3) Not the best solution, but for ICU data I have yasm / gas assembly files[2] 
that include the ICU .dat as a symbol.
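
For option 1, a minimal sketch (the file name and exported symbol are
hypothetical):

```
// In the gtest rust crate: embed the data at compile time and expose it
// to C++ via a C ABI accessor.
static BENCH_DATA: &[u8] = include_bytes!("bench_input.dat");

#[no_mangle]
pub extern "C" fn get_bench_data(len: *mut usize) -> *const u8 {
    unsafe { *len = BENCH_DATA.len() };
    BENCH_DATA.as_ptr()
}
```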

1. 
https://dxr.mozilla.org/mozilla-central/source/toolkit/library/gtest/rust/lib.rs
2. https://dxr.mozilla.org/mozilla-central/source/config/external/icu/data
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Reading data needed by MOZ_GTEST_BENCH outside the timed benchmark

2018-03-15 Thread Ted Mielczarek
On Thu, Mar 15, 2018, at 9:07 AM, Henri Sivonen wrote:
> On Thu, Mar 15, 2018 at 2:51 PM, Ted Mielczarek  wrote:
> > On Thu, Mar 15, 2018, at 7:22 AM, Henri Sivonen wrote:
> >> Do we have a way to read the prerequisite data for MOZ_GTEST_BENCH
> >> outside MOZ_GTEST_BENCH so that the disk IO doesn't get timed?
> >
> > I don't know that we have any stock way to do this. I can offer three 
> > plausible solutions:
> > 1) If putting the data in a Rust crate is feasible, use `include_bytes!` in 
> > the toolkit/library/gtest/rust crate[1], which gets linked into the gtest 
> > libxul.
> > 2) Write a `GENERATED_FILES` script that takes the data file and outputs a 
> > header with a C array of bytes and #include that in the GTest.
> > 3) Not the best solution, but for ICU data I have yasm / gas assembly 
> > files[2] that include the ICU .dat as a symbol.
> 
> All of these involve putting the data inside libxul somehow. The
> reason why the data isn't there already is that the data is under a
> license that's prohibited in Gecko code. Maybe we just need a policy
> opinion that while such data must not be baked into Gecko that gets
> distributed but that baking it into a gtest-only binaries is actually
> harmless. I'll try to get such a policy opinion.

Oh, it completely slipped my mind, but froydnj just added what you need very 
recently: `MOZ_GTEST_BENCH_F`:
https://dxr.mozilla.org/mozilla-central/rev/6ff60a083701d08c52702daf50f28e8f46ae3a1c/testing/gtest/mozilla/MozGTestBench.h#22

It's essentially just exposing Google Test's `TEST_F` to have a test with a 
fixture, which lets you implement a `SetUp` method to do work before running 
the test, which for a benchmark will happen outside of the measurement. There 
aren't any uses of it in the tree yet, but it should look identical to using 
`TEST_F`, which you can see an example of here:
https://dxr.mozilla.org/mozilla-central/rev/6ff60a083701d08c52702daf50f28e8f46ae3a1c/dom/media/gtest/TestMP3Demuxer.cpp#83
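
A hedged sketch of what that could look like (the input file name and
the function under test are made up):

```
#include "MozGTestBench.h"
#include "gtest/gtest.h"
#include <fstream>
#include <iterator>
#include <vector>

class MyBench : public ::testing::Test {
 protected:
  void SetUp() override {
    // Untimed: runs before the benchmark body below.
    std::ifstream f("bench_input.dat", std::ios::binary);
    mData.assign(std::istreambuf_iterator<char>(f),
                 std::istreambuf_iterator<char>());
  }
  std::vector<char> mData;
};

// Only the lambda body is measured.
MOZ_GTEST_BENCH_F(MyBench, Decode, [this] {
  DecodeAll(mData);  // hypothetical function under test
});
```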

-Ted
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: incremental compilation for opt Rust builds

2018-04-09 Thread Ted Mielczarek
On Thu, Apr 5, 2018, at 8:19 AM, Henri Sivonen wrote:
> For encoding_rs, -O2 vs -O3 has pretty big performance effects in both
> directions. (Didn't measure code size.) I think I'd rather have the
> -O3 scenario than the -O2 scenario for encoding_rs.
> 
> Can we make a particular vendored crate (encoding_rs) use -O3 while
> the default for Rust code remains at -O2?

Unfortunately there's no support for this in cargo right now. Manish has an RFC 
that would add a way to do this:
http://rust-lang.github.io/rfcs/2282-profile-dependencies.html
https://github.com/rust-lang/rust/issues/48683
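
The syntax proposed in that RFC looks roughly like this (not usable on
stable cargo yet, so treat it as a sketch):

```
# Cargo.toml: build one dependency at a higher opt-level than the rest.
[profile.release.overrides.encoding_rs]
opt-level = 3
```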

The folks working on webrender have also asked for this:
https://bugzilla.mozilla.org/show_bug.cgi?id=1413285

-Ted
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent To Require Manifests For Vendored Code In mozilla-central

2018-04-10 Thread Ted Mielczarek
On Tue, Apr 10, 2018, at 9:23 AM, James Graham wrote:
> On 10/04/2018 05:25, glob wrote:
> > mozilla-central contains code vendored from external sources. Currently 
> > there is no standard way to document and update this code. In order to 
> > facilitate automation around auditing, vendoring, and linting we intend 
> > to require all vendored code to be annotated with an in-tree YAML file, 
> > and for the vendoring process to be standardised and automated.
> > 
> > 
> > The plan is to create a YAML file for each library containing metadata 
> > such as the homepage url, vendored version, bugzilla component, etc. See 
> https://goo.gl/QZyz4x for the full specification.
> 
> So we now have moz.build files that, in addition to build instructions, 
> contain metadata for mozilla-authored code (e.g. bugzilla components) 
> and moz.yaml that will contain similar metadata but only for 
> non-mozilla-authored code, as well as Cargo.toml that will contain (some 
> of) that metadata but only for code written in Rust.
> 
> As someone who ended up having to write code to update moz.build files 
> programatically, the situation where we have similar metadata spread 
> over three different kinds of files, one of them Turing complete, 
> doesn't make me happy. Rust may be unsolvable, but it would be good if 
> we didn't have two mozilla-specific formats for specifying metadata 
> about source files. It would be especially good if updating this 
> metadata didn't require pattern matching a Python AST.

We are in fact rethinking the decision to put file metadata in moz.build files 
for these very reasons. I floated the idea of having it live in these same YAML 
files that glob is proposing for vendoring info since it feels very similar. I 
don't want to block his initial work on tangentially-related concerns, but I 
think we should definitely look into this once he gets a first version of his 
vendoring proposal working. I don't know if there's anything useful we can do 
about Cargo.toml--we obviously want to continue using existing Rust practices 
there. If there are specific things you need to do that are hard because of 
that I'd be interested to hear about them to see if there's anything we can 
improve.

FWIW, moz.build files being Turing complete is a thing that makes me sad very 
often. If I had a better solution that didn't involve a huge amount of 
engineering work I'd be all for it!
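
For reference, the proposed manifest is plain declarative YAML,
something like this (field names are illustrative; see glob's linked
spec for the real schema):

```
origin:
  name: libexample
  url: https://example.com/libexample
  release: 1.2.3
  license: MIT
bugzilla:
  product: Core
  component: Graphics
```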

-Ted
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Editing a vendored crate for a try push

2018-04-10 Thread Ted Mielczarek
On Mon, Apr 9, 2018, at 7:18 AM, Henri Sivonen wrote:
> What's the current status of tooling for editing vendored crates for
> local testing and try pushes?
> 
> It looks like our toml setup is too complex for cargo edit-locally to
> handle (or, alternatively, I'm holding it wrong). It also seems that
> mach edit-crate never happened.
> 
> How do I waive .cargo-checksum.json checking for a crate?

I don't think we have any tooling around this currently. bug 1323557 is still 
open but hasn't seen any real movement. I did test a potential workflow last 
year[1] that worked at the time, but recent comments suggest that it no longer 
works. I haven't tried it again with a newer cargo/rustc so I don't know what 
the errors are, but if that no longer works we should figure out if we can fix 
whatever broke it.
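
For the record, the workaround I tested back then amounted to blanking
out the per-file hashes after editing the crate; I haven't re-verified
this against current cargo:

```
# After hand-editing third_party/rust/foo:
cd third_party/rust/foo
python -c "import json; c = json.load(open('.cargo-checksum.json')); \
c['files'] = {}; json.dump(c, open('.cargo-checksum.json', 'w'))"
```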

-Ted

1. https://bugzilla.mozilla.org/show_bug.cgi?id=1323557#c3
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Default Rust optimization level decreased from 2 to 1

2018-04-25 Thread Ted Mielczarek
On Wed, Apr 25, 2018, at 12:32 PM, Jeff Muizelaar wrote:
> At minimum we should make --enable-profiling build with rust-opt.

This sounds reasonable, although the quirk is that we default 
--enable-profiling on for nightly[1], so anyone building m-c will have it 
enabled. We could make the build system only do this for "explicitly enabled 
--enable-profiling", it just might be slightly confusing. (For reference, the 
rustc opt level is controlled here[2].)

IIRC the slowdown from using opt-level=2 vs. 1 is primarily in LLVM? It would 
probably be useful if we instrumented the build to record compilation times for 
all of the crates we build at various optimization levels and see where the 
biggest issues are. Perhaps we could get someone on the Rust team to make some 
improvements if we know what the worst offenders are.

-Ted

1. 
https://dxr.mozilla.org/mozilla-central/rev/26e53729a10976f52e75efa44e17b5e054969fec/js/moz.configure#243
2. 
https://dxr.mozilla.org/mozilla-central/rev/26e53729a10976f52e75efa44e17b5e054969fec/build/moz.configure/toolchain.configure#1397
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Uploading symbols from try builds

2018-05-08 Thread Ted Mielczarek
Hello,

I landed some patches yesterday[1] that have now merged to m-c to allow 
uploading symbols from try builds to the symbol server. Previously if you 
wanted to debug a try server build it involved a bunch of annoying manual 
steps, but now you can simply ask for symbol upload tasks to run and then use 
the symbol server almost exactly like you would with a nightly or release build.

Symbol upload isn't enabled by default (since I expect most people are unlikely 
to need it), but there are two ways to enable it:
1) Run `mach try fuzzy --full` and select the `${build}-upload-symbols` tasks 
that correspond to the builds you're requesting. These tasks won't show up 
without `--full`, FYI.
2) After pushing to try, use Treeherder's "Add new jobs" tool (available from 
the menu under the little triangle on the top right) and add the `Sym` jobs 
corresponding to the builds you requested.
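
As an aside, with option 1 you can also select the tasks
non-interactively by passing a query (exact task name spelling may
vary):

```
./mach try fuzzy --full -q "'upload-symbols"
```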

To debug the resulting build you will need to use a slightly different URL for 
the symbol server: https://symbols.mozilla.org/try . You can otherwise follow 
the symbol server instructions[2]. Try symbols are stored separately from 
builds we ship to avoid any contamination as well as to have a shorter 
retention period (28 days, matching the Taskcluster artifact expiration for 
try). This won't currently work with my GDB symbol server script[3] because it 
has a hardcoded symbol server URL, but you can of course edit the script to fix 
that.

I'll update the symbol server docs soon.

-Ted

1. https://bugzilla.mozilla.org/show_bug.cgi?id=1437577
2. 
https://developer.mozilla.org/en-US/docs/Mozilla/Using_the_Mozilla_symbol_server
3. https://gist.github.com/luser/193572147c401c8a965c
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Removing tinderbox-builds from archive.mozilla.org

2018-05-09 Thread Ted Mielczarek
On Wed, May 9, 2018, at 1:11 PM, L. David Baron wrote:
> > mozregression won't be able to bisect into inbound branches then, but I
> > believe we've always been expiring build artifacts created from integration
> > branches after a few months in any case.
> > 
> > My impression was that people use mozregression primarily for tracking down
> > relatively recent regressions. Please correct me if I'm wrong.
> 
> It's useful for tracking down regressions no matter how old the
> regression is; I pretty regularly see mozregression finding useful
> data on bugs that regressed multiple years ago.

To be clear here--we still have an archive of nightly builds dating back to 
2004, so you should be able to bisect to a single day using that. We haven't 
ever had a great policy for retaining individual CI builds like these 
tinderbox-builds. They're definitely useful, and storage is not that expensive, 
but given the number of build configurations we produce nowadays and the volume 
of changes being pushed we can't archive everything forever.

-Ted
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Gecko natvis file now in-tree

2018-05-09 Thread Ted Mielczarek
Hello,

I recently landed a patch[1] that added a Gecko.natvis file[2] to the tree. 
natvis files[3] are Microsoft's current way of providing nicer views of data 
types for their debuggers. The file as-landed was written by Vlad a few years 
ago so it could definitely use some changes (there's a followup bug[4] for that 
work). If you debug Firefox in MSVC regularly and there are Gecko data types 
that you find annoying to inspect you should take a look, and submit fixes for 
things it doesn't already cover! There's also a followup bug[5] to embed the 
natvis files that the Rust compiler ships so we can get nicer views of data 
types from the Rust standard library.
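
If you've never written natvis before, an entry is just a bit of XML
along these lines (a made-up example, not one from the real file):

```
<?xml version="1.0" encoding="utf-8"?>
<AutoVisualizer xmlns="http://schemas.microsoft.com/vstudio/debugger/natvis/2010">
  <!-- Display a hypothetical type by one of its members. -->
  <Type Name="mozilla::Example">
    <DisplayString>{mValue}</DisplayString>
    <Expand>
      <Item Name="[value]">mValue</Item>
    </Expand>
  </Type>
</AutoVisualizer>
```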

And in case you're wondering--there are already GDB pretty-printers for Gecko 
types in the tree[6][7], feel free to use and improve those as well!

-Ted

1. https://bugzilla.mozilla.org/show_bug.cgi?id=1408502
2. 
https://hg.mozilla.org/mozilla-central/file/856384fbc255/toolkit/library/gecko.natvis
3. 
https://docs.microsoft.com/en-us/visualstudio/debugger/create-custom-views-of-native-objects
4. https://bugzilla.mozilla.org/show_bug.cgi?id=1459989
5. https://bugzilla.mozilla.org/show_bug.cgi?id=1459991
6. https://hg.mozilla.org/mozilla-central/file/tip/.gdbinit
7. 
https://hg.mozilla.org/mozilla-central/file/tip/third_party/python/gdbpp/gdbpp
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Gecko natvis file now in-tree

2018-05-09 Thread Ted Mielczarek
Gecko.natvis is embedded in the PDB file, so it should Just Work for debugging 
local builds as well as builds from CI.

-Ted

On Wed, May 9, 2018, at 3:34 PM, Aaron Klotz wrote:
> This is great news! Are the natvis files embedded in the PDBs, or do we 
> have to reference them separately?
> 
> On 5/9/2018 1:17 PM, Ted Mielczarek wrote:
> > Hello,
> >
> > I recently landed a patch[1] that added a Gecko.natvis file[2] to the tree. 
> > natvis files[3] are Microsoft's current way of providing nicer views of 
> > data types for their debuggers. The file as-landed was written by Vlad a 
> > few years ago so it could definitely use some changes (there's a followup 
> > bug[4] for that work). If you debug Firefox in MSVC regularly and there are 
> > Gecko data types that you find annoying to inspect you should take a look, 
> > and submit fixes for things it doesn't already cover! There's also a 
> > followup bug[5] to embed the natvis files that the Rust compiler ships so 
> > we can get nicer views of data types from the Rust standard library.
> >
> > And in case you're wondering--there are already GDB pretty-printers for 
> > Gecko types in the tree[6][7], feel free to use and improve those as well!
> >
> > -Ted
> >
> > 1. https://bugzilla.mozilla.org/show_bug.cgi?id=1408502
> > 2. 
> > https://hg.mozilla.org/mozilla-central/file/856384fbc255/toolkit/library/gecko.natvis
> > 3. 
> > https://docs.microsoft.com/en-us/visualstudio/debugger/create-custom-views-of-native-objects
> > 4. https://bugzilla.mozilla.org/show_bug.cgi?id=1459989
> > 5. https://bugzilla.mozilla.org/show_bug.cgi?id=1459991
> > 6. https://hg.mozilla.org/mozilla-central/file/tip/.gdbinit
> > 7. 
> > https://hg.mozilla.org/mozilla-central/file/tip/third_party/python/gdbpp/gdbpp
> > ___
> > dev-platform mailing list
> > dev-platform@lists.mozilla.org
> > https://lists.mozilla.org/listinfo/dev-platform
> 
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Rust crate approval

2018-07-05 Thread Ted Mielczarek
On Sun, Jul 1, 2018, at 7:56 PM, Xidorn Quan wrote:
> The point is that it is too easy to add a new crate dependency 
> accidentally, and it is very possible for reviewers to overlook that. So 
> it may make sense to introduce a blacklist-ish thing to prevent that 
> from happening.

FYI, we had some discussion about the policy and mechanisms of reviewing 
vendored Rust crates in the recent past. I floated a strawman proposal[1] that 
didn't seem to upset anyone, but we got thrown off track by the Servo VCS sync 
needing to do auto-vendoring. AIUI, now that the pace of the stylo work has 
slowed, the Servo syncing is being done on a manual basis, so it seems like we 
could revisit that discussion.

The TL;DR on my proposal is: "We should make sure that someone has reviewed 
each new vendored crate in a bug separate from the one with the patch that adds 
code using it."

-Ted

1. https://bugzilla.mozilla.org/show_bug.cgi?id=1322798#c11
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: Automated code analysis now also in Phabricator

2018-07-17 Thread Ted Mielczarek
On Tue, Jul 17, 2018, at 9:22 AM, Jan Keromnes wrote:
> TL;DR -- “reviewbot” is now enabled in Phabricator. It reports potential
> defects in pending patches for Firefox.

Great work! This sounds super useful!

-Ted
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: C++ standards proposal for a embedding library

2018-07-19 Thread Ted Mielczarek
On Wed, Jul 18, 2018, at 12:45 PM, Botond Ballo wrote:
> Hi everyone,
> 
> With the proposal for a standard 2D graphics library now on ice [1],
> members of the C++ standards committee have been investigating
> alternative ways of giving C++ programmers a standard way to write
> graphical and interactive applications, in a way that leverages
> existing standards and imposes a lower workload on the committee.
> 
> A recent proposal along these lines is for a standard embedding
> facility called "web_view", inspired by existing embedding APIs like
> Android's WebView:
> 
> http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p1108r0.html
> 
> As we have some experience in the embedding space here at Mozilla, I
> was wondering if anyone had feedback on this embedding library
> proposal. This is an early-stage proposal, so high-level feedback on
> the design and overall approach is likely to be welcome.

I've joked about this a bit, but in seriousness: an API for web embedding is a 
difficult thing to get right. We don't even have one currently for desktop 
Firefox. The proposal references things like various WebKit bindings, but 
glosses over the fact that Apple revamped WebKit APIs as WebKit2 to better 
handle process separation. For all the buzz about WebKit being a popular web 
embedding, most people seem to have switched to embedding Chromium in some form 
these days, and even there the most popular projects are Chromium Embedded 
Framework and Electron, neither of which is actually maintained by Google and 
both of which have gone through significant API churn. That is all to say that 
I don't have confidence that the C++ standards committee (or maybe anyone, 
really) has the ability to spec a useful API for web embedding that can both 
encompass the broad set of issues involved and also remain useful over time as 
rendering engines evolve.

I understand the committee's point of view--the C++ standard library does not 
provide any facilities for writing applications that do more than console input 
and output. I would submit that this is OK, because UI programming in any form 
is a complicated topic and it's unlikely that the standard could include 
anything that would actually be useful to most people.

Honestly I think at this point growth of the C++ standard library is an 
anti-feature. The committee should figure out how to get modules specified 
(which I understand is a difficult thing, I'm not trying to minimize the work 
there) so that tooling can be built to provide a first-class module ecosystem 
for C++ like Rust and other languages have. The language should provide a 
better means for extensibility and code reuse so that the standard library 
doesn't have to solve everyone's problems.

I would make this same argument if someone were to propose a similar API for 
inclusion into the Rust standard library--it doesn't belong there, it belongs 
on crates.io.

-Ted
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Stopgap for Commit Series in Phabricator

2018-08-06 Thread Ted Mielczarek
Thanks for this, Kris! Just an FYI for anyone planning to try this: I was 
behind a few versions on Mercurial (at 4.3) and I had to update to 4.7 for this 
extension to work.

-Ted

On Thu, Jul 26, 2018, at 12:09 PM, Kris Maglione wrote:
> Here's an approximate equivalent for hg which doesn't require 
> Arcanist:
> 
> https://bitbucket.org/kmaglione/hgext/src/default/phabricator.py
> 
> It's a slightly modified version of stock hg Phabricator plugin 
> (which we apparently have gps to thank for inspiring) which 
> handles parsing bug IDs and reviewers from commit messages.
> 
> You just need to add something like this to your .hgrc:
> 
> [phabricator]
> url = https://phabricator.services.mozilla.com/
> callsign = MOZILLACENTRAL
> 
> [auth]
> mozilla.schemes = https
> mozilla.prefix = phabricator.services.mozilla.com
> mozilla.phabtoken = cli-...
> 
> and then use `hg phabsend` to push a commit series (or `hg phabread` 
> to import one).
> 
> On Wed, Jul 25, 2018 at 04:31:51PM -0400, Nika Layzell wrote:
> >While our services team is working on making a reliable & maintained tool
> >for handling commit series with Phabricator, I threw together something
> >small to use as a stop-gap for pushing large commit series to Phabricator
> >and updating them.
> >
> >It currently works as a wrapper around Arcanist, and *only supports git*
> >(as I don't know how hg works enough to get it to work reliably), but
> >should allow pushing a range of commits as revisions without touching the
> >working tree, automatic dependency relationships, bug number filling, and
> >reviewer field population.
> >
> >I called it 'phlay' (splinter => flay; flay + phabricator => phlay).
> >
> >GitHub: https://github.com/mystor/phlay
> >Tweet w/ short demo clip:
> >https://twitter.com/kneecaw/status/1021434807325163523
> >
> >I've used it to push pretty-much all of my recent patch series to
> >Phabricator, and it's saved me a good amount of time, so I figured I'd let
> >people know. Feel free to poke me on IRC if you have questions.
> >
> >- nika
> 
> -- 
> Kris Maglione
> 
> [T]he people can always be brought to the bidding of the leaders.
> That is easy.  All you have to do is tell them they are being attacked
> and denounce the pacifists for lack of patriotism and exposing the
> country to danger.  It works the same way in any country.
>   --Herman Göring
> 
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Can we make a plan to retire Universal Mac builds?

2015-08-05 Thread Ted Mielczarek
Our Universal Mac builds are a frequent headache for build system work,
being a special snowflake in many ways. They also use twice as much
machine time as other builds, since they do a separate build for each
architecture. I think it's time to make a plan to retire them and ship
single-architecture 64-bit only builds.

As far as I know, there are two main blockers here:
1) Users with 32-bit Apple hardware that can't install a 64-bit OS will
become unsupported. I don't have data on how many users this is, but I
suspect we can determine this from Telemetry. It's my understanding that
the last 32-bit only Apple hardware that was sold was in late 2006, so
it's nearly 9 years old at this point.
2) Currently watching Netflix in Firefox on OS X requires the
Silverlight plugin, which is 32-bit only, so we need to ship a universal
build for this to work. I believe that we are planning to ship an EME
CDM that will work with Netflix in the near future, so this should make
this a non-issue.

For comparison, Chrome dropped support for 32-bit OS X late last year in
Chrome 39[1]. If we have a plan to support Netflix without Silverlight,
and we are OK with unsupporting however many users are stuck on 32-bit
only Apple hardware, I think we should make a plan to switch our
official builds to 64-bit only. Does anyone have any concerns I've
missed?

-Ted

1.
http://www.computerworld.com/article/2849225/chrome-for-os-x-turns-64-bit-forsakes-early-intel-macs.html
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Can we make a plan to retire Universal Mac builds?

2015-08-05 Thread Ted Mielczarek
On Wed, Aug 5, 2015, at 07:28 PM, Martin Thomson wrote:
> On Wed, Aug 5, 2015 at 3:59 PM, Matthew N.  wrote:
> > If we have data on CPU architecture I don't think the OS version is relevant
> > unless I'm missing something.
> 
> My understanding is that OS version is all that matters.  64-bit apps
> require a 64-bit OS.  (Such an OS requires a 64-bit processor of
> course.)

Apple shipped Mac OS X with system libraries as universal binaries, so
they supported both 32 and 64-bit binaries. (I believe they stopped
shipping 32-bit libraries at some point, however.)

-Ted
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Can we make a plan to retire Universal Mac builds?

2015-08-05 Thread Ted Mielczarek
On Wed, Aug 5, 2015, at 05:14 PM, Syd Polk wrote:
> I don’t think we can do this until we stop supporting Mac OS X 10.6. Last
> time we calculated percentage of users, this was still over 15%. I don’t
> think that very many of them would be running 64-bit, either. 10.7 has
> that problem as well, but it is a very small percentage of users.

Why do you think that? 10.6 can run 64-bit binaries just fine. Just look
at any of the "OS X 10.6 debug" rows on Treeherder. Our OS X debug
builds are 64-bit only.

-Ted
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Can we make a plan to retire Universal Mac builds?

2015-08-05 Thread Ted Mielczarek
On Wed, Aug 5, 2015, at 06:59 PM, Matthew N. wrote:
> Assuming our FHR data is gathering correct data:
> 
> 1.5% of our OS X users are on x86. (There is no date on the dashboard 
> I'm looking at)
> 
> If we have data on CPU architecture I don't think the OS version is 
> relevant unless I'm missing something.

Thanks, that's very useful! I'd be interested to see exactly what we're
capturing there, but that can only be an upper bound on the number of
users affected. (It's possible to force an application to run as 32-bit
even if you have a 64-bit capable machine, so we may be overcounting
some users in that way.)

-Ted
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: I need to give my 2–coins–worth on this topic, please. (Re: Can we make a plan to retire Universal Mac builds?)

2015-08-12 Thread Ted Mielczarek
On Wed, Aug 12, 2015, at 03:56 AM, SciFi wrote:
> Hello,
> 
> 
> I need to give my 2–coins–worth on this topic, please.
> 
> 
> If Mozilla decides to drop the 32–bit Mac users,
> then also drop the 32–bit Windows users
> and the 32–bit Linux users
> etc etc etc etc etc.
> 
> I bet you’d hear a HUGE CRY from these other groups.

Dropping 32-bit Linux would be a totally reasonable proposition.
Dropping 32-bit Windows would not--we haven't shipped an official 64-bit
Windows release yet, and a huge percentage of our users are using 32-bit
Windows, so they can't run a 64-bit Firefox. It's not the same issue at
all. Apple has shipped support for 64-bit applications since OS X 10.5,
and has actually dropped support for 32-bit in recent releases.

> So I don’t want us poor Mac users to be slighted, either.
> 
> I guess I need to be their ‘voice’ in this discussion.
> 
> So I request Mozilla to at least continue supporting the 32–bit Mac users
> until
> and only until
> a third–party group can deal out working code for them
> as mentioned earlier in this thread.
> This will also be applicable to other Firefox–based apps
> that are presently available for Mac users
> such as Thunderbird, SeaMonkey,
> etc.

Sorry, but that's just not how things work. If someone wants to maintain
a 32-bit Firefox build, like how Tenfourfox is maintaining a PPC Firefox
build, that's fine, but it doesn't mean Mozilla should have to keep
things working until such a group emerges.

> I use this iMac in 32–bit mode whenever it’s available within each app.
> There’s no need to try 64–bit mode when the hardware is designed with no
> more than 4–GB RAM entirely.
> (That’s the main reason the app called SixtyFour deals with,
>  
>  further explanations at that site)

So this transition wouldn't affect you then, you could simply run
Firefox as a 64-bit app. Just because you *choose* to run apps in 32-bit
mode doesn't mean you *have* to. I am sympathetic to users that can't
afford to upgrade their hardware, but "I want to run apps in 32-bit
mode" isn't a compelling argument.

Right now the best data we have shows that only 1.5% of our users are
running 32-bit Mac builds, so I think it's reasonable to drop support
for those users once we have the other blocking issues resolved.

-Ted
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: MozPromises are now in XPCOM

2015-08-19 Thread Ted Mielczarek
On Tue, Aug 18, 2015, at 11:17 PM, Bobby Holley wrote:
> I gave a lightning talk at Whistler about MozPromise and a few other
> new tools to facilitate asynchronous and parallel programming in Gecko.
> There was significant interest, and so I spent some time over the past
> few weeks untangling them from dom/media and hoisting them into xpcom/.

This looks fantastic! Having dealt with threading in C++ Gecko code in
the past it's great to see tooling being built to alleviate some of the
overhead and reduce the potential for bugs. Great work!
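
For anyone who hasn't seen the API yet, the basic shape is something
like this (a sketch from memory; `backgroundTarget` and `currentTarget`
are placeholder thread targets, and MozPromise.h is the authority on
the real signatures):

```
// Kick off work on another thread and consume the result back here,
// with no hand-rolled locking.
RefPtr<GenericPromise> promise =
    InvokeAsync(backgroundTarget, __func__, []() {
      // ...expensive work...
      return GenericPromise::CreateAndResolve(true, __func__);
    });
promise->Then(currentTarget, __func__,
              [](bool aOk) { /* resolved */ },
              [](nsresult aErr) { /* rejected */ });
```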

-Ted
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: The opt-in FAIL_ON_WARNINGS has been replaced with the opt-out ALLOW_COMPILER_WARNINGS

2015-08-31 Thread Ted Mielczarek
On Mon, Aug 31, 2015, at 01:47 PM, Chris Peterson wrote:
> Should we hold third-party code to the same warning levels as Mozilla's 
> home-grown code? When we find warnings in third-party code, we typically 
> just suppress them because they weren't serious issues and fixing them 
> upstream is extra work. Sometimes upstream doesn't care or want the
> fixes.

This is hard because it means that pulling in an update from upstream
can lead to bustage that needs to be fixed. I've hit this when updating
Breakpad before and it adds an extra layer of annoyance to an already
annoying update process. We can certainly try to upstream warning fixes,
but we shouldn't make life harder for ourselves either.

> 
> In other projects I've worked on, such as closed-source commercial 
> projects or Chromium, third-party code has been "quarantined" in a 
> top-level vendor directory (called something like "third_party" [1]). 
> Having third-party code in one directory improves modularity and makes 
> it easier to audit code licenses and to identify and update outdated 
> libraries. In contrast, mozilla-central has third-party libraries 
> sprinkled throughout the tree and each library uses its a different 
> update process or script. It would be nice to share a common process and 
> script.

I filed a bug[1] a while ago about doing this. I think it'd be great to
be able to do `./mach update-thirdparty breakpad ` and have it do
the right thing.

-Ted

1. https://bugzilla.mozilla.org/show_bug.cgi?id=1130343
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Disabling C++ tests by default?

2015-10-02 Thread Ted Mielczarek
On Fri, Oct 2, 2015, at 04:40 PM, Bobby Holley wrote:
> On Fri, Oct 2, 2015 at 1:03 PM, Ehsan Akhgari 
> wrote:
> 
> > On 2015-10-02 2:42 PM, Jonas Sicking wrote:
> >
> >> It might still mean that we can save time on tryserver if we only
> >> build these by default if the user has opted in to running the
> >> relevant tests.
> >>
> >> I agree with Gregory. I really don't see much value in building these
> >> binaries by default. For the people that use them often enough that
> >> they are worth having, adding a line to mozconfig is easy enough.
> >>
> >
> > There is one concrete advantage to building them, which is if your change
> > ends up breaking some of them, you'll know immediately and you can fix it
> > much faster than it you find out about it on the try server and/or on
> > inbound.
> >
> 
> +1. I often break gtests when refactoring code, and it would be super
> annoying to need to wait to find out about them on try. Given that this
> impacts me as a bystander and not someone who runs those tests all the
> time, I don't think that the right solution is for me to personally
> opt-in
> to the old behavior in all of my mozconfigs.

n.b., gps isn't talking about gtests, those already aren't built by
default unless you run them. He's talking about CPP_UNIT_TESTS.

-Ted
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Merging comm-central into mozilla-central

2015-10-23 Thread Ted Mielczarek
On Fri, Oct 23, 2015, at 02:17 PM, Joshua Cranmer 🐧 wrote:
> Except that to demand contributors don't care about comm-central would 
> be to demand of your employees that they should be jerks to the wider 
> open-source community. Merging comm-central into mozilla-central, with 
> the exception of the time spent doing the actual merge work, would 
> reduce the amount of time that core contributors would have to spend 
> worrying about comm-central in the short and medium-terms for sure.

This is the most salient point to me--even with comm-central code in a
separate repository Mozilla employees still often try to do due
diligence to not break Thunderbird unnecessarily. Having the code in a
separate repository means they essentially always have to do *more*
work, even for trivial things like scriptable rewrites. I've had
situations where making Thunderbird work would be zero effort if it were
in m-c (since the code would be shared, like for build system work), but
I wind up breaking them because I didn't go above and beyond and clone
comm-central and duplicate my fix.

jcranmer is right. We already have lesser-supported and basically
unsupported code in the tree, this isn't going to make life any worse.

-Ted
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Using the Taskcluster index to find builds

2015-12-02 Thread Ted Mielczarek
On Wed, Dec 2, 2015, at 09:52 AM, Benjamin Smedberg wrote:
> In order to correlate telemetry data, I need a time series of all 
> mozilla-central nightly builds (with version and buildid). It's 
> important that when there are multiple nightlies on a given date, that I 
> get all of them.

You should be able to list the namespaces under the
gecko.v2.mozilla-central.nightly.<date>.revision index, like:
```
$ curl -X POST
https://index.taskcluster.net/v1/namespaces/gecko.v2.mozilla-central.nightly.2015.12.02.revision
{
  "namespaces": [
{
  "namespace":
  
"gecko.v2.mozilla-central.nightly.2015.12.02.revision.470f4f8c2b2d6f82e56e161a4b05262c85f55b59",
  "name": "470f4f8c2b2d6f82e56e161a4b05262c85f55b59",
  "expires": "2016-12-01T00:00:00.000Z"
}
  ]
}
```

It's a POST because it allows you to pass a parameter for pagination of
results, but all the parameters are optional. That should give you a
list of all the changesets that were published as nightlies for that
date and then you can drill down to find individual builds.

> I'd also like to enhance that dashboard to construct regression range 
> links between particular builds. How would I go from a channel+buildid 
> (recorded in telemetry) to a revision number? Would that be for example 
> deconstructing the buildid and putting it back into the form 
> gecko.v2.mozilla-central.pushdate..MM.DD/BUILDID/firefox/ and then 
> reconstructing the "linux32-opt" path from the other telemetry data?

Is buildid really all you have in telemetry? If you have a changeset
then you can use
"gecko.v2.mozilla-central.nightly.revision..firefox.",
like:
```
$ curl
https://index.taskcluster.net/v1/task/gecko.v2.mozilla-central.nightly.revision.0010c0cb259e28faf764949df54687e3a21a2d0a.firefox.win32-opt
{
  "namespace":
  
"gecko.v2.mozilla-central.nightly.revision.0010c0cb259e28faf764949df54687e3a21a2d0a.firefox.win32-opt",
  "taskId": "O8opLQBnRhaJcSQ8sibuSQ",
  "rank": 1443818500,
  "data": {},
  "expires": "2016-10-02T14:28:03.535Z"
}
```

> 
> I'd like to recreate 
> http://bsmedberg.github.io/firefox-regression-range-finder/ using 
> something other than FTP scraping. How would I go from a known 
> build/revision to the previous/next nightly/aurora/beta build?
> 
> I'm feeling a bit stupid about the actual API: how does one go from a 
> "browse" URL such as 
> https://tools.taskcluster.net/index/artifacts/#gecko.v2.mozilla-central.nightly.2015.11.27.latest.firefox.win64-opt/gecko.v2.mozilla-central.nightly.2015.11.27.latest.firefox.win64-opt
>  
> to a machine-readable API URL? I imagine that tools like mozregression 
> have similar needs.

I fully agree that this is underdocumented! On that browse page, the
string in the text box is the index namespace, which you can send to the
index API's findTask[1] to get tasks out. For your example URL there,
that'd be:
```
$ curl
https://index.taskcluster.net/v1/task/gecko.v2.mozilla-central.nightly.2015.11.27.latest.firefox.win64-opt
 
{
  "namespace":
  "gecko.v2.mozilla-central.nightly.2015.11.27.latest.firefox.win64-opt",
  "taskId": "B3EdfRvWSBmgw9pVuGWQ_w",
  "rank": 1448618936,
  "data": {},
  "expires": "2016-11-26T16:33:17.089Z"
}
```

Once you have a taskId you can call listArtifacts[2] / getArtifact[3] on
the Queue to get its artifacts. I have some Python code[4] that uses the
Python taskcluster client[5] to do that if you're interested to see a
working example.


> 
> Can taskcluster directly provide version number/buildid/revision of a 
> particular nightly, or would I have to fetch one of the artifacts like 
> buildprops.json or buildbot_properties.json to get that data? Looking 
> through buildprops, it has the buildid and revision but not the version 
> number. If I wanted to get this data for a whole range of builds, would 
> I have to fetch each file individually or is there a list/batch API? Are 
> either buildprops.json or buildbut_properties.json documented/stable 
> formats?

The index API docs say "Indexed Data, when a task is located in the
index you will get the taskId and an additional user-defined JSON blob
that was indexed with the task. You can use this to store additional
information you would like to retrieve from the index.", so we
could indeed store arbitrary data in there, but I don't think we're
doing that currently.

-Ted

1. http://docs.taskcluster.net/services/index/#findTask
2. http://docs.taskcluster.net/queue/api-docs/#listArtifacts
3. http://docs.taskcluster.net/queue/api-docs/#getArtifact
4.
https://github.com/luser/breakpad-taskcluster/blob/master/build-in-tc.py#L101
5. https://pypi.python.org/pypi/taskcluster
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal to a) rewrite Gecko's encoding converters and b) to do it in Rust

2015-12-04 Thread Ted Mielczarek
On Fri, Dec 4, 2015, at 06:53 AM, Henri Sivonen wrote:
> Hi,
> 
> I have written a proposal to a) rewrite Gecko's encoding converters
> and b) to do it in Rust:
> https://docs.google.com/document/d/13GCbdvKi83a77ZcKOxaEteXp1SOGZ_9Fmztb9iX22v0/edit
> 
> I'd appreciate comments--especially from the owners of the uconv
> module and from people who have worked on encoding-related Rust code
> and on Rust code that needs encoding converters and is on track to be
> included in Gecko.

I don't really know anything about our encoding story, so I'll leave
that to others, but I'm generally in favor of writing new code in Rust
and replacing bits of Gecko with new Rust implementations. I don't know
that we've worked out all the kinks in including Rust code in Gecko
yet[1], but we're getting pretty close.

I have two questions:
1) What does Servo do, just use rust-encoding directly?
2) Instead of a clean-room implementation, would it be possible to fix
the problems you see with rust-encoding so that it's suitable for our
use? Especially if Servo is already using it, it would be a shame to
wind up with two separate implementations.

-Ted

1. https://bugzilla.mozilla.org/show_bug.cgi?id=oxidation
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal to a) rewrite Gecko's encoding converters and b) to do it in Rust

2015-12-14 Thread Ted Mielczarek


On Sun, Dec 13, 2015, at 09:27 PM, Nicholas Nethercote wrote:
> On Sun, Dec 13, 2015 at 11:28 AM, Bobby Holley 
> wrote:
> >>
> >> I've been wondering about this. There's a big difference between (a)
> >> permitting Rust components (while still allowing fallback C++
> >> equivalents) and (b) mandating Rust components.
> >
> > I don't know why we would allow there to be a long gap between (a) and (b).
> > Maintaining/supporting two sets of the same code is costly. So if we get the
> > rust code working and shipping on all platforms, I can't think of a reason
> > why we wouldn't move as quickly as possible to requiring it.
> 
> The "if" in your second sentence is exactly what I'm worried about. My
> gut tells me that step (b) is a *lot* harder than step (a). I could be
> too pessimistic, but Android and the tier 3 platforms worry me.

The Rust team has been very supportive of meeting the needs that we (the
build folks on behalf of Gecko) have stated as requirements for enabling
Rust code everywhere. I'm quite confident that the Rust compiler already
supports targeting all of our Tier-1 platforms, it's just a matter of
getting things wired up in our production build environments.

We will definitely hit a point where we want to make Rust a hard
requirement for builds. This will likely cause some existing platforms
to no longer build. Obviously this isn't something we like to see, but
we shouldn't let the support of non-Tier 1 platforms guide our decision
making to that extent. Enabling Rust components in Gecko is important
work, and outweighs the value of supporting Firefox on minority
platforms. (Incidentally, the Rust compiler has been ported to other
platforms by community members, so this is not entirely out of the
question.)

-Ted
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Status: ASan builds on Windows

2015-12-15 Thread Ted Mielczarek
On Tue, Dec 15, 2015, at 09:36 AM, Nicolas B. Pierron wrote:
> The crash reporter is currently disabled on ASan builds; we need to
> figure out why. One hypothesis (I do not recall the author) was that we
> have issues with the SEGV handler.

That was my hypothesis. I remember talking to decoder about this at some
point. ASan wants to handle exceptions internally, but Breakpad
registers a signal handler, and I think they don't play nice.

> 2.5/ Release Management.
> 
> ASan builds have a 2x overhead, and this implies that we have to ship 
> different binaries; ASan is not a simple toggle (as far as I know).
> 
> The performance impact is too high to ship ASan builds by default
> (Lawrence Mandel).  And as this implies that we have to ship a new
> version of Firefox, we would have to let people opt in for a short
> while on nightly before making them fall back to the normal nightly, or
> suggest these ASan builds on support.mozilla.org to investigate.

Additionally, ASan builds are incredibly different than what we're
shipping to users now--we'd be building them with an entirely different
toolchain. Since part of the point of nightly builds is to have users
testing the bleeding edge form of what we ship, this would take that
away. We could have an opt-in population, but we definitely could not
make them the default.

-Ted
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Does js-ctypes supports for nsISupports objects.

2015-12-23 Thread Ted Mielczarek
No, js-ctypes does not have any support for calling methods on C++
classes. In fact, that functionality was WONTFIXed a while back:
https://bugzilla.mozilla.org/show_bug.cgi?id=505907.
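
It can only bind flat C symbols, e.g. something like (illustrative,
Linux-flavored):

```
const { ctypes } = Components.utils.import("resource://gre/modules/ctypes.jsm", {});
let libc = ctypes.open("libc.so.6");
// declare(name, ABI, return type, argument types...)
let abs = libc.declare("abs", ctypes.default_abi, ctypes.int, ctypes.int);
abs(-5); // 5
libc.close();
```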

-Ted
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Non-tier 1 builders: NSPR usage?

2016-01-15 Thread Ted Mielczarek
Hello,

I'm interested in feedback from anyone out there that's doing builds on
non-Tier 1 platforms. Specifically, I want to know if you build
--with-system-nspr or not. I've got patches[1] to stop using NSPR's
autoconf build system in favor of moz.build files, but I've only made
them support our Tier 1 platforms currently. glandium suggested as a
fallback that on non-Tier 1 platforms we could have the build system
invoke NSPR's configure+make as usual, treating it more like the
--with-system-nspr case. I haven't implemented that, and I was curious
as to what configuration people are actually building in on those
platforms. The other option is to simply require --with-system-nspr on
platforms where our moz.build files don't support building NSPR, but if
folks aren't already doing that, that's a bit more of a hassle.

-Ted

1. https://bugzilla.mozilla.org/show_bug.cgi?id=1230117
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Taking screenshots of single elements (XUL/XULRunner)

2016-01-19 Thread Ted Mielczarek
On Tue, Jan 19, 2016, at 01:39 AM, m.bauermeis...@sto.com wrote:
> As part of my work on a prototyping suite I'd like to take screenshots
> (preferably retaining the alpha channel) of single UI elements. I'd like
> to do so on an onclick event.
> 
> Is there a straightforward way to accomplish this? Possibly with XPCOM or
> js-ctypes?

You can use the drawWindow method of CanvasRenderingContext2D:
https://developer.mozilla.org/en-US/docs/Web/API/CanvasRenderingContext2D/drawWindow

You just need to create a canvas element, call getContext('2d') on it,
and then calculate the offset of the element you want to screenshot.
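
A minimal sketch, assuming chrome-privileged code and an element `el`
you've already looked up:

```
let rect = el.getBoundingClientRect();
let canvas = document.createElementNS("http://www.w3.org/1999/xhtml", "canvas");
canvas.width = rect.width;
canvas.height = rect.height;
let ctx = canvas.getContext("2d");
// A fully transparent background color preserves the alpha channel.
ctx.drawWindow(window, rect.left + window.scrollX, rect.top + window.scrollY,
               rect.width, rect.height, "rgba(0,0,0,0)");
let png = canvas.toDataURL("image/png");
```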

-Ted
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Use of C++11 std::unique_ptr for the WOFF2 module

2016-02-01 Thread Ted Mielczarek
On Mon, Feb 1, 2016, at 04:29 AM, Frédéric Wang wrote:
> Dear all,
> 
> I'm trying to upgrade our local copy of OTS to version 5.0.0 [1]. OTS
> relies on the Brotli and WOFF2 libraries, whose source code we currently
> include in mozilla-cental.
> 
> I tried updating the source code of WOFF2 to the latest upstream
> version. Unfortunately, try server builds fail on OSX and mobile devices
> because the C++11 class std::unique_ptr does not seem to be available.
> IIUC some bugzilla entries and older threads on this mailing list, at
> the moment only some of the C++11 features are usable in the mozilla
> build system. Does any of the build engineer know whether
> std::unique_ptr can be made easily available? Or should we just patch
> the WOFF2 library to use of std::vector (as was done in earlier version)?

The biggest hurdle for us using C++11 features historically has been
stlport on Android/B2G. Nathan Froyd has investigated fixing this in the
past, but I don't know what the current status is.

-Ted

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Using rr chaos mode to find intermittent bugs

2016-02-10 Thread Ted Mielczarek


On Wed, Feb 10, 2016, at 03:04 PM, Robert O'Callahan wrote:
> Background:
> http://robert.ocallahan.org/2016/02/introducing-rr-chaos-mode.html
> 
> I just landed on rr master support for a "-h" option which enables a
> chaos
> mode for rr recording. This is designed to help reproduce intermittent
> test
> failures under rr. We already have a few reports of people using this
> successfully to find difficult bugs. Even though rr works only on desktop
> Linux (including VMs), I've reproduced a bug that only showed up in
> automation on Android, and khuey reproduced a bug that only showed up on
> OSX 10.6.
> 
> I'm continuing to do experiments to try to reproduce more of our top
> intermittents, but you may already find rr chaos mode useful. I recommend
> running a single test or a small group of tests continuously; one of my
> bugs only had a few failing runs out of a thousand. I'm sure there are
> still bugs rr can't reproduce, and I'm very interested in hearing about
> bugs that eventually get fixed but that rr was not able to reproduce. By
> studying such bugs we can improve rr chaos mode so it can find them.
> 
> Obviously, once rr chaos mode has proved itself, we should get some
> automation around it. I'd like a bit more experience with it before we
> have
> that discussion.

This is great! I've kept holding out hope that rr can help us fix
intermittent test failures, but so far we've failed to actually prove
this out. BenWa tried doing some work on this but kept getting hung up
on hitting test failures unrelated to the ones we see in production,
possibly due to environment issues. jmaher and armenzg and others have
been doing some great work lately standing up Linux tests in
Taskcluster, as a side effect of which we now have a Docker image
for running Linux tests. If anyone wants to prototype reproducing
failures from CI running rr inside that image would be a good place to
start.

-Ted
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: rr chaos mode update

2016-02-16 Thread Ted Mielczarek
I've heard of other companies putting new tests into "quarantine" like
this; it's a reasonable idea.

I think the suggestion of running new tests under rr chaos mode is to
explicitly find timing bugs of the type that chaos mode exposes, which
tend to be hard to reproduce otherwise (but manifest as intermittent
failures).

-Ted

On Tue, Feb 16, 2016, at 11:38 AM, Nick Fitzgerald wrote:
> It seems like try/tbpl could automatically detect new test files and run
> them N times. That way, the developer doesn't have to do it manually, so
> it
> is less "intimidating" and also less likely to be skipped by accident or
> forgotten.
> 
> Running under rr would be nice, but even without rr this seems like it
> would be a nice addition to our testing infrastructure.
> 
> On Mon, Feb 15, 2016 at 11:59 PM, Axel Hecht  wrote:
> 
> > On 16/02/16 03:15, Kyle Huey wrote:
> >
> >> Seems like a good thing to expect developers to do locally today.
> >>
> >
> > Two concerns:
> >
> > What's the successs criteria here?
> >
> > Also, speaking as an occasional code contributor, newcomers and folks like
> > me will probably give up on contributing patches earlier.
> >
> > Axel
> >
> >
> >> - Kyle
> >>
> >> On Mon, Feb 15, 2016 at 6:08 PM, Justin Dolske 
> >> wrote:
> >>
> >> On 2/14/16 9:25 PM, Bobby Holley wrote:
> >>>
> >>> How far are we from being able to use cloud (rather than local) machine
> >>>
>  time to produce a trace of an intermittently-failing bug? Some one-click
>  procedure to produce a trace from a failure on treeherder seems like it
>  would lower the activation energy significantly.
> 
> 
> >>> And with that... At some point, what about having all *new* tests be
> >>> battle-tested by X runs of rr-chaos testing?  If it passes, it's allowed
> >>> to
> >>> run in the usual CI automation. If it fails, it's not (and you have a
> >>> handy
> >>> recording to debug).
> >>>
> >>> Justin
> >>>
> >>> ___
> >>> dev-platform mailing list
> >>> dev-platform@lists.mozilla.org
> >>> https://lists.mozilla.org/listinfo/dev-platform
> >>>
> >>>
> > ___
> > dev-platform mailing list
> > dev-platform@lists.mozilla.org
> > https://lists.mozilla.org/listinfo/dev-platform
> >
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Status Notifications for New Bugs in DOM, DevTools, and Hello

2016-03-04 Thread Ted Mielczarek
On Fri, Mar 4, 2016, at 07:48 AM, Philip Chee wrote:
> On 01/03/2016 03:25, Emma Humphries wrote:
> 
> > With the help of dedicated triage teams for each component, starting this
> > week when you file a bug against the DOM, Developer Tools, or Hello in
> > Bugzilla you’ll receive an email explaining the next steps in your bug’s
> > life.
> 
> I've been contributing code/patches and bug reports to Mozilla for ten
> years. I really don't need to be mansplained.
> 
> > The most important part of that mail will be the decision [2] the team has
> > made about your bug: that it’s urgent and will be fixed in an upcoming
> > release of Firefox, that it’s already being worked on (or will be soon); if
> 
> Of course it's being worked on. Why else would I attach a patch to the
> bug?

I think you could have more productively suggested this as an additional
constraint on the implementation--if someone files a bug and assigns it
to themselves, or files a bug with a patch, we can skip the triage step.

-Ted
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Setting property on the element is no longer working on Firefox 45

2016-03-10 Thread Ted Mielczarek
On Thu, Mar 10, 2016, at 01:23 PM, Devan Shah wrote:
> hello
> 
> When I set a custom property such as element.listofSomething = [] and
> then build the list and add it back to the same element. Then this
> element is passed to a function; now in that function I am no longer able to
> access this property that I added to the function. 
> 
> Was there any sort of changes that would impact this?
> 
> Also if I make use of Element.prototype to set a custom variable and try
> to access it for an element it is not available any more. Is there
> something that I am missing. (note this is when inside a plugin)

FYI, I don't know what your particular bug is, but setting custom
properties on DOM elements is called "expandos", which might help you
file a more useful bug report:
https://developer.mozilla.org/en-US/docs/Glossary/Expando
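
For illustration, reusing the property name from your message (the
element lookup here is hypothetical):

```
// An expando is just a JS property stored on the element's JS wrapper,
// not on the underlying C++ DOM node.
let el = document.querySelector("#some-element");
el.listofSomething = [];          // create the expando
el.listofSomething.push("item");
console.log(el.listofSomething);  // visible wherever the same wrapper
                                  // (same compartment) sees the element
```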

-Ted
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Mapping Rust panics to MOZ_CRASH on non-Rust-created threads

2016-03-22 Thread Ted Mielczarek
On Tue, Mar 22, 2016, at 06:51 PM, Brian Smith wrote:
> On Tue, Mar 22, 2016 at 3:03 AM, Henri Sivonen 
> wrote:
> 
> > It seems that the Rust MP4 parser is run on a new Rust-created thread in
> > order to catch panics.
> >
> 
> Is the Rust MP4 parser using panics for flow control (like is common in
> JS
> and Java with exceptions), or only for "should be impossible" situations
> (like MOZ_CRASH in Gecko)?
> 
> IMO panics in Rust should only be used for cases where one would use
> MOZ_CRASH and so you should configure the rust runtime to abort on
> panics.
> 
> I personally don't expect people to correctly write unwinding-safe
> code—especially when mixing non-Rust and Rust—any more than I expect
> people
> to write exception-safe code (i.e. not at all), and so abort-on-panic is
> really the only acceptable configuration to run Rust code in.

I think I agree with this assessment. We'd just also like to make sure
that the specific way that the Rust code aborts triggers our Breakpad
exception handler, as we've had problems with this in the past (calling
abort() does not reliably do so, except in Gecko code where we override
the symbol), hence the repeated refrain of "MOZ_CRASH".

-Ted
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Why is Mozreview hassling me about squashed commits?

2016-04-04 Thread Ted Mielczarek
On Sun, Apr 3, 2016, at 09:09 PM, L. David Baron wrote:
> On Saturday 2016-04-02 18:51 -0300, Eric Rescorla wrote:
> > 1. I write a bunch of code, committing along the way, so I have a lot of
> > commits named "Checkpoint" and "Fix bug" and the like.
> > 2. When it works, I push the code up to the review system for review.
> > 3. In response to review comments, I add a bunch more changes as new
> > commits and push them up the review system for review.
> > 4. Repeat 2 and 3 until I get r+
> > 5. Squash everything into one commit and land it.
> > 
> > Every time I do #3, it creates a new review request, but as you can see,
> > this doesn't have any meaningful connection to my local commits, which is a
> > good thing because while I want to keep my local history, I don't want it
> > to show up either in review or in the tree. This is also the way I want to
> > see patches because I want to review the whole thing at once.
> 
> This is why I use mq.  With mq, I maintain the sequence of
> changesets that are the logical units to be committed (and submitted
> for review), and I have the history of that sequence (in a
> version-controlled patch repository).
> 
> It's useful for reviewability and for bisection for the logical
> units that I commit to be small and (for review) understandable.
> And it's useful for me to have a history of the work I've done, for
> backups, for the ability to revert, and for the ability to remember
> what I did and why.
> 
> I still think this is a good model for doing development, despite
> the attempts of Mercurial developers to deprecate it.  I recognize
> that it's not the right tool for everybody, though.

FYI, using Mercurial with the "mutable-history" extension enabled does
preserve this information, as changesets that have been modified are
kept in the repository and marked `obsolete`. You can still find and
inspect them with normal Mercurial commands, although you may need to
add `--hidden` to get `hg log` to display them, like:
`hg log --hidden -r 'allprecursors(changeset)'`

That command will show you all the obsolete changesets that are older
versions of the changeset in question. I would like Mercurial to grow
better ways to inspect this data; an equivalent of `hg log --graph`
that visualizes the life of a changeset would be great.

-Ted
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Why is Mozreview hassling me about squashed commits?

2016-04-04 Thread Ted Mielczarek
On Mon, Apr 4, 2016, at 12:46 PM, Steve Fink wrote:
> I should clarify that by "non-mq", I really mean using mutable-history 
> aka evolve. And yes, my workflow does depend on some extensions, 
> including some local stuff that I haven't cleaned up enough to publish. 
> (As does my mq workflow; I had to write mqext to get painless versioning 
> of my patch queue.) So I'm not saying "just switch off from mq to evolve 
> and everything will be great"; rather I'm saying that with some effort 
> to get accustomed to the new workflow and some customization, the 
> evolve-based workflow can feel better than the mq-based workflow, even 
> if you've built up a set of workarounds and accommodations to mq's 
> weaknesses that you're comfortable with.

I fully agree with this. I switched over from mq to
bookmarks+mutable-history a while back and I'm now very happy with it. I
assume that anyone that's comfortable with the standard git feature
branch workflow would be happy with this as well, since it's very
similar.

I make a hg bookmark per thing I'm working on (sometimes with one thing
being a branch off of another, for dependent patches), and use `hg
amend` and `hg histedit` quite a bit. I rebase work-in-progress changes
as-needed (for landing or whenever). The nice thing of using bookmarks
vs. mq is that you always have the DAG from your changesets, so a rebase
is always a 3-way merge instead of blindly applying patches and fixing
up the rejects. I use KDiff3 for my merge tool and I'm reasonably happy
with it. I've maintained even large (20+) patch series in this manner
and haven't had issues. I've done the same with mq in the past and
rebasing was always miserable, and I would inevitably wind up with
patches in my queue that didn't apply and I didn't know the base rev
they applied against, so I'd wind up having to practically rewrite them
when I wanted to apply them in the future.

Additionally, the "stack of patches" model of mq just never fit my
actual workflow very well, where I always have multiple WIP patches in
my tree, some of which are dependent on others, but others being
independent. Having bookmarks lets me just commit things in the DAG the
way they fit together, so I can juggle all my WIP without problems.

> For me, the main thing is to stay oriented, and bookmarks plus an alias 
> to do the equivalent of hg log -r 'ancestors(.) and not public()' go a 
> long way towards recovering mq's "what am I working on?" qseries -v 
> affordances. Though I immediately want it to be something closer to hg 
> log -r 'ancestors(.) and not public()' -T '{node|short} {desc}\n' and 
> then I want color etc etc... Anyway, I'll document what I have at some 
> point, and other people have their own things as good or better.

I have two aliases that I find invaluable, `booklog` and `bookedit`:
```
[alias]
booklog = log -r "ancestors(.) & draft()"
bookedit = histedit "first(ancestors(.) & draft())"
```

If my working directory changeset is a WIP patch, I can `hg booklog` to
see just the changes that are only in my local tree ("draft" in
Mercurial parlance) that are ancestors of the current rev, so just the
current change and anything it depends on. `bookedit` is just a handy
alias for "histedit the WIP changesets starting at the working
directory", so I can easily squash or reorder patches that I'm currently
working on.

What I find myself doing a lot is committing things in the units that
make sense, then if I need to make changes later I just commit fixup
changesets on top, and then `hg bookedit` to fold those changes back
into the changesets where they make sense. If there's only one changeset
in play, then `hg amend` is simpler, obviously.

-Ted
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Build System Project - Update from the last 2 weeks

2016-04-08 Thread Ted Mielczarek
No. GCC *has* an LTO optimizer, but we're not using it. We're just doing
a PGO build. MSVC requires enabling LTO to use their PGO, so the
resulting build has both.

-Ted

On Fri, Apr 8, 2016, at 05:08 PM, Jeff Gilbert wrote:
> I thought Linux did LTO but not PGO?
> 
> On Tue, Apr 5, 2016 at 3:53 PM, Mike Hommey  wrote:
> > On Tue, Apr 05, 2016 at 09:02:09PM +0100, David Burns wrote:
> >> Below is a highlight of all work the build peers have done in the last 2
> >> weeks as part of their work to modernise the build infrastructure.
> >>
> >> Since the last report[1] a large number of improvements have landed in
> >> Mozilla Central.
> >>
> >> The build system now lazily installs test files. Before, the build copied
> >> tens of thousands of test and support files. This could take dozens of
> >> seconds on Windows or machines with slow I/O. Now, the build system defers
> >> installing test files until they are needed there (e.g. when running tests
> >> or creating test packages). Furthermore, only the test files relevant to
> >> the action performed are installed. Mach commands running tests should be
> >> significantly faster, as they no longer examine the state of tens of
> >> thousands of files on every invocation.
> >>
> >> After upgrading build machines to use VS2015, we have seen a decrease in
> >> build times[2] for PGO on Windows by around 100 minutes. This brings PGO
> >> times on Windows in line with that of PGO(Strictly speaking this is LTO)
> >> times on Linux.
> >
> > Just a nit: strictly speaking Windows builds do PGO+LTO, Linux builds do
> > PGO, but not LTO.
> >
> > Mike
> > ___
> > dev-platform mailing list
> > dev-platform@lists.mozilla.org
> > https://lists.mozilla.org/listinfo/dev-platform
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Moving XP to ESR?

2016-04-19 Thread Ted Mielczarek
On Tue, Apr 19, 2016, at 04:14 PM, Nils Ohlmeier wrote:
> The good news is that dxr does not find anything using IsXPSP3OrLater().
> But this looks like we have a bit of version specific code in our tree:
> https://dxr.mozilla.org/mozilla-central/search?q=XP_WIN&redirect=false&case=true

FYI, the "XP" here means "cross-platform", and XP_WIN is set whenever
we're building for Windows.


> And on top of that come the costs for maintaining XP machines as part of
> the treeherder setup.
> 
> So it is not easy to quantify, but there is a “XP tax” we pay.

This is true. We hit this with toolchain support: we're actively jumping
through hoops to continue targeting XP with newer versions of MSVC.

-Ted
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Can MAX_REFLOW_DEPTH be increased?

2016-05-02 Thread Ted Mielczarek
On Mon, May 2, 2016, at 12:51 PM, L. David Baron wrote:
> Do you happen to know what the main thread stack size is on the
> platforms that we run on?

On Windows/x86 it's 1MB, on Windows/x86-64 it's 2MB, and on Linux and
OS X it's 8MB (all reserved, not committed, AIUI).

> One risk of such a change:  I'm not sure how good breakpad is at
> reporting crashes that result from stack overflows.  At the very
> least, I've seen crashes on Android that look like stack overflows,
> but didn't actually have a stack in the crash report (bug 1269013).

On Windows and Mac this should be fine. You can see Windows reports
easily:
https://crash-stats.mozilla.com/search/?reason=~EXCEPTION_STACK_OVERFLOW&_facets=signature&_columns=date&_columns=signature&_columns=product&_columns=version&_columns=build_id&_columns=platform#facet-signature

On Mac they are just reported as EXC_BAD_ACCESS /
KERN_PROTECTION_FAILURE, I believe.

We fail to generate crash reports for stack overflows on Linux:
https://bugzilla.mozilla.org/show_bug.cgi?id=481781 . I've never quite
gotten around to investigating this, although upstream Breakpad seems to
handle them fine last I checked.

-Ted
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: MacOS 10.6-10.8 support

2016-05-03 Thread Ted Mielczarek
On Mon, May 2, 2016, at 08:26 PM, Chris Peterson wrote:
> We're tentatively planning to remove NPAPI support (for plugins other 
> than Flash) in 53 because 52 is the next ESR. We'd like ESR 52 to 
> support NPAPI as a transition option for enterprise users that rely on
> Java.

Then we should plan to drop Universal builds in the same release,
because without supporting 10.6 or 32-bit NPAPI plugins, the 32-bit half
of the build is just cruft. Is there a bug tracking NPAPI removal?

-Ted
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Updating 32-bit Windows users to 64-bit Windows builds?

2016-05-12 Thread Ted Mielczarek
Hello,

Given all the discussion around SSE[2] lately, I was curious as to
whether we had made any plans to update Windows users that are running
32-bit Windows builds on a 64-bit Windows OS to our 64-bit Windows
builds. The 64-bit Windows builds do use SSE2, since that's a baseline
requirement for x86-64 processors, and the overall performance should
generally be better (modulo memory usage, I'm not sure if we have an
exact comparison). Additionally 64-bit builds are much less likely to
encounter OOM crashes due to address space fragmentation since they have
a very large address space compared to the maximum 4GB available to the
32-bit builds.

It does seem like we'd need some minimal checking here, obviously first
for whether the user is running 64-bit Windows, but also possibly
whether they use 32-bit plugins (until such time as we unsupport NPAPI).
32-bit plugins will not work on a 64-bit Windows Firefox (we do not have
the equivalent of Universal binaries like we do on OS X).
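
For the first check, a 32-bit build can detect that it's running under
WOW64 on a 64-bit OS; one well-known signal is the
PROCESSOR_ARCHITEW6432 environment variable. A sketch in chrome JS (the
gating policy in the comment is hypothetical):

```
// On 64-bit Windows, a 32-bit process sees PROCESSOR_ARCHITEW6432 set
// to the real architecture (normally "AMD64"); on 32-bit Windows and in
// 64-bit processes it's unset.
let env = Components.classes["@mozilla.org/process/environment;1"]
                    .getService(Components.interfaces.nsIEnvironment);
let on64BitWindows = env.get("PROCESSOR_ARCHITEW6432") === "AMD64";
if (on64BitWindows) {
  // Hypothetical policy: only then advertise "ok to move this user to a
  // 64-bit build" in the update ping, after also checking for
  // 32-bit-only NPAPI plugins.
}
```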

-Ted
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Updating 32-bit Windows users to 64-bit Windows builds?

2016-05-12 Thread Ted Mielczarek
I suspect we'd want to add some extra token like "it's ok to update this
32-bit build to a 64-bit build", and have all the gating logic live on
the client-side. Odds are if we want to change the criteria we'd have to
change the client anyway.

-Ted

On Thu, May 12, 2016, at 12:56 PM, Ben Hearsum wrote:
> Do you have thoughts on how we'll be able to serve the users the correct 
> build if we have to base the decision on plugins they may have or other 
> information that's not in the update ping? We can already detect 32-bit 
> builds on 64-bit Windows through the build target, but information about 
> plugins or RAM is not something we know about when serving updates.
> 
> On 2016-05-12 11:45 AM, Ted Mielczarek wrote:
> > Hello,
> >
> > Given all the discussion around SSE[2] lately, I was curious as to
> > whether we had made any plans to update Windows users that are running
> > 32-bit Windows builds on a 64-bit Windows OS to our 64-bit Windows
> > builds. The 64-bit Windows builds do use SSE2, since that's a baseline
> > requirement for x86-64 processors, and the overall performance should
> > generally be better (modulo memory usage, I'm not sure if we have an
> > exact comparison). Additionally 64-bit builds are much less likely to
> > encounter OOM crashes due to address space fragmentation since they have
> > a very large address space compared to the maximum 4GB available to the
> > 32-bit builds.
> >
> > It does seem like we'd need some minimal checking here, obviously first
> > for whether the user is running 64-bit Windows, but also possibly
> > whether they use 32-bit plugins (until such time as we unsupport NPAPI).
> > 32-bit plugins will not work on a 64-bit Windows Firefox (we do not have
> > the equivalent of Universal binaries like we do on OS X).
> >
> > -Ted
> >
> 
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Updating 32-bit Windows users to 64-bit Windows builds?

2016-05-12 Thread Ted Mielczarek
On Thu, May 12, 2016, at 12:22 PM, L. David Baron wrote:
> On Thursday 2016-05-12 11:45 -0400, Ted Mielczarek wrote:
> > requirement for x86-64 processors, and the overall performance should
> > generally be better (modulo memory usage, I'm not sure if we have an
> > exact comparison). Additionally 64-bit builds are much less likely to
> > encounter OOM crashes due to address space fragmentation since they have
> > a very large address space compared to the maximum 4GB available to the
> > 32-bit builds.
> 
> Might it be worth considering automatically updating users on
> machines with 6GB (roughly) or more of RAM, but leaving alone users
> with less RAM?

There are a number of axes on which this makes a difference. Certainly
OOM is one of them, and you're probably right that updating a 32-bit
user with less than 4GB of RAM to a 64-bit build would not help very
much there, since they would still run out of physical memory. (I'm not
sure how much of our virtual memory usage is mappings that could be
paged out, but at least all of our binaries are.) The axis that I was
thinking of most when writing the original email is "supporting SSE2
without having to jump through hoops", and I think that'd still be
valuable regardless of the amount of RAM available.

-Ted
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: How should I measure responsiveness?

2016-05-17 Thread Ted Mielczarek
On Tue, May 17, 2016, at 05:13 PM, Jason Orendorff wrote:
> Hi everyone.
> 
> I'm trying to figure out how to measure the effects of a possible change
> Morgan Phillips is making to the Slow Script dialog.[1] One specific
> thing
> we want to measure is "responsiveness" in the few seconds after a user
> chooses to stop a slow script. Whatever "responsiveness" means.
> 
> We have some Telemetry probes that seem related to responsiveness[2][3],
> but I... can't really tell what they measure. The blurb on
> telemetry.mozilla.org is not always super helpful.
> 
> Also I'm probably missing stuff. I don't see anything related to
> frames-per-second while scrolling, for example.
> 
> Who knows more? Which Telemetry probes (or internal-to-Gecko
> measurements/stats) are relevant to what users call "responsiveness"? Is
> there one in particular that you've used in the past? Where can I get
> more
> info?

We have a few Talos tests that measure responsiveness:
https://treeherder.mozilla.org/perf.html#/graphs?series=%5Bmozilla-inbound,8438187f83ee3c0ca2da633f9ee6a0ed11c3f1ab,1,1%5D&series=%5Bmozilla-inbound,17776dbfe42bfdd7473bf1712f41ff2fdafcf4bb,1,1%5D

They use some code I wrote a while back that injects tracer events into
the native widget event loop and times how long they take to get
serviced:
https://dxr.mozilla.org/mozilla-central/source/toolkit/xre/EventTracer.cpp

I believe this is separate from those telemetry probes, though.
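
The core idea is easy to sketch in JS, though EventTracer does this at
the native event loop level; this is illustrative only, not the Talos
code:

```
// Schedule an event with a known due time and measure how late it
// actually runs; the lag approximates event-loop responsiveness.
function sampleEventLoopLag(intervalMs, onSample) {
  let due = Date.now() + intervalMs;
  setTimeout(function() {
    onSample(Date.now() - due);  // lag in milliseconds
  }, intervalMs);
}

sampleEventLoopLag(10, lag => console.log("event loop lag:", lag, "ms"));
```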

-Ted
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: pushPrefEnv/popPrefEnv/flushPrefEnv now return Promises

2016-05-20 Thread Ted Mielczarek
On Thu, May 19, 2016, at 07:09 PM, Matthew N. wrote:
> Hello,
> 
> One of the reasons developers have been avoiding pushPrefEnv compared to 
> the synchronous set*Pref (with a registerCleanupFunction) is because 
> pushPrefEnv required using a callback function to wait for the 
> preference change before moving on in the test file. This can make the 
> test flow more complicated (especially when using add_task) and 
> therefore harder to follow.
> 
> Bug 1197310[1] made pushPrefEnv/popPrefEnv/flushPrefEnv return a promise 
> which resolves when the callbacks would have been called so now you can 
> simply write test code like so:
> 
> add_task(function* setup() {
>yield SpecialPowers.pushPrefEnv({"set": [["signon.debug", true]]});
>…
> })

This is a really great improvement, thanks!

-Ted
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Readable Bug Statuses in Bugzilla

2016-05-24 Thread Ted Mielczarek
On Tue, May 24, 2016, at 11:45 AM, Emma Humphries wrote:
> Last week the bugzilla.mozilla.org team had a work week in the San
> Francisco office. They were finishing the work on the modal edit view in
> Bugzilla, and joined them to land another new feature: Readable Statuses.
> 
> Bugs in bugzilla.mozilla.org have a lot of metadata, and it's often not
> immediately obvious what the state of a bug is. To help with that, I've
> written an *opinionated* package for npm that looks at the bug's metadata
> and returns a readable status message.

Hi Emma,

This sounds interesting! I looked at the npm page and the github repo,
but I didn't see any example output. I'm interested to know what the
readable statuses look like. Do you have a pointer to examples?

Thanks,
-Ted
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Common crashes due to MOZ_CRASH and MOZ_RELEASE_ASSERT

2016-06-01 Thread Ted Mielczarek
On Tue, May 31, 2016, at 09:26 PM, Jeff Gilbert wrote:
> On Tue, May 31, 2016 at 4:39 PM, Nicholas Nethercote
>  wrote:
> > On Wed, Jun 1, 2016 at 1:05 AM, Benjamin Smedberg  
> > wrote:
> >> You shouldn't need to annotate the file/line separately, because that is
> >> (or at least should be!) the top of the stack.
> >
> > Yes. Don't get hung up on the lack of annotations. It isn't much of a
> > problem; you can click through easily enough. I have filed bug 1277104
> > to fix the handful of instances that are showing up in practice, but
> > it'll only be a minor improvement.
> 
> Perhaps this isn't meant for me then? I looked at the query from the
> first post, but it's just noise to me. If it included the file that it
> crashed from, it would suddenly be very useful, since it'd then be
> trivial to see if there's something relevant to me.
> 
> As it stands now, the query alone doesn't seem useful to me. If it's
> meant to be useful to developers who write MOZ_CRASHes, this is a
> problem. If not, please ignore!
> 
> I would be extremely interested in MOZ_CRASHes and friends
> automatically getting bugs filed and needinfo'd. An index of
> crashes-by-file would get half-way there for me.

I believe at some point in the past we talked about trying to do a "top
crashes by bug component" view, but maintaining that mapping was hard.
It turns out that we're storing this data in moz.build files nowadays
(for example[1]), and we even have a web service on hg.mozilla.org to
expose it for any given revision[2]. Unfortunately that web service is
currently broken[3], but gps tells me he's just been delaying fixing it
because there weren't any consumers complaining.

When the source file from the last frame of the stack used to generate
the signature points to hg.mozilla.org we could query that web service
to get a bug component for the file in question and put that in the
crash report, and expose it to queries. That would make it easy to get
lists of crashes by component, which I think would do what people here
are asking for. I filed bug 1277337 to track this idea.
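
A sketch of what that lookup could look like, using the web service from
[2] (treat the response field names here as assumptions on my part;
check the service docs for the actual schema):

```
// Hypothetical lookup: map a source file path to a suggested bug
// component via the hg.mozilla.org moz.build info service ([2] above).
function bugComponentFor(path) {
  let url = "https://hg.mozilla.org/mozilla-central/json-mozbuildinfo" +
            "?p=" + encodeURIComponent(path);
  return fetch(url)
    .then(response => response.json())
    // Assumed field name; see [2] for the real response shape.
    .then(info => info.aggregate.recommended_bug_component);
}

bugComponentFor("dom/media/MediaDecoder.cpp")
  .then(c => console.log(c));  // e.g. ["Core", "Audio/Video"]
```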

-Ted

1.
https://dxr.mozilla.org/mozilla-central/rev/4d63dde701b47b8661ab7990f197b6b60e543839/dom/media/moz.build#7
2.
http://mozilla-version-control-tools.readthedocs.io/en/latest/hgmo/mozbuildinfo.html
3. https://bugzilla.mozilla.org/show_bug.cgi?id=1263973
4. https://bugzilla.mozilla.org/show_bug.cgi?id=1277337
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: All about crashes

2016-06-03 Thread Ted Mielczarek
On Wed, May 25, 2016, at 10:27 PM, Eric Rescorla wrote:
> - Making it so that certain kinds of defects still happen but they are
> safer.
>   For instance, in C writing dereferencing past the end of an array is
>   undefined behavior and may well cause something horrible, in Python
>   you get an exception, which, if not caught, causes program termination.
>   It's safer in the sense that it's unlikely to cause a security
> vulnerability,
>   but it's still a crash.

Right. There are two main reasons we track and fix crashes:
1) They are often a sign of potentially exploitable code, and we don't
want security holes in our product.
2) They create a bad user experience.

Moving code from C++ to JS or Rust improves our story around #1, since
it mitigates classes of vulnerabilities, but it still leaves us open to
#2, where we terminate the app unexpectedly and users are unhappy. I
think Rust does give us a better story here, since we can safely
restrict panics to individual threads, and as long as you don't unwrap
the thread result you don't have to propagate that panic across threads.

-Ted
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform

