Hi Lawrence,
Most (I would say 95%) of the backouts are for code issues - this includes
build bustages and test failures.
Of these code issues, I would guess it's about 2/3 breaking tests and
1/3 build bustages.
The other backout reasons are merge conflicts / backout requests for
changes causes ne
One thing that we have also noticed is that the backout rate on autoland is
lower than on inbound.
In the last 7 days the backout rate has averaged (merges have been removed):
- Autoland: 6% (24 backouts out of 381 pushes)
- Inbound: 12% (30 backouts out of 251 pushes)
I don't have graphs to show t
One large difference I see between autoland and mozilla-inbound is that on
autoland we have many single-commit pushes, whereas on mozilla-inbound there
are fewer. I see the Futurama data showing pushes and the sheriff report
showing total commits. Possibly there are some more data mining
opportunities :)
On 03/07/2017 05:29 AM, Ben Kelly wrote:
On Mon, Mar 6, 2017 at 5:42 PM, Nicholas Nethercote
wrote:
On Tue, Mar 7, 2017 at 9:22 AM, Ben Kelly wrote:
These measurements are for full content processes. Many of the processes
in the above do not need all the chrome script we load in content pr
Hi,
I agree with Sebastian - the integration of submitting to try and getting
the results in MozReview helps a lot, I think.
Also, this is all great feedback; I will add a more detailed section about
backouts (where they happen, like inbound/autoland, and the reason) in the
next monthly sheriff report.
C
On 3/7/17 6:23 AM, David Burns wrote:
- Autoland: 6% (24 backouts out of 381 pushes)
- Inbound: 12% (30 backouts out of 251 pushes)
Were those full backouts or partial backouts?
That is, how are we counting a multi-bug push to inbound where one of
the bugs gets backed out? Note that such
In recent months we have been triaging high frequency (>=30 times/week)
failures in automated tests. We find that we are fixing 35% of the bugs and
disabling 23% of them.
The great news is we are fixing many of the issues. The sad news is we are
disabling tests, but usually only after giving
This is just a rough count of how many pushes had a backout and how many
didn't. I don't have any data on whether these were full or partial backouts.
If there are multiple bugs in a push on inbound, a sheriff may revert the
entire push (or might not, depending on how obvious the error is and
availa
On 3/7/17 9:26 AM, jma...@mozilla.com wrote:
We find that we are fixing 35% of the bugs and disabling 23% of them.
Is there a credible plan for reenabling the ones we disable?
-Boris
On Monday, March 6, 2017 at 6:08:09 PM UTC-5, Boris Zbarsky wrote:
> On 3/6/17 5:29 PM, Mike Hommey wrote:
> > You can get the builds through the taskcluster index.
>
> Does that have the same lifetime guarantees as archive.mozilla.org?
>
> -Boris
So, to be clear..
This thread is talking about
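For reference, a rough sketch of what pulling a build through the taskcluster
index can look like (the route namespace and artifact name below are
illustrative assumptions, not a confirmed recipe):

  curl -L -o firefox.tar.bz2 \
    "https://index.taskcluster.net/v1/task/gecko.v2.mozilla-central.latest.firefox.linux64-opt/artifacts/public/build/target.tar.bz2"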
On Tuesday, March 7, 2017 at 10:37:12 AM UTC-5, Boris Zbarsky wrote:
> On 3/7/17 9:26 AM, jma...@mozilla.com wrote:
> > We find that we are fixing 35% of the bugs and disabling 23% of them.
>
> Is there a credible plan for reenabling the ones we disable?
>
> -Boris
Great question, we do not have
On Tue, Mar 7, 2017 at 3:26 PM, wrote:
> In recent months we have been triaging high frequency (>=30 times/week)
> failures in automated tests. We find that we are fixing 35% of the bugs
> and disabling 23% of them.
>
In case of mochitest browser tests failing on "This test exceeded the
timeout
On Tue, Mar 7, 2017 at 6:34 PM, Marco Bonardo
>
> In case of mochitest browser tests failing on "This test exceeded the
> timeout threshold", the temporary solution after 1 or 2 weeks should be to
> add requestLongerTimeout, rather than disabling them. They should still be
> split up into smaller te
Thanks for pointing that out. In some cases we have fixed tests that are
just timing out, in a few cases we disable because the test typically runs
much faster (i.e. <15 seconds) and is hanging/timing out. In other cases
extending the timeout doesn't help (i.e. a hang/timeout).
Please feel free t
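As a concrete way to tell a genuine hang from a merely slow test, the single
test can be hammered locally; a sketch assuming the mochitest harness's
--run-until-failure option and a made-up test path:

  ./mach mochitest --run-until-failure path/to/browser_example.js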
Wonder if autoland's backout rate will go up when autoland is detached from
mozreview (letting someone autoland directly from bugzilla is in the plans,
I believe).
On Tue, Mar 7, 2017 at 4:47 AM, Carsten Book wrote:
> Hi,
>
> i agree with Sebastian - the integration of submitting to try and gett
I presume that when a test is disabled, a bug is filed and triaged within
the responsible team like any regular bug. Only that way will we not forget
about it, and keep pushing on fixing it and bringing it back into rotation.
Are there also some data or stats on how often tests with a strong orange
factor catch actual r
Can you add these details to the bug? We should probably take the
conversation on the best way to fix bustage there.
Given the fallout (read: memory regression tracking is gone) and that, as you
noted, we have the ability to continue posting taskcluster builds to
archive.m.o, we should at least continu
On Tue, Mar 7, 2017 at 6:42 PM, Joel Maher wrote:
> Thank for pointing that out. In some cases we have fixed tests that are
> just timing out, in a few cases we disable because the test typically runs
> much faster (i.e. <15 seconds) and is hanging/timing out. In other cases
> extending the tim
However, we don't quite have that ability...
The taskcluster nightly stuff, which is posting to archive.m.o, is
doing so with a small subset of dedicated machines, which are not
overly powerful in disk space, networking throughput, or CPU.
It's also doing so by downloading the artifacts from
Is there a mechanism in place to detect when disabled intermittent tests
have been fixed?
E.g., every so often you could rerun disabled tests individually a bunch
of times. Or if you can distinguish which tests are failing, run them
all a bunch of times and pick apart the wreckage to see which o
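A minimal sketch of the "rerun it a bunch of times" idea, assuming the
mochitest harness's --repeat option and a placeholder test path:

  ./mach mochitest --repeat 20 path/to/browser_example.js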
On 3/7/17 1:33 PM, Honza Bambas wrote:
I presume that when a test is disabled a bug is filed
As far as I can tell, that's not the case...
If that were the case, that would be a good start, yes.
-Boris
On 3/7/17 11:10 AM, jma...@mozilla.com wrote:
Do you have suggestions for how to ensure we keep up with the disabled tests?
Things that pop to mind are having "reenable this test" bugs filed, and
possibly trying to reenable every so often and seeing whether it's still
intermittent
-Boris
I'm on Linux (Arch), with ccache, and I work on mozilla-central, rebasing my
bookmarks on top of central every couple days.
And every couple days the recompilation takes 50-65 minutes.
Here's my mozconfig:
▶ cat mozconfig
mk_add_options MOZ_MAKE_FLAGS="-j4"
mk_add_options AUTOCLOBBER=1
ac_add_o
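For comparison, a minimal ccache-oriented mozconfig sketch (the job count and
the --with-ccache option are assumptions chosen to illustrate the knobs being
discussed, not a recommendation):

  mk_add_options MOZ_MAKE_FLAGS="-j8"
  mk_add_options AUTOCLOBBER=1
  ac_add_options --enable-optimize
  ac_add_options --with-ccache=/usr/bin/ccache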
FWIW, the MOZ_MAKE_FLAGS bit can probably be removed, as I believe mach
will just choose the optimal number based on examining your processor cores.
On 2017-03-07 1:59 PM, zbranie...@mozilla.com wrote:
> mk_add_options MOZ_MAKE_FLAGS="-j4"
Perhaps you need a faster computer (or several). Are you building on Windows?
With icecream on Linux I can do a full clobber build in ~5 minutes.
-Jeff
On Tue, Mar 7, 2017 at 1:59 PM, wrote:
> I'm on Linux (Arch), with ccache, and I work on mozilla-central, rebasing my
> bookmarks on top of central ever
On Tue, Mar 7, 2017 at 8:05 PM, Mike Conley wrote:
> FWIW, the MOZ_MAKE_FLAGS bit can probably be removed, as I believe mach
> will just choose the optimal number based on examining your processor
> cores.
From my experience the chosen value is too conservative; I think it
defaults to cpu_coun
On 3/7/17 2:05 PM, Mike Conley wrote:
FWIW, the MOZ_MAKE_FLAGS bit can probably be removed, as I believe mach
will just choose the optimal number based on examining your processor cores.
Except mach's definition of "optimal" is "maybe optimize for compile
throughput", not "optimize for doing a
On Tuesday, March 7, 2017 at 1:53:48 PM UTC-5, Marco Bonardo wrote:
> On Tue, Mar 7, 2017 at 6:42 PM, Joel Maher wrote:
>
> > Thank for pointing that out. In some cases we have fixed tests that are
> > just timing out, in a few cases we disable because the test typically runs
> > much faster (i.
On Tue, Mar 07, 2017 at 06:26:28AM -0800, jma...@mozilla.com wrote:
In March, we want to find a way to disable the tests that are
causing the most pain or are most likely not to be fixed,
without unduly jeopardizing the chance that these bugs will be
fixed. We propose:
1) all high frequency (
On 03/07/2017 11:10 AM, Boris Zbarsky wrote:
On 3/7/17 2:05 PM, Mike Conley wrote:
FWIW, the MOZ_MAKE_FLAGS bit can probably be removed, as I believe mach
will just choose the optimal number based on examining your processor
cores.
Except mach's definition of "optimal" is "maybe optimize for
On Tuesday, March 7, 2017 at 1:59:14 PM UTC-5, Steve Fink wrote:
> Is there a mechanism in place to detect when disabled intermittent tests
> have been fixed?
>
> eg, every so often you could rerun disabled tests individually a bunch
> of times. Or if you can distinguish which tests are failing,
On Tue, Mar 7, 2017 at 8:11 PM, wrote:
> Thanks for checking up on this- there are 6 specific bugs that have this
> signature in the disabled set- in this case they are all linux32-debug
> devtools tests- we disabled devtools on linux32-debug because the runtime
> was exceeding in many cases 90 s
On Tue, Mar 07, 2017 at 02:14:48PM +0200, smaug wrote:
What do you mean by "chrome script"? Any chrome JS?
There is frame script overhead per tab in child processes
(around 340kB on my machine), but then we also have tons of
JSMs. I see 94 chrome compartments in a child process with one
tab. And
So,
I'm on a Dell XPS 13 (9350), and I don't think that toying with MOZ_MAKE_FLAGS
will help me here. "-j4" seems to be a bit high, slowing down my work a bit
while the compilation is going on, but it's bearable.
I was just wondering if really two days of patches landing in Gecko should
result in
On Tue, Mar 7, 2017 at 2:29 PM, wrote:
> So,
>
> I'm on Dell XPS 13 (9350), and I don't think that toying with MOZ_MAKE_FLAGS
> will help me here. "-j4" seems to be a bit high and a bit slowing down my
> work while the compilation is going on, but bearable.
>
> I was just wondering if really tw
Good suggestion here - I have seen so many cases where the simple
fix/disabled/unknown/needswork tags just do not describe it. Let me work on a
few new tags, given that we have 248 bugs to date.
I am thinking maybe [stockwell turnedoff] - where the job is turned off- we
could also ensure one of the last c
On Tue, Mar 07, 2017 at 11:15:54AM -0800, Kris Maglione wrote:
It would be nice if, rather than disabling the test, we could just
annotate so that it would still run, and show up in Orange Factor, but
wouldn't turn the job orange.
Which might be as simple as moving those jobs into a particular
On Tue, Mar 7, 2017 at 2:15 PM, Kris Maglione wrote:
>
> It would be nice if, rather than disabling the test, we could just
> annotate so that it would still run, and show up in Orange Factor, but
> wouldn't turn the job orange. And make sure someone is CCed on the bug to
> get the daily/weekly n
On Tue, Mar 07, 2017 at 02:38:38PM -0500, Joel Maher wrote:
On Tue, Mar 7, 2017 at 2:15 PM, Kris Maglione wrote:
It would be nice if, rather than disabling the test, we could just
annotate so that it would still run, and show up in Orange Factor, but
wouldn't turn the job orange. And make sure
I often wonder if unified builds are making things slower for folks who use
ccache (I assume one file changing would mean a rebuild for the entire
unified chunk). I'm not sure if there's a solution to that, but it would be
interesting to see if compiling without ccache is actually faster at this point.
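One way to check whether ccache is actually earning its keep on these
incremental rebuilds is to zero its statistics, rebuild, and look at the hit
rate (standard ccache flags, nothing Mozilla-specific):

  ccache -z        # zero the statistics
  ./mach build     # the rebuild being measured
  ccache -s        # show cache hit/miss counts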
On 03/07/2017 11:34 AM, Joel Maher wrote:
Good suggestion here- I have seen so many cases where a simple
fix/disabled/unknown/needswork just do not describe it. Let me work on a
few new tags given that we have 248 bugs to date.
I am thinking maybe [stockwell turnedoff] - where the job is turned
On Tuesday, March 7, 2017 at 2:57:21 PM UTC-5, Steve Fink wrote:
> On 03/07/2017 11:34 AM, Joel Maher wrote:
> > Good suggestion here- I have seen so many cases where a simple
> > fix/disabled/unknown/needswork just do not describe it. Let me work on a
> > few new tags given that we have 248 bugs
On 3/7/2017 11:19 AM, Steve Fink wrote:
I have at times spun off builds into their own cgroup. It seems to
isolate the load pretty well, when I want to bother with remembering how
to set it up again. Perhaps it'd be a good thing for mach to do
automatically.
Then again, if dropping the -j count
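A rough sketch of the cgroup idea on a systemd-based Linux box (the resource
property differs between cgroup v1 and v2 setups, so treat this as an
assumption rather than a recipe):

  systemd-run --user --scope -p CPUShares=512 ./mach build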
On 3/7/2017 3:38 AM, Joel Maher wrote:
One large difference I see between autoland and mozilla-inbound is that on
autoland we have many single commits/push whereas mozilla-inbound it is
fewer. I see the Futurama data showing pushes and the sheriff report
showing total commits.
autoland also in
On 07/03/17 20:29, zbranie...@mozilla.com wrote:
> I was just wondering if really two days of patches landing in Gecko should
> result
> in what seems like basically full rebuild.
>
> A clean build takes 65-70, a rebuild after two days of patches takes 50-60min.
That seems pretty normal to me n
Since the integration of bug 1339081 [1] in Nightly, the storage has
been upgraded from version 1.0 to 2.0.
This means that if you run an already upgraded profile (by current
Nightly) in an older version of Firefox, then any storage APIs that
use Quota Manager (especially IndexedDB and DOM cache
Are there any problems experienced by clients that downgrade to an older
version after their profile has been upgraded?
Thanks,
Robert
On Tue, Mar 7, 2017 at 2:32 PM, Jan Varga wrote:
> Since the integration of bug 1339081 [1] in Nightly, the storage has
> been upgraded from version 1.0 to 2.
>On 07/03/17 20:29, zbranie...@mozilla.com wrote:
>
>> I was just wondering if really two days of patches landing in Gecko should
>> result
>> in what seems like basically full rebuild.
>>
>> A clean build takes 65-70, a rebuild after two days of patches takes
>> 50-60min.
>
>That seems pretty n
On 07/03/17 23:43, Robert Strong wrote:
Are there any problems experienced by clients that downgrade to an
older version after their profile has been upgraded?
This major version change is downgrade-incompatible, so IndexedDB and
DOM cache won't work in an older version if their profile has be
On Wed, Mar 8, 2017, at 09:59 AM, Jan Varga wrote:
> On 07/03/17 23:43, Robert Strong wrote:
> > Are there any problems experienced by clients that downgrade to an
> > older version after their profile has been upgraded?
> >
> This major version change is downgrade-incompatible, so IndexedDB and
On Tue, Mar 07, 2017 at 11:29:00AM -0800, zbranie...@mozilla.com wrote:
> So,
>
> I'm on Dell XPS 13 (9350), and I don't think that toying with MOZ_MAKE_FLAGS
> will help me here. "-j4" seems to be a bit high and a bit slowing down my
> work while the compilation is going on, but bearable.
>
>
70 minutes is about what a clobber build takes on my Surface Book. And yes
I agree, it is way too much!
On Tue, Mar 7, 2017 at 3:24 PM, Mike Hommey wrote:
> On Tue, Mar 07, 2017 at 11:29:00AM -0800, zbranie...@mozilla.com wrote:
> > So,
> >
> > I'm on Dell XPS 13 (9350), and I don't think that t
On Tuesday, March 7, 2017 at 3:24:33 PM UTC-8, Mike Hommey wrote:
> On what OS? I have a XPS 12 from 2013 and a XPS 13 9360, and both do
> clobber builds in 40 minutes (which is the sad surprise that laptop CPUs
> performance have not improved in 3 years), on Linux. 70 minutes is way
> too much.
A
On Wed, Mar 08, 2017 at 10:09:52AM +1100, Xidorn Quan wrote:
> On Wed, Mar 8, 2017, at 09:59 AM, Jan Varga wrote:
> > On 07/03/17 23:43, Robert Strong wrote:
> > > Are there any problems experienced by clients that downgrade to an
> > > older version after their profile has been upgraded?
> > >
>
On Tue, Mar 07, 2017 at 03:50:56PM -0800, zbranie...@mozilla.com wrote:
> On Tuesday, March 7, 2017 at 3:24:33 PM UTC-8, Mike Hommey wrote:
> > On what OS? I have a XPS 12 from 2013 and a XPS 13 9360, and both do
> > clobber builds in 40 minutes (which is the sad surprise that laptop CPUs
> > perfo
I second Jeff's point about building with icecream[1]. If you work in
an office with a build farm, or near a fast desktop machine you can
pass jobs to, this makes laptop builds much more tolerable. Despite
the warnings on the MDN page, I do this over the WAN as well. It's a
lot slower than when I'm
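For anyone curious, the icecream setup on Linux looks roughly like this
(service names and the wrapper directory vary per distro; this is a sketch,
not the MDN recipe):

  sudo systemctl start icecc-scheduler   # on one machine on the network
  sudo systemctl start iceccd            # on every machine that should help build
  # put the icecc compiler wrappers ahead of the real compilers; the directory
  # varies, e.g. /usr/lib/icecream/bin or /usr/lib/icecc/bin:
  export PATH="/usr/lib/icecream/bin:$PATH"
  ./mach build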
On Tue, Mar 7, 2017 at 6:09 PM, Xidorn Quan wrote:
> > This major version change is downgrade-incompatible, so IndexedDB and
> > DOM cache won't work in an older version if their profile has been
> > upgraded.
> > IndexedDB is also used internally, so stuff that depends on it likely
> > won't wor
On Tue, Mar 07, 2017 at 08:02:47PM -0500, Ben Kelly wrote:
As an example of why "back up the db" is harder than it sounds, you would
need to back up the entire storage subsystem. If you lose data in IDB, but
don't lose data in other sub-systems like cache API, then an origin can
find itself in a c
On 2017-03-07 7:09 PM, Mike Hommey wrote:
While talking about this... I think it's about time we had an actual
plan for data cleanup.
Last week, when the cloudflare thing happened, I went through the files
in my profile looking for all my password-manager-managed passwords and
the domains associ
On 3/7/17 4:25 PM, Chris Peterson wrote:
Can you just nice mach?
I seem to recall trying that and it not helping enough (on MacOS) with
the default "use -j8 on a 4-core machine" behavior. YMMV based on OS,
ratio of RAM to cores, and whatnot.
-Boris
I recommend that instead of classifying tests which fail > 30 times per week
as intermittent, we classify tests that fail more than some threshold
percentage of runs as intermittent. Otherwise, in a week with lots of checkins,
a test which isn't actually a problem could clear the threshold and caus