On Wed, 2017-10-11 at 16:17 +0200, Marc Glisse wrote:
> On Wed, 11 Oct 2017, David Malcolm wrote:
>
> > On Wed, 2017-10-11 at 11:18 +0200, Paulo Matos wrote:
> > >
> > > On 11/10/17 11:15, Christophe Lyon wrote:
> > > >
> > > > You can have a look at
> > > > https://git.linaro.org/toolchain/gcc-
On Tue, 10 Oct 2017, Paulo Matos wrote:
> This is a suggestion. I am keen to have corrections from people who use
> this on a daily basis and/or have a better understanding of each status.
Not mentioning them (oddly, I don't see anyone mentioning them)
makes me think you've not looked there, so all
On Wed, 11 Oct 2017, Martin Sebor wrote:
> I don't have a strong opinion on the definition of a Regression
> in this context but I would very much like to see status changes
> highlighted in the test results to indicate that something that
There are lots of things that are useful *if* you have so
On Oct 10 2017, Joseph Myers wrote:
> Anything else -> FAIL and new FAILing tests aren't regressions at the
> individual test level, but may be treated as such at the whole testsuite
> level.
An ICE FAIL is a regression, but this is always a new test.
Andreas.
--
Andreas Schwab, SUSE Labs,
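A tiny sketch of how a results checker might special-case that point
in Python (the "internal compiler error" marker is what the GCC
DejaGnu harness normally appends to the test name on an ICE, but
treat the exact string and the function name as assumptions here):

    def is_ice_fail(result_line):
        # e.g. "FAIL: gcc.dg/foo.c (internal compiler error)"
        return (result_line.startswith('FAIL')
                and 'internal compiler error' in result_line)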
PASS -> ANY ; Test moves away from PASS
No, that's only a regression if the destination result is FAIL (if it's
UNRESOLVED then there might be a separate regression: an execution test
becoming UNRESOLVED should be accompanied by the compilation becoming FAIL).
If it's XFAIL, it might formally
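A minimal Python sketch of that individual-test rule (the function name
is made up; the status strings are DejaGnu's):

    def is_individual_regression(old_status, new_status):
        # At the individual-test level only PASS -> FAIL counts by itself;
        # PASS -> UNRESOLVED should show up as a separate compilation FAIL.
        return old_status == 'PASS' and new_status == 'FAIL'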
On Wed, 11 Oct 2017, David Malcolm wrote:
On Wed, 2017-10-11 at 11:18 +0200, Paulo Matos wrote:
On 11/10/17 11:15, Christophe Lyon wrote:
You can have a look at
https://git.linaro.org/toolchain/gcc-compare-results.git/
where compare_tests is a patched version of the contrib/ script,
it calls
On Wed, 11 Oct 2017, Christophe Lyon wrote:
> * {PASS,UNSUPPORTED,UNTESTED,UNRESOLVED}-> XPASS
I don't think any of these should be considered regressions. It's good if
someone manually checks anything that's *consistently* XPASSing, to see if
the XFAIL should be removed or restricted to narro
On Wed, 11 Oct 2017, Paulo Matos wrote:
> On 10/10/17 23:25, Joseph Myers wrote:
> > On Tue, 10 Oct 2017, Paulo Matos wrote:
> >
> >> new test -> FAIL; New test starts as fail
> >
> > No, that's not a regression, but you might want to treat it as one (in the
> > sense that it's a re
On Wed, 2017-10-11 at 11:18 +0200, Paulo Matos wrote:
>
> On 11/10/17 11:15, Christophe Lyon wrote:
> >
> > You can have a look at
> > https://git.linaro.org/toolchain/gcc-compare-results.git/
> > where compare_tests is a patched version of the contrib/ script,
> > it calls the main perl script (
On 11 October 2017 at 07:34, Paulo Matos wrote:
> When someone adds a new test to the testsuite, isn't it supposed to not
> FAIL?
Yes, but sometimes it FAILs because the test is using a new feature
that only works on some targets, and the new test was missing the
right directives to make it UNSUPPORTED
On 11/10/17 11:15, Christophe Lyon wrote:
>
> You can have a look at
> https://git.linaro.org/toolchain/gcc-compare-results.git/
> where compare_tests is a patched version of the contrib/ script,
> it calls the main perl script (which is not the prettiest thing :-)
>
Thanks, that's useful. I w
On 11 October 2017 at 11:03, Paulo Matos wrote:
>
>
> On 11/10/17 10:35, Christophe Lyon wrote:
>>
>> FWIW, we consider regressions:
>> * any->FAIL because we don't want such a regression at the whole testsuite
>> level
>> * any->UNRESOLVED for the same reason
>> * {PASS,UNSUPPORTED,UNTESTED,UNRE
On 11/10/17 10:35, Christophe Lyon wrote:
>
> FWIW, we consider regressions:
> * any->FAIL because we don't want such a regression at the whole testsuite
> level
> * any->UNRESOLVED for the same reason
> * {PASS,UNSUPPORTED,UNTESTED,UNRESOLVED}-> XPASS
> * new XPASS
> * XFAIL disappears (may me
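Expressed as a rough Python sketch (the helper name is invented; the
actual Linaro comparison is done by the perl script mentioned above and
may differ in detail):

    def is_regression(old, new):
        # old/new are DejaGnu statuses, or None when the test is absent
        # from the corresponding run.
        if old == new:
            return False
        if new in ('FAIL', 'UNRESOLVED'):          # any -> FAIL / UNRESOLVED
            return True
        if new == 'XPASS' and old in (None, 'PASS', 'UNSUPPORTED',
                                      'UNTESTED', 'UNRESOLVED'):
            return True                            # ... -> XPASS, new XPASS
        if old == 'XFAIL' and new is None:         # XFAIL disappears
            return True
        return False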
On 11 October 2017 at 08:34, Paulo Matos wrote:
>
>
> On 10/10/17 23:25, Joseph Myers wrote:
>> On Tue, 10 Oct 2017, Paulo Matos wrote:
>>
>>> new test -> FAIL; New test starts as fail
>>
>> No, that's not a regression, but you might want to treat it as one (in the
>> sense that it's a
On 2017.10.11 at 08:22 +0200, Paulo Matos wrote:
>
>
> On 11/10/17 06:17, Markus Trippelsdorf wrote:
> > On 2017.10.10 at 21:45 +0200, Paulo Matos wrote:
> >> Hi all,
> >>
> >> It's almost 3 weeks since I last posted on GCC Buildbot. Here's an update:
> >>
> >> * 3 x86_64 workers from CF are now
On 10/10/17 23:25, Joseph Myers wrote:
> On Tue, 10 Oct 2017, Paulo Matos wrote:
>
>> new test -> FAIL; New test starts as fail
>
> No, that's not a regression, but you might want to treat it as one (in the
> sense that it's a regression at the higher level of "testsuite run should
On 11/10/17 06:17, Markus Trippelsdorf wrote:
> On 2017.10.10 at 21:45 +0200, Paulo Matos wrote:
>> Hi all,
>>
>> It's almost 3 weeks since I last posted on GCC Buildbot. Here's an update:
>>
>> * 3 x86_64 workers from CF are now installed;
>> * There's one scheduler for trunk doing fresh builds
On 2017.10.10 at 21:45 +0200, Paulo Matos wrote:
> Hi all,
>
> It's almost 3 weeks since I last posted on GCC Buildbot. Here's an update:
>
> * 3 x86_64 workers from CF are now installed;
> * There's one scheduler for trunk doing fresh builds for every Daily bump;
> * One scheduler doing incremen
On Tue, 10 Oct 2017, Paulo Matos wrote:
> ANY -> no test ; Test disappears
No, that's not a regression. Simply adding a line to a testcase will
change the line number that appears in the PASS / FAIL line for an
individual assertion therein. Or the names will change when e.g.
-std=c++2a
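One purely illustrative way a comparison script could cope with that
(not necessarily what contrib/compare_tests or the Linaro scripts
actually do) is to normalise line and column numbers out of the test
names before diffing the two runs:

    import re

    def normalize_test_name(name):
        # e.g. "gcc.dg/foo.c  (test for warnings, line 27)" and
        # "gcc.dg/foo.c:27:5: ..." both lose their line/column numbers,
        # so adding a line to the testcase no longer makes every later
        # assertion look like a disappeared test plus a new one.
        name = re.sub(r'line \d+', 'line N', name)
        name = re.sub(r':\d+(:\d+)?:', ':N:', name)
        return name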
Hi all,
It's almost 3 weeks since I last posted on GCC Buildbot. Here's an update:
* 3 x86_64 workers from CF are now installed;
* There's one scheduler for trunk doing fresh builds for every Daily bump;
* One scheduler doing incremental builds for each active branch;
* An IRC bot which is curren
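For anyone who hasn't used Buildbot, the pieces listed above would look
roughly like this in a master.cfg. Everything below (worker names,
scheduler names, timings) is an assumption for illustration, not the
actual gcc-buildbot configuration:

    from buildbot.plugins import schedulers, util, worker

    c = BuildmasterConfig = {}

    # Three x86_64 workers from the Compile Farm (names/passwords made up).
    c['workers'] = [worker.Worker('cf-x86_64-%d' % i, 'secret')
                    for i in (1, 2, 3)]

    c['schedulers'] = [
        # Fresh (from-scratch) trunk build once a day, around the Daily bump.
        schedulers.Nightly(name='trunk-daily-full',
                           builderNames=['trunk-full'],
                           hour=0, minute=30),
        # Incremental builds for commits on an active branch.
        schedulers.SingleBranchScheduler(
            name='trunk-incremental',
            change_filter=util.ChangeFilter(branch='trunk'),
            treeStableTimer=600,
            builderNames=['trunk-incremental']),
    ]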