> On 28 Aug 2025, at 16:10, Jeff Law via Gcc <gcc@gcc.gnu.org> wrote:
> 
> 
> 
> On 8/28/25 8:09 AM, Richard Earnshaw (lists) wrote:
>> On 28/08/2025 15:01, Iain Sandoe wrote:
>>> 
>>> 
>>>> On 28 Aug 2025, at 14:36, Jeff Law via Gcc <gcc@gcc.gnu.org> wrote:
>>>> 
>>>> 
>>>> 
>>>> On 8/28/25 5:42 AM, Richard Biener via Gcc wrote:
>>>>> On Thu, Aug 28, 2025 at 1:12 PM Rainer Orth via Gcc <gcc@gcc.gnu.org> 
>>>>> wrote:
>>>>>> 
>>>>>> Hi Sam,
>>>>>> 
>>>>>>> When a test fails with 'excess errors', there's often only one actual
>>>>>>> error (an excess "(error|warning|note):") and it'd be nice to not have
>>>>>>> to dig in the .log files to fish that out.
>>>>>> 
>>>>>> I think such a move would be a bad mistake.  Consider ICEs where you
>>>>>> have something like
>>>>>> 
>>>>>> FAIL: gcc.c-torture/compile/pr35318.c   -O0  (test for excess errors)
>>>>>> Excess errors:
>>>>>> /vol/gcc/src/hg/master/local/gcc/testsuite/gcc.c-torture/compile/pr35318.c:9:1:
>>>>>>  error: unrecognizable insn:
>>>>>> (insn 13 25 26 2 (parallel [
>>>>>>             (set (reg:DF 10 %o2 [orig:112 x ] [112])
>>>>>>                 (asm_operands/v:DF ("") ("=r,r") 0 [
>>>>>>                         (reg:SI 11 %o3 [orig:112 x+4 ] [112])
>>>>>>                         (mem/c:DF (plus:SI (reg/f:SI 30 %fp)
>>>>>>                                 (const_int -24 [0xffffffffffffffe8])) [3 
>>>>>> %sfp+-24 S8 A64])
>>>>>> [and many more lines...]
>>>>>> 
>>>>>> This would clutter the output beyond recognition, especially for a
>>>>>> torture test, which is run with several sets of optimization options.
>>>>>> 
>>>>>> Excess errors, like all others, always require further investigation.
>>>>>> In my experience, digging the full error messages out of the .log files
>>>>>> is usually the smallest part of that.  You often even have to rerun the
>>>>>> compilation manually to recover the parts that are filtered out by the
>>>>>> prune procs.
>>>>> I find the classification this would provide useful, just as I like the
>>>>> (internal compiler error) classification we already have.  I would of
>>>>> course not duplicate all of the above message, but report only
>>>>> '(unrecognizable insn)' in the above case.
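
For illustration, something like the following could boil the excess-errors
blob down to such a tag.  This is a rough Python sketch; the helper name and
the exact pattern are made up for the example, not existing testsuite code:

  import re

  def classify_excess_errors(lines):
      # Reduce a block of excess errors to a short tag such as
      # '(unrecognizable insn)' instead of echoing the whole RTL dump.
      # Assumes GCC's usual "file:line:col: error: <reason>" shape and
      # keeps only the reason from the first error line.
      for line in lines:
          m = re.search(r'error: (.+?):?$', line)
          if m:
              return '(%s)' % m.group(1)
      return None

so the result line would read 'FAIL: ... (test for excess errors)
(unrecognizable insn)' rather than carrying the whole insn dump.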
>>>> It's not the only consideration, but keep in mind that such output is not 
>>>> stable and will cause some headaches with scripting that compares two 
>>>> summary files.
>>> 
>>> This is something that I think we want to tackle anyway ...
>>>> 
>>>> Even with the bit of instability to the line number in the ICE message, I 
>>>> do find the ICE classification useful as well.
>>> 
>>> … the classification is useful, but the false-positive “new fail / old
>>> fail went away” pairs are a real nuisance ... hopefully we can do some
>>> brainstorming @cauldron about ways to deal with this (e.g. fuzzy matching
>>> or some smart way to discard the twinkling line numbers).
>>> 
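
For the line-number twinkle specifically, even a blunt normalisation pass
over the .sum lines before comparing would go a long way.  A minimal Python
sketch, purely illustrative:

  import re

  def normalize(sum_line):
      # Replace every ':<digits>' with ':N' so that a diagnostic which
      # merely moved a few lines does not show up as a new-fail/old-fail
      # pair.  Deliberately blunt: it also masks numbers that are not
      # line numbers, which a smarter fuzzy match would need to handle.
      return re.sub(r':\d+', ':N', sum_line)

Anything smarter than that probably does belong in a brainstorming session.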
>> Well really, the compare-tests script should report duplicate results as a 
>> problem as well, since
>> PASS: abcd
>> ...
>> PASS: abcd
>> is just a dup pass/fail waiting to happen.
> Yup.  A duplicate test name should be reported.  These cause major headaches
> when one instance passes but the other fails -- it looks like a regression to
> the comparison scripting we have.
> 
> Getting to the point where every test has a unique name would be good on many
> levels.  I can't help but think back to QMTest, which tried to solve the
> enumeration problem, among others.  I wasn't too supportive at the time, but
> in retrospect that was probably a mistake.
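
On the duplicate-name check: that part at least is cheap to prototype.  A
minimal sketch (Python, assuming GNU-style .sum lines; illustrative only, not
how compare-tests works today):

  import collections
  import re
  import sys

  def report_duplicates(sum_path):
      # Count each test name behind a PASS/FAIL/... marker in a .sum
      # file and print every name that occurs more than once.
      counts = collections.Counter()
      statuses = r'PASS|FAIL|XPASS|XFAIL|UNSUPPORTED|UNRESOLVED|UNTESTED'
      with open(sum_path) as f:
          for line in f:
              m = re.match(r'(?:%s): (.+)' % statuses, line)
              if m:
                  counts[m.group(1).strip()] += 1
      for name, n in sorted(counts.items()):
          if n > 1:
              print('duplicate (%dx): %s' % (n, name))

  if __name__ == '__main__':
      report_duplicates(sys.argv[1])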

FWIW I was (and still am) planning to volunteer to run a BoF on improving the
testsuite output (with the infrastructure we currently have) - there are some
concrete ideas, but it probably needs some brainstorming too.  This is distinct
from any BoF on improving test coverage or “CI” in general; it seems like there
should be some interest in such a discussion.

Iain

> 
> jeff
