Re: What is a regression?

2007-10-23 Thread Paolo Bonzini


I think this is a very important point.  If it didn't block a previous 
release, it shouldn't block the current release. It doesn't mean it 
shouldn't get looked at, but it also shouldn't be a blocker.  I think 
the high priority regressions should be ones that are new to 4.3 because 
they have clearly been either introduced or exposed by this release and 
need to be dealt with.


It might happen that a bug is triggered more easily in 4.3 than it is in 
4.1 or 4.2, but an artificial test case can be constructed that fails on 
all three releases.  See for example PR32004, where comment#16 has a 
testcase failing on 4.1/4.2 too.


Paolo


Re: What is a regression?

2007-10-23 Thread David Fang

I still think that is too strong a position. A good fraction
of compiler time is spent bugging out user code.. one could
even say the job of a compiler is not generating machine code,
but telling programmers they're idiots :)


Every compiler version I've tried has been telling me this for years. 
When can we expect some *positive* feedback from compilers? 
"Congratulations, your code is less of a spaghetti-mess than it was last 
revision, keep up the good work."  I smell a request for enhancement...


Fang


Fwd: [Announcing OpenMP 3.0 draft for public comment]

2007-10-23 Thread Tobias Burnus
For those interested in OpenMP.

Tobias

-- Forwarded Message --
From: Meadows, Lawrence F 
Date: Sun Oct 21 19:12:10 PDT 2007
Subject: [Omp] Announcing OpenMP 3.0 draft for public comment

21 October 2007

The OpenMP ARB is pleased to announce the release of a draft of Version
3.0 of the OpenMP specification for public comment. This is the first
update to the OpenMP specification since 2005.

This release adds several new features to the OpenMP specification,
including:
*   Tasking: move beyond loops with generalized tasks and support
complex and dynamic control flows.

*   Loop collapse: combine nested loops automatically to expose more
concurrency

*   Enhanced loop schedules: Support aggressive compiler
optimizations of loop schedules and give programmers better runtime
control over the kind of schedule used.

*   Nested parallelism support: better definition of and control
over nested parallel regions, and new API routines to determine nesting
structure


Larry Meadows, CEO of the OpenMP organization, states: "The creation of
OpenMP 3.0 has taken very hard work by a number of people over more than
two years. The introduction of a unified tasking model, allowing
creation and execution of unstructured work, is a great step forward for
OpenMP. It should allow the use of OpenMP on whole new classes of
computing problems."
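For illustration, the new tasking construct lets each node of a linked list
be handed off as an independent task.  The sketch below is written against
the draft syntax and compiled with -fopenmp; it is not taken from the
announcement itself.

#include <stdio.h>

struct node { struct node *next; int value; };

void process_list(struct node *head)
{
    #pragma omp parallel
    #pragma omp single
    {
        struct node *p;
        for (p = head; p != NULL; p = p->next) {
            /* Each iteration spawns a task; firstprivate(p) captures the
               current node pointer at task-creation time. */
            #pragma omp task firstprivate(p)
            printf("%d\n", p->value);
        }
    }
}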


The draft specification is available in PDF format from the
Specifications section of the OpenMP ARB website:

http://www.openmp.org

(Direct link:
http://www.openmp.org/drupal/mp-documents/spec30_draft.pdf)

As Chair of the OpenMP Language Committee, Mark Bull has led the effort
to expand the applicability of OpenMP while improving it for its current
uses. He states: "The OpenMP language committee has done a fine job
in producing this latest version of OpenMP.  It has been difficult to
resolve some tricky details and understand how tasks should propagate
across the language.  But I think we have come up with solid solutions,
and the team should be proud of their accomplishment."

The ARB warmly welcomes any comments, corrections and suggestions you
have for Version 3.0. For Version 3.0, we are soliciting comments
through an on-line forum, located at http://www.openmp.org/forum. The
forum is entitled Draft 3.0 Public Comment. You can also send email to
feedback at openmp.org if you would rather not use the forum. It is most
helpful if you can refer to the page number and line number where
appropriate.

The public comment period will close on 31 January 2008.



Re: Optimization of conditional access to globals: thread-unsafe?

2007-10-23 Thread Andrew Haley
Tomash Brechko writes:
 > On Mon, Oct 22, 2007 at 18:48:02 +0100, Andrew Haley wrote:
 > > Err, not exactly.  :)
 > > 
 > > See http://www.hpl.hp.com/personal/Hans_Boehm/c++mm/why_undef.html
 > 
 > Why, I'd say that page is about original races in the program, not
 > about what compiler should do with races that it introduces itself.
 > 
 > Still, "let's wait and see" is probably the best outcome that I can
 > expect from this discussion, so thanks anyway. ;)

It'll be interesting to see, when the draft recommendation is
published, whether your example would have been correct.

It will, to say the least, be nice to have a proper standard for the
memory model, so that we never have to have "is this pthreads program
defined or not?" arguments ever again.  :-)
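For reference, the kind of transformation being argued about looks roughly
like this (a sketch of the pattern with made-up names, not the testcase
from the earlier thread):

int flag;       /* tells other threads whether counter may be touched */
int counter;

void update(int value)
{
    if (flag)
        counter += value;
}

/* A speculative transformation the optimizers may currently perform:

       tmp = counter;
       if (flag)
           tmp += value;
       counter = tmp;       <-- unconditional store

   The write to counter now happens even when flag is 0, introducing a
   data race with any other thread that legitimately owns counter at
   that moment. */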

Andrew.


Re: What is a regression?

2007-10-23 Thread skaller

On Tue, 2007-10-23 at 03:05 -0400, David Fang wrote:
> > I still think that is too strong a position. A good fraction
> > of compiler time is spent bugging out user code.. one could
> > even say the job of a compiler is not generating machine code,
> > but telling programmers they're idiots :)
> 
> Every compiler version I've tried has been telling me this for years. 
> When can we expect some *positive* feedback from compilers? 
> "Congratulations, your code is less of a spaghetti-mess than it was last 
> revision, keep up the good work."  I smell a request for enhancement...

So you and your compiler are off to a co-dependency workshop? 

-- 
John Skaller 
Felix, successor to C++: http://felix.sf.net


Re: modified x86 ABI

2007-10-23 Thread Andi Kleen
Mark Shinwell <[EMAIL PROTECTED]> writes:

> - 64-bit arguments are aligned on 64-bit boundaries -- which may mean
>   that padding is inserted beneath them (for example if there is a
>   32-bit argument aligned to a 64-bit boundary beneath the 64-bit
>   argument).  No more padding than is required is inserted.

If you want to make it usable for Linux kernel based systems you would
need to change alignof(long long) to 4, matching i386.

Otherwise you cannot use the existing 32bit emulation layer and would
need a new one (this is much more than just a standard C library; it
also covers all ioctls and all system calls)

The same is likely true for at least the BSDs and Solaris.
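To make the difference concrete, here is a hypothetical ioctl-style
structure; the offsets below simply follow the respective alignment rules
and are not taken from any real kernel header:

#include <stddef.h>
#include <stdio.h>

struct msg {
    int       type;   /* offset 0 either way                          */
    long long data;   /* offset 4 if alignof(long long) == 4 (i386),  */
                      /* offset 8 if alignof(long long) == 8          */
};

int main(void)
{
    printf("offsetof(data) = %zu, sizeof(struct msg) = %zu\n",
           offsetof(struct msg, data), sizeof(struct msg));
    return 0;
}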

-Andi


Re: What is a regression?

2007-10-23 Thread Gabriel Dos Reis
Jason Merrill <[EMAIL PROTECTED]> writes:

| But in any case, nobody has code that relies on getting an error from
| a previous version of the compiler that would be broken by moving to
| 4.3. Only regressions on valid code seem serious enough to me to
| warrant blocking a release.

I strongly agree.

-- Gaby


How to avoid common subexpression elimination under this situation?

2007-10-23 Thread Bingfeng Mei
Hello,
I am porting GCC4.2.1 to our 2-issue VLIW processor and encounter the
following problem.

Source code:
#define MIN(a, b)  ((a) < (b) ? (a) : (b))
[...]

The tree dump:

<bb 2>:
  D.1510 = snr + (short int *) ((unsigned int) toneIx * 2);
  *D.1510 = (short int) ((short unsigned int) *D.1510 + 5);
  return;

}


Note that the address D.1510 is extracted as a common subexpression.

The final assembly code looks like this:

addw r1, r1, r1
addw r0, r0, r1
ldh r1, [r0]
addh r1, r1, #0x5
sbl [link]  :   sth r1, [r0]


Actually we should have generated the following code:

ldh r1, [r0, r1 << 1]
addh r1, r1, #0x5
sbl [link]  :   sth r1, [r0, r1 << 1]

This not only saves two cycles but also two instructions.


The trouble is with the common subexpression elimination in the tree
optimizers.  The simple address offset is supported by the instruction
set at no extra cost, and at the RTL level it is difficult to reverse
the optimization.  Our 3.4.6-based port actually generates the latter
code.  How can I avoid CSE in such a situation?  Any suggestion is
greatly appreciated.


Cheers,
Bingfeng Mei
Broadcom UK



Re: How to avoid common subexpression elimination under this situation?

2007-10-23 Thread Paolo Bonzini



At the RTL level it is difficult to reverse the optimization.  Our
3.4.6-based port actually generates the latter code.  How can I avoid
CSE in such a situation?  Any suggestion is greatly appreciated.


You are probably not defining the ADDRESS_COST or (if you have no
ADDRESS_COST hook at all) the RTX_COSTS hook properly.  Addressing mode
selection is done at the RTL level.  In 4.2.1 the RTL CSE pass will undo
the tree-level CSE (yes, I know it's confusing!) if the hooks are
implemented properly; in 4.3.0 it is fwprop that does the CSE undoing.
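A minimal sketch of what such a hook can look like in the port's backend
.c file, following the 4.2-era hook interface (the function and port names
here are made up, not from Bingfeng's port):

/* Tell the RTL passes that indexed addresses such as
   (plus (reg) (mult (reg) (const_int 2))) cost no more than a plain
   register address, so the address stays inside the memory reference.  */

static int
myport_address_cost (rtx addr ATTRIBUTE_UNUSED)
{
  /* All legitimate addressing modes are equally cheap on this machine.  */
  return 0;
}

#undef  TARGET_ADDRESS_COST
#define TARGET_ADDRESS_COST myport_address_cost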


Paolo


Problem when building glibc on IA64

2007-10-23 Thread 袁立威
I'm working on IA64 and I need to compile glibc with gcc 4.2.

I tried gcc version 4.2.2 to build glibc 2.5, 2.6 and 2.7; all failed with:

internal compiler error: RTL flag check: INSN_DELETED_P used with
unexpected rtx code 'plus' in output_constant_pool_1, at varasm.c:3393

I also tried gcc version 4.2.1 and got the same message.

I searched for this error with Google and found that it is a bug which
has already been fixed in gcc, so I'm confused.

If I'm wrong or have misunderstood something, please tell me.

Any suggestion is appreciated.

Thanks and regards.


[RFC PING] INSN attribute to enable/disable alternatives

2007-10-23 Thread Andreas Krebbel
Hi Ian,

have you had time to look at this? Or does anyone else like to
comment?

http://gcc.gnu.org/ml/gcc/2007-10/msg00092.html

Bye,

-Andreas-


Is peephole2 expansion recursive?

2007-10-23 Thread Brian Dominy
Is the new RTL of a define_peephole2 substitution subject to further
peepholing?  From the code, it appears the answer is no.  The
internals doc doesn't say.

Thanks,
Brian


Re: Optimization of conditional access to globals: thread-unsafe?

2007-10-23 Thread Paul Brook
On Monday 22 October 2007, Robert Dewar wrote:
> Erik Trulsson wrote:
> > It is also worth noting that just declaring a variable 'volatile' does
> > not help all that much in making it safer to use in a threded environment
> > if you have multiple CPUs.  (There is nothing that says that a multi-CPU
> > system has to have any kind of automatic cache-coherence.)
>
> The first sentence here could be misleading, there are LOTS of systems
> where there is automatic cache-coherence, and of course the use of
> 'volatile' on such systems does indeed help. If you are working on
> a system without cache-coherence, you indeed have big problems, but
> that's rarely the case, most multi-processor computers in common use
> do guarantee cache coherence.

IMHO the statement is correct, but the justification is incorrect.

While most multiprocessor machines do provide cache coherence, many do not 
guarantee strict ordering of memory accesses.  In practice you need both for 
correct operation, i.e. some form of explicit synchronisation is required on
most modern SMP systems.

Hardware cache coherence just makes this much quicker/easier to implement.
To a first approximation you need a pipeline flush rather than a cache flush.
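As a sketch of what that explicit synchronisation looks like in practice
(using GCC's __sync_synchronize builtin as the barrier; the
producer/consumer pair is made up for illustration):

volatile int data;
volatile int ready;

void producer(int value)
{
    data = value;
    __sync_synchronize();  /* compiler + hardware barrier: publish data
                              before the flag becomes visible */
    ready = 1;
}

int consumer(void)
{
    while (!ready)
        ;                  /* spin until the producer sets the flag */
    __sync_synchronize();  /* barrier again: the data load must happen
                              after the flag was seen set */
    return data;
}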

Paul


Re: What is a regression?

2007-10-23 Thread Mark Mitchell
Ian Lance Taylor wrote:
> Jason Merrill <[EMAIL PROTECTED]> writes:
> 
>> I think that the release process for recent releases has given undue
>> priority to bugs marked as regressions.  I agree that it's important
>> for things that worked in the previous release to keep working in the
>> new release.  But the regression tag is used for much more trivial
>> things.
> 
> We had a discussion of these sorts of issues at the GCC mini-summit
> back in April.  We didn't come to any conclusions.

There's a misconception in this thread that I need to address as the RM:
the notion of "release-blocker".  We don't really have release blockers,
because we don't have a way of making sure things get fixed.  If I
declare a bug a release blocker, we may just never have another release.
Companies can declare something a release-blocker because they can then
direct resources to go work on the issue.  We don't have that luxury.
When I look at the bug list immediately before making a release, I scan
the P1s and decide if I can live with myself after I push the button.

When I mark a PR as "P1", that means "This is a regression, and I think
it's embarrassing for us, as a community, to have this bug in a
release."  Unfortunately, every release goes out with P1 bugs open, so
we can't really call them "release blockers".  My judgment isn't always
great, and it's certainly not final: I'm willing for people to suggest
that P1s be downgraded.  I've suggested that people do that by putting a
note in the PR, and CC'ing me.

I agree that PR32252 (the example given by Jason) should not be P1.
I've downgraded it to P2.  (P2 means "regression users will notice on a
major platform, but not P1".  P3 means "not yet looked at".  Things like
ICE-after-valid-error are marked P4.  Things utterly outside the release
criteria are P5.)

One of the consistent problems with GCC has been that we disregard the
experience of users that aren't power users.  However, I disagree that
these things should be extremely low priority.  Compiler ICEs send the
message that what we're producing is not of high quality, so I think we
should take them seriously.

I do also think we should start weeding out some of the P2s that have
been around a long time.  If GCC 2.95 didn't ICE, but GCC 3.0+ have all
ICE'd, then, at this point, it's not much of a regression any more; it's
just an ICE that's "always" been there.

I do think that we should take regressions, in general, very seriously
-- especially cases where we generate wrong code.  One of the consistent
complaints I have heard is that people fear GCC upgrades and the
perception is that this is worse than with other compilers.  Certainly,
I've talked to other compiler vendors who claim that they consider any
wrong-code generation bug a release blocker.

Now, all that said, of course I think that other bugs are important too,
and I'm all for fixing them.  But, in terms of looking at a release and
deciding how ready-to-go it is, I think regressions are as reasonable a
measure as any.

Thanks,

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


register variables: list?

2007-10-23 Thread skaller
In

http://gcc.gnu.org/onlinedocs/gcc-4.2.2/gcc/Explicit-Reg-Vars.html#Explicit-Reg-Vars

it explains how to use register variables .. but doesn't list them.

Is there a document somewhere which lists

a) each CPU macro name
b) all the registers supported

?

I need to get the stack pointer when __builtin_frame_address(0) isn't
working .. on amd64 this:


register void *stack asm ("%rsp");

appears to work. Also this is the current stack pointer .. not
the frame pointer, which could be different.

-- 
John Skaller 
Felix, successor to C++: http://felix.sf.net


RE: register variables: list?

2007-10-23 Thread Dave Korn
On 23 October 2007 18:25, skaller wrote:

> In
> 
> http://gcc.gnu.org/onlinedocs/gcc-4.2.2/gcc/Explicit-Reg-Vars.html#Explicit-Reg-Vars
> 
> it explains how to use register variables .. but doesn't list them.
> 
> Is there a document somewhere which lists
> 
> a) each CPU macro name

  Don't at all understand what you're referring to there.  Macro?  You mean a
#define?  Name of the cpu?  I don't see anything referring to the cpu name in
the explicit reg vars docs.

> b) all the registers supported

  Nope, but it should be the standard notation as defined by the manufacturer
in their reference manuals and as accepted by gas for the target in question.
If you absolutely have to know for certain, look at
gcc/config/<cpu>/ for the various *REGISTER_NAMES* macros
(same as what -ffixed-REGNAME accepts).

> I need to get the stack pointer when __builtin_frame_address(0) isn't
> working .. on amd64 this:
> 
> 
> register void *stack asm ("%rsp");
> 
> appears to work. 

  Yep.  I'd recommend using it like so:

#define STACK_POINTER_VALUE ({ register void *stack asm ("%rsp"); stack; })

just for safety's sake... you wouldn't want to go and accidentally use it as
an lvalue, now, would you?  :-O

> Also this is the current stack pointer .. not
> the frame pointer, which could be different.

  Yep, almost inevitably so I'm afraid.  Maybe a better solution than making
__builtin_frame_address(0) return an artificial address in functions where
there is no frame pointer would be just to have a
__builtin_initial_frame_pointer_offset() function that returns the size of the
frame, but even that wouldn't help if you're in the middle of a function call
sequence or something where gcc is pushing args to the stack.

cheers,
  DaveK
-- 
Can't think of a witty .sigline today



Re: What is a regression?

2007-10-23 Thread Jason Merrill

Mark Mitchell wrote:

When I mark a PR as "P1", that means "This is a regression, and I think
it's embarrassing for us, as a community, to have this bug in a
release."  Unfortunately, every release goes out with P1 bugs open, so
we can't really call them "release blockers".  My judgment isn't always
great, and it's certainly not final: I'm willing for people to suggest
that P1s be downgraded.  I've suggested that people do that by putting a
note in the PR, and CC'ing me.


OK, thanks for clarifying that.  Are the P1s the only thing you consider
when deciding whether or not to make a release?



I agree that PR32252 (the example given by Jason) should not be P1.
I've downgraded it to P2.  (P2 means "regression users will notice on a
major platform, but not P1".  P3 means "not yet looked at".  Things like
ICE-after-valid-error are marked P4.  Things utterly outside the release
criteria are P5.)


How do non-regressions fit into this scheme?


One of the consistent problems with GCC has been that we disregard the
experience of users that aren't power users.  However, I disagree that
these things should be extremely low priority.  Compiler ICEs send the
message that what we're producing is not of high quality, so I think we
should take them seriously.


Absolutely.  But not as seriously as wrong-code or 
ice-on-valid/rejects-valid bugs, IMO.



I do also think we should start weeding out some of the P2s that have
been around a long time.  If GCC 2.95 didn't ICE, but GCC 3.0+ have all
ICE'd, then, at this point, it's not much of a regression any more; it's
just an ICE that's "always" been there.


Yes.


I do think that we should take regressions, in general, very seriously
-- especially cases where we generate wrong code.  One of the consistent
complaints I have heard is that people fear GCC upgrades and the
perception is that this is worse than with other compilers.  Certainly,
I've talked to other compiler vendors who claim that they consider any
wrong-code generation bug a release blocker.


I absolutely agree for regressions on valid code.  But regressions on 
invalid code, or on valid code that we've never accepted, shouldn't 
affect people upgrading.  That's the distinction I want to make.



Now, all that said, of course I think that other bugs are important too,
and I'm all for fixing them.  But, in terms of looking at a release and
deciding how ready-to-go it is, I think regressions are as reasonable a
measure as any.


I think the "critical regressions" you've used in past status
updates (wrong-code, ice-on-valid-code, rejects-valid regressions) are a 
good measure, but not other regressions; non-critical regressions should 
not have higher priority than serious non-regression bugs.  We have had 
a bunch of C++ wrong-code bugs hanging around for a long time, and I 
don't think ice-on-invalid-code regressions should take priority over those.


I was thinking of a default priority scheme like

new/uncategorized bugs are P0.
critical regressions from the previous release series are P1.
other critical regressions and wrong-code bugs are P2.
non-critical regressions and ice-on-valid-code/rejects-valid bugs are P3.
most other bugs are P4.
enhancements are P5.

adjusting priorities up or down based on how often the bug is 
encountered, or how much work it is to fix.


Jason


Re: What is a regression?

2007-10-23 Thread Joe Buck
On Tue, Oct 23, 2007 at 02:20:24PM -0400, Jason Merrill wrote:
> Mark Mitchell wrote:
> >When I mark a PR as "P1", that means "This is a regression, and I think
> >it's embarrassing for us, as a community, to have this bug in a
> >release."  Unfortunately, every release goes out with P1 bugs open, so
> >we can't really call them "release blockers".  My judgment isn't always
> >great, and it's certainly not final: I'm willing for people to suggest
> >that P1s be downgraded.  I've suggested that people do that by putting a
> >note in the PR, and CC'ing me.
> 
> OK, thanks for clarifying that.  Are the P1s the only thing you consider 
>  when deciding whether or not to make a release?
> 
> >I agree that PR32252 (the example given by Jason) should not be P1.
> >I've downgraded it to P2.  (P2 means "regression users will notice on a
> >major platform, but not P1".  P3 means "not yet looked at".  Things like
> >ICE-after-valid-error are marked P4.  Things utterly outside the release
> >criteria are P5.)
> 
> How do non-regressions fit into this scheme?

I would like to see more attention paid to wrong-code bugs that aren't
marked as "regression".  Doesn't mean they should necessarily be P1, but
we tend to ignore them.


help in deciphering a call RTL instruction

2007-10-23 Thread Sunzir Deepur
Hi list,

I need to understand some call RTL instructions,
but I have difficulty understanding some of them.

Here are two examples of the challenging RTL instructions:

(call (mem:QI (symbol_ref:SI (\"stpcpy\") [flags 0x41]
        <function_decl 0x401f000 __builtin_stpcpy>) [0 S1 A8])
    (const_int 8 [0x8]))

Q: does this instruction call the function stpcpy or __builtin_stpcpy ?

(call (mem:QI (symbol_ref:SI (\"check_match.7758\") [flags 0x3]
        <function_decl 0x404a3e80 check_match>) [0 S1 A8])
    (const_int 0 [0x0]))

Q: does this instruction call the function check_match.7758 or check_match ?


In general, why do we sometimes have calls like these which (seemingly)
have different callee targets?

Thank You!
sunzir


Re: GCC 4.1.1 unwind support for arm-none-linux-gnueabi

2007-10-23 Thread Daniel Jacobowitz
On Tue, Oct 23, 2007 at 09:54:55AM +0800, Franklin wrote:
> Hi, list.
> 
> Right now I'm building a new toolchain using an old one provided by our vendor.  I
> have built binutils and gcc-4.1.1 successfully.  However while building 
> glibc-2.4 it always told me:
> 
> running configure fragment for nptl/sysdeps/pthread
> checking for forced unwind support... no
> configure: error: forced unwind support is required

I suspect you are not using the ports repository, or have otherwise
not got an adequate glibc configuration.  Try the latest release of
glibc and be sure to include ports in --enable-add-ons.

> I saw that gcc could be built with unwind support, but it is turned off by default.

LIBUNWIND is not related to this error message.


-- 
Daniel Jacobowitz
CodeSourcery


Re: help in deciphering a call RTL instruction

2007-10-23 Thread Eric Botcazou
> Here are two examples of the challenging RTL instructions:
>
> (call (mem:QI (symbol_ref:SI (\"stpcpy\") [flags 0x41]
>         <function_decl 0x401f000 __builtin_stpcpy>) [0 S1 A8])
>     (const_int 8 [0x8]))
>
> Q: does this instruction call the function stpcpy or __builtin_stpcpy ?

The compiler will emit a call to stpcpy in the assembly file (modulo further 
symbol mangling).  You cannot really call __builtin_stpcpy anyway since it's 
a builtin.

>  (call (mem:QI (symbol_ref:SI (\"check_match.7758\") [flags 0x3]
>         <function_decl 0x404a3e80 check_match>) [0 S1 A8])
> (const_int 0 [0x0]))
>
> Q: does this instruction call the function check_match.7758 or check_match
> ?

Same as above, the pure RTL part gives the symbol name.

> In general, why do we sometimes have calls like these which (seemingly)
> have different callee targets?

In the former case, because it's a builtin.  More generally, the RTL back-end 
is allowed to mangle symbol names at its pleasure.  You can display the RTL 
associated with a FUNCTION_DECL by invoking debug_tree on it from within GDB.
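As a small illustration of how the first case arises (assuming a
glibc-style environment where stpcpy is declared; the example itself is
made up):

#define _GNU_SOURCE
#include <string.h>

/* stpcpy is recognized as a builtin, so the front end refers to the
   FUNCTION_DECL __builtin_stpcpy internally; when no inline expansion is
   done, the emitted RTL call still targets the plain symbol stpcpy, as in
   the dump above. */
char buf[64];

char *demo(const char *s)
{
    return stpcpy(buf, s);   /* assembly will contain a call to stpcpy */
}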

-- 
Eric Botcazou


Re: help in deciphering a call RTL instruction

2007-10-23 Thread Revital1 Eres

>  (call (mem:QI (symbol_ref:SI (\"check_match.7758\") [flags 0x3]
>         <function_decl 0x404a3e80 check_match>) [0 S1 A8])
> (const_int 0 [0x0]))
>
> Q: does this instruction call the function check_match.7758 or
> check_match?

I think that when we do function specialization/cloning (for the IPA
constant propagation pass, for instance) we also create a new name for
the new version of the function, so that instead of calling the
original function we call the new version.
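A made-up illustration of the kind of cloning being described (in reality
a function this small would simply be inlined; the point is only the
uniquified assembler name the clone receives):

/* With optimization and IPA constant propagation, a static function
   called with a known constant may be cloned and specialized; the clone
   keeps the source-level name but gets a uniquified assembler name
   (something like foo.1234), which is what then shows up in the
   symbol_ref, while the FUNCTION_DECL still prints the original name. */
static int foo(int x)
{
    return x * x + 1;
}

int caller(void)
{
    return foo(7);
}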

Revital




RE: register variables: list?

2007-10-23 Thread skaller

On Tue, 2007-10-23 at 18:44 +0100, Dave Korn wrote:
> On 23 October 2007 18:25, skaller wrote:
> 
> > In
> > 
> > http://gcc.gnu.org/onlinedocs/gcc-4.2.2/gcc/Explicit-Reg-Vars.html#Explicit-Reg-Vars
> > 
> > it explains how to use register variables .. but doesn't list them.
> > 
> > Is there a document somewhere which lists
> > 
> > a) each CPU macro name
> 
>   Don't at all understand what you're referring to there.  Macro?  You mean a
> #define?  Name of the cpu?  I don't see anything referring to the cpu name in
> the explicit reg vars docs.

Sorry, I thought it was obvious.. I want to write:

#if defined(AMD64)
register void *stack asm ("%rsp");
#elif defined(X86)
register void *stack asm ("%esp");
#elif defined(SPARC)
register void *stack asm ("%sp");
#else
#error "CPU NOT SUPPORTED"
#endif

Hmm .. actually I just checked in more depth how Boehm does this,
and it is clever enough to avoid all this:

int * nested_sp()
{
int dummy;
return(&dummy);
}

Ouch .. I should have thought of that! And he's using
setjmp to save registers plus other hacks I can steal.. :)

However I may want to reserve a register, so I still
need a list with ABI details (which ones are caller-saved).
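For example, reserving a register globally might look like this (the
register choice and variable name are only illustrative; every object
linked into the program must then be built with -ffixed-r15 so nothing
else allocates it):

/* r15 is callee-saved in the SysV AMD64 ABI, so code that was not built
   with -ffixed-r15 will at least restore it around calls. */
register void *reserved_ptr asm ("r15");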

-- 
John Skaller 
Felix, successor to C++: http://felix.sf.net


Re: What is a regression?

2007-10-23 Thread Mark Mitchell
Jason Merrill wrote:
> Mark Mitchell wrote:
>> When I mark a PR as "P1", that means "This is a regression, and I think
>> it's embarrassing for us, as a community, to have this bug in a
>> release."  Unfortunately, every release goes out with P1 bugs open, so
>> we can't really call them "release blockers".

> OK, thanks for clarifying that.  Are the P1s the only thing you consider
>  when deciding whether or not to make a release?

Roughly speaking, yes.  Of course, if someone were to raise an issue
with me in some other way, then I would consider that too.  But,
generally, I look at the open P1s to determine whether or not quality is
at an acceptable level.

>> I agree that PR32252 (the example given by Jason) should not be P1.
>> I've downgraded it to P2.  (P2 means "regression users will notice on a
>> major platform, but not P1".  P3 means "not yet looked at".  Things like
>> ICE-after-valid-error are marked P4.  Things utterly outside the release
>> criteria are P5.)
> 
> How do non-regressions fit into this scheme?

They don't.  Historically (well, as long as I can remember), anything
that was a non-regression was automatically P5.

That's not to say that there are no important non-regression bugs.  But,
my feeling has been that if the bug was not a regression, then we've had
a demonstrably useful compiler up until now, so it wasn't something that
I needed to worry about for the next release.

>> these things should be extremely low priority.  Compiler ICEs send the
>> message that what we're producing is not of high quality, so I think we
>> should take them seriously.
> 
> Absolutely.  But not as seriously as wrong-code or
> ice-on-valid/rejects-valid bugs, IMO.

Totally agreed.

In theory, the "cost" of a bug is some product-like function of its
prevalence (i.e., the number of people who run into it) and its severity
(i.e., the impact it has on those that encounter it).  I do try to make
that judgment in some cases: if I see a bug that looks unlikely to
affect anyone in the real world, then I might not mark it P1, even
though it's a wrong-code regression.  But, prevalence is hard to
measure, so, in general, I mark wrong-code regressions as P1.

> I absolutely agree for regressions on valid code.  But regressions on
> invalid code, or on valid code that we've never accepted, shouldn't
> affect people upgrading.  That's the distinction I want to make.

That's a fair point and that's why I downgraded the issue you mentioned
from P1 to P2.

> I think the "critical regressions" you've used in past status
> updates (wrong-code, ice-on-valid-code, rejects-valid regressions) are a
> good measure, but not other regressions; non-critical regressions should
> not have higher priority than serious non-regression bugs.  We have had
> a bunch of C++ wrong-code bugs hanging around for a long time, and I
> don't think ice-on-invalid-code regressions should take priority over
> those.

I think we need to agree on what the priority scheme is for.  In a
commercial setting, we might have bug-killing droids who came in each
morning, picked the highest priority bug off the list, and zapped it.
But, we don't -- we have people who pick whatever is of interest to
them.  So, I've used the priority scheme as a way to tell me how well
we're doing relative to the previous release.

In other words, I'm not trying to suggest that if you want to maximize
goodness of the compiler you pick the highest priority bug and fix it.
Then again, if you fix a P1, you're probably doing a good thing.  When
we run out of P1s, come see me. :-)  We can fight over whether to work
on the P2s, or something else.

Thanks,

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713