gcc regression on Darwin

2007-04-01 Thread Dominique Dhumieres
While regtesting my build of the 20070330 snapshot (Darwin7), I got a lot
(~100) of regressions: gcc.c-torture/execute/builtins/memcpy-chk.c, ...,
gcc.c-torture/execute/built-in-setjmp.c. Looking at the list, I found
that this started on 20070325 for Darwin8:

http://gcc.gnu.org/ml/gcc-testresults/2007-03/msg01225.html

Note that the errors are all of the following kind:

output is:
/var/tmp//cceFaqh1.s:2000:non-relocatable subtraction expression, "L006$pb" minus "LSJR191"
/var/tmp//cceFaqh1.s:2000:symbol: "LSJR191" can't be undefined in a subtraction expression
...

I tried to see if this regression had been reported, but found nothing.
Did I miss something, or should I file a PR? If yes, what would be the best
format?

TIA

Dominique



Re: gcc regression on Darwin

2007-04-01 Thread Andreas Schwab
[EMAIL PROTECTED] (Dominique Dhumieres) writes:

> If yes what would be the best format?

See .

Andreas.

-- 
Andreas Schwab, SuSE Labs, [EMAIL PROTECTED]
SuSE Linux Products GmbH, Maxfeldstraße 5, 90409 Nürnberg, Germany
PGP key fingerprint = 58CA 54C7 6D53 942B 1756  01D3 44D5 214B 8276 4ED5
"And now for something completely different."


RE: how to convince someone about migrating from gcc-2.95 to gcc-3.x

2007-04-01 Thread Dave Korn
On 01 April 2007 07:08, ganesh subramonian wrote:

> Hi
>  I work in a company where we have been using gcc-2.95.4 (based cross
>  compiler) for compiling our code. Most of the code is written in c++
>  and makes extensive use of the stl libraries. We would not be changing
>  our operating system or processor architecture (so portability is not
>  a very good reason to give). There seems to be a lot of changes since
>  gcc-2.95 as a result of which we get a large number of errors when
>  trying to compile the code with gcc-3.x.

  Yes, this is known.  The C++ language standard was still changing in the
2.95->3.x timescale, and GCC moved a lot closer to strict conformance.  See

http://gcc.gnu.org/gcc-3.4/changes.html#cplusplus

  You may find that using a 3.3 series compiler requires a good deal less
rewriting than a 3.4 series compiler; which version did you try?
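
  To make the flavour of those conformance changes concrete, here is a small
sketch (function names invented for illustration) of 2.95-era idioms rewritten
in the standard form that 3.x and 3.4 insist on:

```cpp
#include <vector>

// g++ 2.95 tolerated pre-standard idioms such as <vector.h>-style headers,
// unqualified names from namespace std, and missing "typename" on dependent
// names; 3.x rejects them.  These are the standard-conforming spellings:
std::vector<int> make_squares(int n) {       // std:: qualification required
    std::vector<int> v;
    for (int i = 0; i < n; ++i)
        v.push_back(i * i);
    return v;
}

template <class Container>
typename Container::value_type               // "typename" required on the
first_or_zero(const Container &c) {          // dependent name by 3.4
    return c.empty() ? typename Container::value_type() : c.front();
}
```

Code that still uses the pre-standard spellings is what produces the wall of
errors described above.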

>  If we were to put in quite some effort to get the entire code-base
>  compiling in gcc-3.x, what advantages would we get as a result of this
>  (in terms of code-size reduction or faster execution of compiled
>  binaries). Are there any new glibc,libstdc++ (or other libraries)
>  features which are there in 3.x/4.x which are unavailable in 2.95?
>  If you were asked to convince someone about moving from gcc-2.95 to
>  gcc-3.x how would you people do it?

  Well, one of the advantages would be that you'd end up fixing all the
non-standardisms in your code base, which will be of long-term benefit to its
maintainability and portability, but that's hard to express in figures.  As
time goes by, new employees in your firm are going to know standard C++ and
find some of your old code confusing; mistakes could arise if you keep all
your old code without ever brushing it up a bit.

  New features: the STL implementation in 3.3 is far more complete, correct,
and efficient than whatever 2.95 had.  I don't know much else off the top of
my head; I'm not a C++ expert.  For other changes and improvements, you should
browse the release announcement/changelist pages (such as the one linked
above) at the gcc website; they'll be much more comprehensive than anything I
can say here.

  As to code size reduction and better code generation overall: yes, you can
expect to gain from moving up to gcc-3.  It has more and better optimisations,
and it has old bugs fixed compared to 2.95.  Of course, there are also
conceivably new bugs; no change is entirely without risk.  The problem is,
there are no hard numbers we can offer you; the effect of adding better
optimisations very much depends on the form and structure of your codebase and
how much opportunity there is for such optimisations to be made.

  The only real thing that would make your case watertight would be for you
to actually do some porting work: build at least part of your codebase with a
new version of the compiler, and run profiling and tests on the generated
code to get actual hard measurements of code size and runtimes.  It's even
conceivable you might find the old compiler generating better output for your
particular code.  That might shift the balance of your cost-benefit argument
back toward hanging on to your old compiler, perhaps until there is more
resource available to put some time into refactoring your code to play better
with a newer compiler version.  It's a hard call; there are always a lot of
uncertainties in changing some piece of your infrastructure, which is why
people's instinct is often not to fix what they can't see is visibly broken.
That's why real hard numbers are most valuable in making the call.

cheers,
  DaveK
-- 
Can't think of a witty .sigline today



Re: how to convince someone about migrating from gcc-2.95 to gcc-3.x

2007-04-01 Thread Paul Brook
On Sunday 01 April 2007 12:01, Dave Korn wrote:
> On 01 April 2007 07:08, ganesh subramonian wrote:
> > Hi
> >  I work in a company where we have been using gcc-2.95.4 (based cross
> >  compiler) for compiling our code. Most of the code is written in c++
> >  and makes extensive use of the stl libraries. We would not be changing
> >  our operating system or processor architecture (so portability is not
> >  a very good reason to give). There seems to be a lot of changes since
> >  gcc-2.95 as a result of which we get a large number of errors when
> >  trying to compile the code with gcc-3.x.
>
>   Yes, this is known.  The C++ language standard was still changing in the
> 2.95->3.x timescale, and GCC moved a lot closer to strict conformance.  See
>
> http://gcc.gnu.org/gcc-3.4/changes.html#cplusplus
>
>   You may find that using a 3.3 series compiler requires a good deal less
> rewriting than a 3.4 series compiler; which version did you try?

If you're already switching compilers, moving to an already obsolete release 
(3.3) seems a strange choice. At this point I'd recommend skipping 3.x 
altogether and going straight to gcc4.1/4.2.

Many of the improvements in c++ code generation were a result of tree-ssa,
which you only get with 4.x.

Paul


RE: how to convince someone about migrating from gcc-2.95 to gcc-3.x

2007-04-01 Thread Dave Korn
On 01 April 2007 12:59, Paul Brook wrote:

> On Sunday 01 April 2007 12:01, Dave Korn wrote:
>> On 01 April 2007 07:08, ganesh subramonian wrote:
>>> Hi
>>>  I work in a company where we have been using gcc-2.95.4 (based cross
>>>  compiler) for compiling our code. Most of the code is written in c++
>>>  and makes extensive use of the stl libraries. We would not be changing
>>>  our operating system or processor architecture (so portability is not
>>>  a very good reason to give). There seems to be a lot of changes since
>>>  gcc-2.95 as a result of which we get a large number of errors when
>>>  trying to compile the code with gcc-3.x.
>> 
>>   Yes, this is known.  The C++ language standard was still changing in the
>> 2.95->3.x timescale, and GCC moved a lot closer to strict conformance.  See
>> 
>> http://gcc.gnu.org/gcc-3.4/changes.html#cplusplus
>> 
>>   You may find that using a 3.3 series compiler requires a good deal less
>> rewriting than a 3.4 series compiler; which version did you try?
> 
> If you're already switching compilers, moving to an already obsolete release
> (3.3) seems a strange choice. At this point I'd recommend skipping 3.x
> altogether and going straight to gcc4.1/4.2.
> 
> Many of the improvements in c++ code generation were as a result of
> tree-ssa, you only get with 4.x.

  It is however a bigger step change, and a correspondingly bigger risk.
There are arguments in favour of not running with the bleeding edge when what
you want is simply a stable production compiler that will build your own
particular codebase.  It might be worth doing a three-way comparison of
generated code size and performance to give some idea of what extra benefits
were attached to those extra risks.


cheers,
  DaveK
-- 
Can't think of a witty .sigline today



Re: how to convince someone about migrating from gcc-2.95 to gcc-3.x

2007-04-01 Thread Marcin Dalecki


Message written on 2007-04-01 at 13:58 by Paul Brook:

> If you're already switching compilers, moving to an already obsolete
> release (3.3) seems a strange choice. At this point I'd recommend
> skipping 3.x altogether and going straight to gcc4.1/4.2.
>
> Many of the improvements in c++ code generation were as a result of
> tree-ssa, you only get with 4.x.

I wouldn't recommend it. One has to adapt gradually to the patience
required to use the later compiler editions.

➧ Marcin Dalecki ❖




Re: How can I get VRP information for an RTX?

2007-04-01 Thread Richard Guenther

On 4/1/07, David Daney <[EMAIL PROTECTED]> wrote:

I am looking at how the MIPS backend handles division.  For the compiler
configuration in question (mipsel-linux) division operations trap on
division by zero.  This is handled in mips_output_division in mips.c
where we unconditionally emit a conditional trap.

I would like to change it so that if the divisor is not zero, the
conditional trap would be omitted.  I am looking at this small test program:

int divtest(int x, int y)
{
if (y == 0)
return 45;
else
return x / y;
}
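
In C terms, the conditional trap being discussed amounts to something like the
following sketch (checked_div is a hypothetical name used for illustration,
not GCC code):

```cpp
// Sketch: what the MIPS backend's conditional trap amounts to in C terms.
// The backend emits a trap-if-equal check after the divide; here we model
// it with __builtin_trap().  In divtest() below, VRP knows y != 0 on the
// else-branch, so the guard inside checked_div is provably dead there.
int checked_div(int x, int y) {
    if (y == 0)
        __builtin_trap();   // currently emitted even when y is known non-zero
    return x / y;
}

int divtest(int x, int y) {
    if (y == 0)
        return 45;
    else
        return checked_div(x, y);   // the zero check inside is redundant here
}
```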

If I set a breakpoint in mips_output_division, I can print out the
operands for the division operation:
(gdb) p operands[2]
$6 = (rtx) 0x2e155aa0
(gdb) pr
(reg/v:SI 5 $5 [orig:196 y ] [196])

Q1: Is it possible to get to the VRP information from this rtx?  How?

If the VRP information is available, it would be nice to use a define_expand
for the division/conditional trap and perhaps let the compiler schedule
the trap.  However, the define_expand documentation states that the
condition for a define_expand cannot depend on the data in the insn
being matched.  That would seem to imply that the expansion cannot
depend on the VRP information.

Q2:  Is that correct, and why?

Q3: Would it be better to do this at the tree level instead of rtl?


At the moment none of this is possible because VRP information is
not retained after the VRP pass, nor is it carried over to RTL.  The
easiest possibility I see is to allow marking *_DIV_EXPR as
TREE_THIS_NOTRAP (as we currently can do only for memory
references) and translate this to a scheme following the rtl
MEM_NOTRAP_P flag.

I believe this should be possible right now, apart from ensuring the
flag doesn't get lost somewhere, since these flags are not currently
documented for use with divisions.

Richard.


Re: how to convince someone about migrating from gcc-2.95 to gcc-3.x

2007-04-01 Thread Richard Guenther

On 4/1/07, Marcin Dalecki <[EMAIL PROTECTED]> wrote:


> Message written on 2007-04-01 at 13:58 by Paul Brook:
>
>> If you're already switching compilers, moving to an already obsolete
>> release (3.3) seems a strange choice. At this point I'd recommend
>> skipping 3.x altogether and going straight to gcc4.1/4.2.
>>
>> Many of the improvements in c++ code generation were as a result of
>> tree-ssa, you only get with 4.x.
>
> I wouldn't recommend it. One has to adapt gradually to the patience
> required to use the later compiler editions.


At least you'd have the chance that reported bugs may eventually get
fixed - with a 3.x (or even 4.0.x) release there's no chance of that unless
you are willing to pay (and find) someone to do it.

Richard.


Re: error: "no newline at end of file"

2007-04-01 Thread Martin Michlmayr
We have some real numbers about these new errors now.  I've compiled
the whole Debian archive in the last week for Gelato to test GCC 4.3
on IA64.  Out of just slightly under 7000 packages in Debian, we have
the following new failures:

missing newline: 42
error: "xxx" redefined: 33
extra tokens at end of #else directive: 9

undefined reference: because of the change of the meaning of "inline": 4
multiple definition: probably due to the change of "inline", linking
against:
 Apache: 1
 libc6: 12
 glib: 11

C++ header cleanup: 370 (I'll start filing bugs on packages...)
first argument of 'int main(unsigned int, char* const*)' should be 'int': 3
error: changes meaning of: 68
bugs I still need to investigate: 77

With regards to the pedwarnings, I suggest the following:

 - The "no newline" and "xxx redefined" pedwarnings should be converted
   into normal warnings.  Rationale: no newline doesn't harm anyone, and
   there are quite a few programs that would fail because of an error
   that many agree is too strict.  Redefining something with -D seems
   like a useful feature, and again quite a few applications do this.
 - "extra tokens at end of #else directive": this is easy enough to
   fix and only a few programs do this.  Let's keep this as a pedwarn.
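
For illustration, the "#else" case the last bullet refers to looks like this
(a hypothetical snippet; the macro names are invented):

```cpp
// "extra tokens at end of #else directive": writing the macro name after
// #else as a bare token is not standard C; wrap it in a comment instead.
#define USE_FAST_PATH 0

#if USE_FAST_PATH
int chosen_path() { return 1; }
#else  /* USE_FAST_PATH */    /* not:  "#else USE_FAST_PATH"  */
int chosen_path() { return 0; }
#endif

// "no newline at end of file" is purely lexical by comparison: the last
// line of the file lacks a terminating '\n'; the token stream is unchanged.
```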

If people agree, I'll check in the following patch.  OK?

Index: libcpp/macro.c
===
--- libcpp/macro.c  (revision 123380)
+++ libcpp/macro.c  (working copy)
@@ -1622,11 +1622,11 @@
 
   if (warn_of_redefinition (pfile, node, macro))
{
- cpp_error_with_line (pfile, CPP_DL_PEDWARN, pfile->directive_line, 0,
+ cpp_error_with_line (pfile, CPP_DL_WARN, pfile->directive_line, 0,
   "\"%s\" redefined", NODE_NAME (node));
 
  if (node->type == NT_MACRO && !(node->flags & NODE_BUILTIN))
-   cpp_error_with_line (pfile, CPP_DL_PEDWARN,
+   cpp_error_with_line (pfile, CPP_DL_WARN,
 node->value.macro->line, 0,
 "this is the location of the previous definition");
}
Index: libcpp/lex.c
===
--- libcpp/lex.c(revision 123380)
+++ libcpp/lex.c(working copy)
@@ -854,7 +854,7 @@
{
  /* Only warn once.  */
  buffer->next_line = buffer->rlimit;
- cpp_error_with_line (pfile, CPP_DL_PEDWARN, pfile->line_table->highest_line,
+ cpp_error_with_line (pfile, CPP_DL_WARN, pfile->line_table->highest_line,
   CPP_BUF_COLUMN (buffer, buffer->cur),
   "no newline at end of file");
}

-- 
Martin Michlmayr
http://www.cyrius.com/


Re: error: "no newline at end of file"

2007-04-01 Thread Zack Weinberg

Martin Michlmayr wrote:
> ...
> - The "no newline" and "xxx redefined" pedwarnings should be converted
>   into normal warnings.  Rationale: no newline doesn't harm anyone and
>   there are quite a few programs that would fail because of an error
>   that many agree is too strict.  Redefining something with -D seems
>   like a useful feature and again quite a few applications do this.


I regret to say that the "xxx redefined" diagnostic *is* mandatory per
C99 - 6.10.3p2:

# Constraints
# ...
# An identifier currently defined as an object-like macro shall not be
# redefined by another #define preprocessing directive unless the second
# definition is an object-like macro definition and the two replacement
# lists are identical. Likewise, an identifier currently defined as a
# function-like macro shall not be redefined by another #define
# preprocessing directive unless the second definition is a function-like
# macro definition that has the same number and spelling of parameters,
# and the two replacement lists are identical.

This therefore needs to stay a pedwarn at least for the case where the
redefinition comes from a #define in the source.  It's not clear to me
whether the diagnostics you're talking about are from a redefinition
via -D on the command line.  I would be okay with suppressing the
diagnostic altogether when there are two -D's on the command line --
the standard has nothing to say about command line behavior, and we
generally make the later of two conflicting switches win.  The right
place to implement that would be warn_of_redefinition.
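
A small sketch of the distinction the constraint draws (macro names invented):
identical redefinition is fine, a conflicting one requires the diagnostic, and
#undef-then-define sidesteps the issue entirely:

```cpp
#define BUF_SIZE 512
#define BUF_SIZE 512      // identical replacement list: allowed by 6.10.3p2,
                          // no diagnostic required

// #define BUF_SIZE 1024  // conflicting redefinition: a constraint
                          // violation, so a diagnostic (GCC's pedwarn)
                          // is mandatory

#undef BUF_SIZE           // #undef first makes a new definition conforming
#define BUF_SIZE 1024

int buf_size() { return BUF_SIZE; }
```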

[ Paging the Steering Committee: we need an actual cpplib maintainer. ]

zw


re: how to convince someone about migrating from gcc-2.95 to gcc-3.x

2007-04-01 Thread Dan Kegel

Ganesh wrote:

> I work in a company where we have been using gcc-2.95.4 (based cross
> compiler) for compiling our code. Most of the code is written in c++
> and makes extensive use of the stl libraries. We would not be changing
> our operating system or processor architecture (so portability is not
> a very good reason to give).  [Compiling with gcc-3.x is quite painful.
> Why should we migrate to a newer gcc?  How would you convince someone to?]


We went through exactly this.   In our case, what convinced
everyone was "it runs our app faster" and "we can get bugs fixed".
You have to back up that first claim with benchmarks, though!
We couldn't prove newer gcc's were faster until gcc-4.1 and
until we replaced most uses of std::string with a faster variant
(sadly, gcc-4.1's STL is slower than gcc-2.95.3's in some ways;
maybe gcc-4.3 will fix that?).

The transition is long and hard.  You will probably need to port
key portions of your codebase yourself to get your benchmarks to run.

Having an automated nightly build, and automatically sending
out emails to people who check in things that don't compile with
the newer gcc is important, otherwise it'll be hard to get your
codebase to a clean enough state for an orderly switchover.

http://kegel.com/gcc/gcc4.html has some tips for people dealing
with the many syntax errors.

I wouldn't recommend moving to gcc-3.x, really.  It turns out
not to save too much effort in the end...
- Dan

Fun footnote: at the gcc summit in 2004, I mentioned I was going
to migrate a large codebase to gcc-3.x, and people
said it was refreshing to see such optimism and bravery :-)
It turned out to be a lot more work than I bargained for!

--
Wine for Windows ISVs: http://kegel.com/wine/isv


Re: error: "no newline at end of file"

2007-04-01 Thread Robert Dewar

Zack Weinberg wrote:

> Martin Michlmayr wrote:
> ...
>> - The "no newline" and "xxx redefined" pedwarnings should be converted
>>   into normal warnings.  Rationale: no newline doesn't harm anyone and
>>   there are quite a few programs that would fail because of an error
>>   that many agree is too strict.  Redefining something with -D seems
>>   like a useful feature and again quite a few applications do this.
>
> I regret to say that the "xxx redefined" diagnostic *is* mandatory per
> C99 - 6.10.3p2:


How can the C99 standard have anything to say about the meaning of -D?


Re: how to convince someone about migrating from gcc-2.95 to gcc-3.x

2007-04-01 Thread Robert Dewar

Richard Guenther wrote:


> At least you'd have the chance that reported bugs may eventually get
> fixed - with a 3.x (or even 4.0.x) release there's no chance of that unless
> you are willing to pay (and find) someone to do it.


That is of course one possibility, but it is not always clear that updating
to the latest version of a compiler is best for the user. As developers
we prefer to have people using the latest version for all sorts of
reasons, and for users there are benefits, but on the other hand the
don't-fix-it-if-it-isn't-broken approach sometimes means that using
an old version that works is appropriate. AdaCore has some customers
using versions of GNAT as old as 3.14, because they have baselined
tool support for their projects. Actually, in such a case AdaCore does
not try to fix problems (the whole point of the baselining is to avoid
any kind of change to the tools), but it still offers help and
guidance for working around any problems, which can be invaluable
when using an obsolete version. Using an obsolete version with no
support at all does seem risky.

One thing to add to the benefits of moving to a more recent version
is that the warnings are improved, so you may find that doing the
work to switch to the latest version smokes out some bugs in advance.
Now that can be worth a lot in terms of invested time and effort!


VAX backend status

2007-04-01 Thread Matt Thomas

Over the past several weeks, I've revamped the VAX backend:

 - fixed various bugs
 - improved 64bit move, add, subtract code.
 - added patterns for ffs, bswap16, bswap32, sync_lock_test_and_set, and
   sync_lock_release
 - modified it to generate PIC code.
 - fixed the dwarf2 output so it is readonly in shared libraries.
 - moved the constraints from vax.h to constraints.md
 - moved predicates to predicates.md
 - added several peephole and peephole2 patterns

So the last major change needed to make the VAX backend completely modern is
to remove the need for "HAVE_cc0".  However, even instructions that modify
the CC don't always change all the CC bits; some instructions preserve
certain bits.  I'd like to do this, but currently it's above my level of
gcc expertise.

Should the above be submitted as one megapatch?  Or as a dozen or two
smaller patches?

And finally a few musings ...

I've noticed a few things in doing the above.  GCC 4.x doesn't seem to
do CSE on addresses.  Because the VAX binutils doesn't support non-local
symbols with a non-zero addend in the GOT, the PIC support uses a
define_expand so that (const (plus (symbol_ref) (const_int))) will be
split into separate instructions.  However, gcc doesn't seem to be able
to take advantage of that.  For instance, gcc emits:

movab rpb,%r0
movab 100(%r0),%r1
cvtwl (%r1),%r0

but the movab 100(%r0),%r1 is completely unneeded; this should have
been emitted as:

movab rpb,%r0
cvtwl 100(%r0),%r0

I could add peepholes to find these and fix them but it would be nice
if the optimizer could do that for me.

Another issue is that gcc has become "stupider" when it comes to using
indexed addressing.  For example:

static struct { void (*func)(void *); void *arg; int inuse; } keys[64];

int nextkey;

int
setkey(void (*func)(void *), void *arg)
{
int i;
for (i = nextkey; i < 64; i++) {
if (!keys[i].inuse)
goto out;
}

emits:

movl nextkey,%r3
cmpl %r3,$63
jgtr .L38
mull3 %r3,$12,%r0
movab keys+8[%r0],%r0
tstl (%r0)

The last 3 instructions should have been:

mull3 %r3,$3,%r0
tstl keys+8[%r0]
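
The arithmetic behind the shorter sequence can be checked in C terms (a
sketch, assuming the 4-byte-word VAX layout of the struct above):

```cpp
// The struct { void (*func)(void *); void *arg; int inuse; } is 12 bytes on
// VAX (three 4-byte words), with inuse at offset 8.  VAX longword indexed
// mode "keys+8[%r0]" scales the index register by 4, so loading %r0 with
// i*3 addresses keys + 8 + 4*(i*3) == keys + 8 + 12*i, i.e. &keys[i].inuse,
// which is why "mull3 %r3,$3,%r0" plus one indexed tstl suffices.
long inuse_offset(long i)  { return 8 + 12 * i; }        // direct layout
long indexed_offset(long i) { return 8 + 4 * (i * 3); }  // mull3 $3 + [%r0]
```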




Re: How can I get VRP information for an RTX?

2007-04-01 Thread Andrew Pinski

On 4/1/07, David Daney <[EMAIL PROTECTED]> wrote:
> I am looking at how the MIPS backend handles division.  For the compiler
> configuration in question (mipsel-linux) division operations trap on
> division by zero.  This is handled in mips_output_division in mips.c
> where we unconditionally emit a conditional trap.


Why does the back-end emit a trap for divide by zero if the hardware
does not handle it?  Division by zero is undefined by the C/C++
standard so maybe you should just add an option to the MIPS back-end
not to emit the trap.

I Know on PPC, we don't emit a trap for division by zero and the
hardware does not trap either.

Thanks,
Andrew Pinski


Re: How can I get VRP information for an RTX?

2007-04-01 Thread David Daney

Andrew Pinski wrote:
> On 4/1/07, David Daney <[EMAIL PROTECTED]> wrote:
>> I am looking at how the MIPS backend handles division.  For the compiler
>> configuration in question (mipsel-linux) division operations trap on
>> division by zero.  This is handled in mips_output_division in mips.c
>> where we unconditionally emit a conditional trap.
>
> Why does the back-end emit a trap for divide by zero if the hardware
> does not handle it?  Division by zero is undefined by the C/C++
> standard so maybe you should just add an option to the MIPS back-end
> not to emit the trap.

There *is* such an option.

The issue is that for some things (the java front-end) we need the 
trapping behavior.  I just want to optimize it if the divisor is known 
to be non-zero.  VRP knows, but by the time we generate the code it 
seems that we have forgotten.


David Daney



Re: error: "no newline at end of file"

2007-04-01 Thread Martin Michlmayr
* Zack Weinberg <[EMAIL PROTECTED]> [2007-04-01 08:30]:
> This therefore needs to stay a pedwarn at least for the case where the
> redefinition comes from a #define in the source.  It's not clear to me
> whether the diagnostics you're talking about are from a redefinition
> via -D on the command line.

No, the cases are combinations of #define in the code and -D on the
command line.
-- 
Martin Michlmayr
http://www.cyrius.com/


Re: error: "no newline at end of file"

2007-04-01 Thread Zack Weinberg

On 4/1/07, Martin Michlmayr <[EMAIL PROTECTED]> wrote:
> * Zack Weinberg <[EMAIL PROTECTED]> [2007-04-01 08:30]:
>> This therefore needs to stay a pedwarn at least for the case where the
>> redefinition comes from a #define in the source.  It's not clear to me
>> whether the diagnostics you're talking about are from a redefinition
>> via -D on the command line.
>
> No, the cases are combinations of #define in the code and -D on the
> command line.


Ugh.  That puts us in the position of having to decide whether command
line definitions "count" as previous definitions for 6.10.3p3.  I'm
inclined to think that they do, or rather, that saying they don't
involves more bending of the language than I am comfortable with.  I
could be convinced otherwise.

zw


Re: error: "no newline at end of file"

2007-04-01 Thread Zack Weinberg

> Ugh.  That puts us in the position of having to decide whether command
> line definitions "count" as previous definitions for 6.10.3p3.


6.10.3p*2*.

zw


Re: error: "no newline at end of file"

2007-04-01 Thread Robert Dewar

Zack Weinberg wrote:


> Ugh.  That puts us in the position of having to decide whether command
> line definitions "count" as previous definitions for 6.10.3p3.  I'm
> inclined to think that they do, or rather, that saying they don't
> involves more bending of the language than I am comfortable with.  I
> could be convinced otherwise.


It's not bending the language, the standard has nothing whatever to say
about -D. I see no reason not to be completely permissive wrt -D if it
is going to make transition smoother.




Re: Extension for a throw-like C++ qualifier

2007-04-01 Thread Sergio Giro

Maybe the option you suggest

> This is best done with something like -fstatic-exception-specifications
> or maybe -Wexception-specifications -Werror.

is ideal, but it seems to me not practical at all. Any code using
the throw qualifier as specified in the standard will not work. If an
inline method in a standard header is
  theInlineMethod (void) throw () { throw theException(); };
this will not even compile...
  Of course, using an extension such as the one I propose, some of the
standard methods must be wrapped if you want to use the new qualifier
in order to track the possible exceptions that the methods may raise,
but I think it is quite more practical to wrap such methods than to
modify the headers...
  In addition, the semantic meaning is not the same: a
throw (a,b,c) qualifier indicates that you are able only to catch the
exceptions a, b and c, and that every other exception will be seen as
std::unexpected. Nevertheless, a
_throw(a,b,c) qualifier should indicate that the only exceptions that
can arise are a, b and c.
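
A sketch of why such a static _throw-style analysis has to be pessimistic
(invented names, ordinary standard C++): the exceptions reaching a call
through a function pointer depend on the pointer's runtime value:

```cpp
#include <stdexcept>
#include <string>

int throws_runtime_error() { throw std::runtime_error("boom"); }
int throws_nothing()       { return 7; }

// Statically, invoke(f) could raise anything any possible f can raise, so a
// pessimistic static checker must assume the union over all candidate
// targets; only at runtime is the answer exact.
std::string invoke(int (*f)()) {
    try {
        f();
        return "no exception";
    } catch (const std::runtime_error &e) {
        return e.what();   // which branch runs depends on f's runtime value
    }
}
```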
  With respect to this:

> If you wanted finer control, having an inheritable attribute that says,
> please statically check exception specifications for all member functions
> in this class and/or on the function would be better than replicating
> the entire mechanism.

  I think that a lot of information can be obtained by using existing
mechanisms. The calculation of the exceptions a module may raise under
pessimistic assumptions is rather easy, if you don't allow template
exceptions to be used in the _throw qualifier. I think that the hard
step is to put it into gcc. I will be grateful for any advice on how
to start looking at it in order to implement this new feature.
   Cheers,
   Sergio

On 3/30/07, Mike Stump <[EMAIL PROTECTED]> wrote:
> On Mar 30, 2007, at 11:59 AM, Sergio Giro wrote:
>> The errors mentioned are compile errors,
>
> So, you want a strict subset of the language standard.  This is best
> done with something like -fstatic-exception-specifications or maybe
> -Wexception-specifications -Werror.  If you wanted finer control,
> having an inheritable attribute that says, please statically check
> exception specifications for all member functions in this class and/or
> on the function would be better than replicating the entire mechanism.



Re: error: "no newline at end of file"

2007-04-01 Thread Zack Weinberg

On 4/1/07, Robert Dewar <[EMAIL PROTECTED]> wrote:

> Zack Weinberg wrote:
> It's not bending the language, the standard has nothing whatever to say
> about -D. I see no reason not to be completely permissive wrt -D if it
> is going to make transition smoother.


The thing is, the standard does not read "An identifier which has
previously been defined by a #define directive shall not be redefined
by another #define directive except as the same type of macro and with
the same replacement-list".  If it did, I would agree with you.
Instead, it reads "An identifier CURRENTLY DEFINED ... shall not be
redefined by a #define directive" (emphasis mine).  The intent is,
IMO, clearly to forbid (non-redundant) redefinition no matter how the
identifier acquired a macro definition in the first place - whether by
another #define, or by being built-in macros, or by -D.

zw


Re: How can I get VRP information for an RTX?

2007-04-01 Thread Andrew Pinski

On 4/1/07, David Daney <[EMAIL PROTECTED]> wrote:
> The issue is that for some things (the java front-end) we need the
> trapping behavior.  I just want to optimize it if the divisor is known
> to be non-zero.  VRP knows, but by the time we generate the code it
> seems that we have forgotten.


As far as I know, the java front-end always emits a function call for
targets that don't trap on divide by zero.  And as far as I know, the
x86 back-end is the only target which really traps on divide by zero.

If the Java front-end really exposed this by inlining the
exception/value, then VRP on the tree level would catch it right away.

Also, I don't know what you mean by there being no option to disable the
target expansion of trap on divide by zero:

mcheck-zero-division
Target Report Mask(CHECK_ZERO_DIV)
Trap on integer divide by zero

So you can force -mno-check-zero-division for Java and not disable
use-divide-subroutine.  It seems wrong that the java front-end
thinks we don't have to use the divide subroutine for MIPS.  Really, I
think it is wrong that the mips back-end thinks it should enable trap
on divide by zero by default.

Maybe one of the MIPS maintainers can explain why this option exists
in the first place.
As far as I can tell, this option has existed since before the egcs/gcc
split.  I still say the back-end should not worry about this and
divide by zero should always be treated as undefined.

-- Pinski


Re: how to convince someone about migrating from gcc-2.95 to gcc-3.x

2007-04-01 Thread Joe Buck
On Sun, Apr 01, 2007 at 02:20:10PM +0200, Marcin Dalecki wrote:
> 
> Message written on 2007-04-01 at 13:58 by Paul Brook:
> 
> >If you're already switching compilers, moving to an already  
> >obsolete release
> >(3.3) seems a strange choice. At this point I'd recommend skipping 3.x
> >altogether and going straight to gcc4.1/4.2.
> >
> >Many of the improvements in c++ code generation were as a result of  
> >tree-ssa,
> >you only get with 4.x.
> 
> I wouldn't recommend it. One has to adapt gradually to the patience  
> required to
> use the later compiler editions.

No, one does not have to adapt gradually.  It is no harder to switch from
2.95 to 4.1.2 than it is to switch from 2.95 to 3.3.  Either way, you'll
have to get out a C++ book, learn C++, and recode your code in actual C++.
There will be some cases where going to 3.3 will require fewer changes,
but the majority of the work is going to have to be done anyway.  And
4.1.x is much closer to what the textbooks say C++ is than 3.3 is.  Why
add to your pain?  Furthermore, if you find that your progress is hampered
by a bug in 3.3.x, no one is going to be interested in fixing it.





Re: how to convince someone about migrating from gcc-2.95 to gcc-3.x

2007-04-01 Thread Joe Buck

> > Many of the improvements in c++ code generation were as a result of
> > tree-ssa, you only get with 4.x.

On Sun, Apr 01, 2007 at 01:19:24PM +0100, Dave Korn wrote:
>   It is however a bigger step change, and a correspondingly bigger risk.
> There are arguments in favour of not running with the bleeding edge when what
> you want is simply a stable production compiler that will build your own
> particular codebase.

I would agree under some circumstances, and there's an argument that if
you have a codebase that works well with 2.95.3, you might as well stick
with it.  But 4.1.x has been in production use for a while now, and is
used to build entire distributions of thousands of programs.  There's a
risk of switching compilers at all, but if it is to be done, it's not
like 3.3.x is going to be more stable than 4.1.2.

Now when 4.2.0 is out, *that* will be bleeding edge.


Re: how to convince someone about migrating from gcc-2.95 to gcc-3.x

2007-04-01 Thread Chris Lattner


On Apr 1, 2007, at 10:42 PM, Joe Buck wrote:

> On Sun, Apr 01, 2007 at 02:20:10PM +0200, Marcin Dalecki wrote:
>> I wouldn't recommend it. One has to adapt gradually to the patience
>> required to use the later compiler editions.
>
> No, one does not have to adapt gradually.  It is no harder to switch
> from 2.95 to 4.1.2 than it is to switch from 2.95 to 3.3.  Either way,
> you'll have to get out a C++ book, learn C++, and recode your code in
> actual C++.  There will be some cases where going to 3.3 will require
> fewer changes, but the majority of the work is going to have to be
> done anyway.


I believe the point being made was about compile times, not conformance.

-Chris


RE: Information regarding -fPIC support for Interix gcc

2007-04-01 Thread Mayank Kumar
Hi Murali/Everybody

1: I am keen on understanding how the offset of L32 from 
_GLOBAL_OFFSET_TABLE_ is generated. I mean, if the assembly is

movl    L32@GOTOFF(%ebx,%eax),%eax

then how does it get converted to

mov     0xbd14(%eax,%ebx,1),%eax

I guessed that L32 is at the start of the .rodata section, which is the only 
section of its type, so in my opinion the offset should be the address of the 
.got section minus the address of the .rodata section. But the offset seen in 
objdump does not match what I calculate by that method on a FreeBSD box. Can 
anyone help here? On Interix the offset is being calculated as .got - .rodata 
only, so is there anything wrong with that? I am assuming that this is the 
only thing that could be going wrong.

2: Murali, Interix-generated PECOFF binaries do have _GLOBAL_OFFSET_TABLE_ 
defined. We also have a .got section, a .plt section, etc., which are present 
in PIC-compiled code, so your patch may fix the problem in an alternate way 
but may not be the correct fix for Interix. Yes, you are right that the 
generated assembly is for a switch case and is actually a jump table.

3: Also, after further investigation, I found that Interix shared libraries 
compiled with -fPIC only crash when a jump table is created for a switch 
case. I compiled the same crashing shared library with -fPIC and 
-fno-jump-tables on 4.3 and it worked great. So are there any jump-table 
experts out there who could help me investigate what is going wrong? I have 
attached the objdump -D output and the gcc -S output for the shared library 
(repro code, which is basically a small dummy library).
--Ass contains the objdump -D output
--lib1.s contains the gcc -S output
Look at the jump table generated for the switch case inside func1_1: it 
causes the crash because control jumps to an invalid location through 
jmp *%eax.


Thanks
Mayank


-Original Message-
From: Murali Vemulapati [mailto:[EMAIL PROTECTED]
Sent: Saturday, March 31, 2007 9:57 PM
To: Mayank Kumar
Cc: gcc@gcc.gnu.org
Subject: Re: Information regarding -fPIC support for Interix gcc

On 3/31/07, Mayank Kumar <[EMAIL PROTECTED]> wrote:
> Further to this email, I did some more investigation, here is some more 
> detailed information about what I am doing:-
>
> 1: Compiled a shared library (tcl8.4.14) using -fPIC on gcc 4.3.0 for 
> Interix (I was able to compile gcc 4.3.0 for Interix).
> 2: The assembly generated for a particular region of code which causes a jmp 
> to an invalid location, obtained using gcc -S -fPIC, is as follows:
> movl    -20(%ebp), %eax
> sall    $2, %eax
> movl    L32@GOTOFF(%ebx,%eax), %eax
> addl    %ebx, %eax
> jmp     *%eax
> .section        .rdata,"r"
> .balign 4
> L32:
> .long   [EMAIL PROTECTED]
>
> 3: The assembly seen while executing the binary linked with the shared 
> library:
> 0x100889dc : mov    0xffec(%ebp),%eax
> 0x100889df : shl    $0x2,%eax
> 0x100889e2 : mov    0xbd14(%eax,%ebx,1),%eax
> 0x100889e9 : add    %ebx,%eax
> 0x100889eb : jmp    *%eax
>
> 4: Similar code compiled and run on a FreeBSD box using the gcc shared 
> library and -fPIC produces the following assembly:
> 0x2810a1a8 : mov    0xffec(%ebp),%eax
> 0x2810a1ab : shl    $0x2,%eax
> 0x2810a1ae : mov    0x4f14(%eax,%ebx,1),%eax
> 0x2810a1b5 : add    %ebx,%eax
> 0x2810a1b7 : jmp    *%eax
>
> 5: So, corresponding to
> --- movl    L32@GOTOFF(%ebx,%eax), %eax
> the assembly generated on Interix is
> --- mov     0xbd14(%eax,%ebx,1),%eax
> whereas on the BSD box it is
> --- mov     0x4f14(%eax,%ebx,1),%eax
>
> Here the offset 0xbd14 in the Interix case is wrong, which causes the 
> jmp *%eax to jump into a different function.
>
> Now my questions are:-
> 1: What does L32@GOTOFF mean? Is it the offset of L32 in the GOTOFF table?

It means the offset of L32 from the symbol _GLOBAL_OFFSET_TABLE_. It looks
like this refers to a switch table: the code is indexing into a jump table
and jumping to that label.

> 2: Should the value of L32@GOTOFF remain the same for all machines for which 
> the same library/binary is compiled?
> 3: Does the above mean that the global offset table is not being populated 
> correctly? If so, why would this be Interix-specific?

The Global Offset Table (GOT) is generated by the linker, but this is
specific to ELF binaries. The PECOFF ABI does not define a GOT, nor does
it support the relocation types R_386_GOTOFF and R_386_GOT32.

> 4: I can see these offsets with objdump -D as well, so I concluded that this 
> could not be a linker or loader issue but a compiler issue only. Which part 
> of the gcc code should I refer to for this issue?
> 5: Lastly, should I raise a bug in gcc Bugzilla to track this and assign it 
> to myself, or what is the procedure to track this?
>
> Any other help or pointers in this regard would be useful for investigating 
> further.
>
>