Tracking down gcc-4.0 performance regressions

2005-06-06 Thread Daniel Kegel

I recently worked with a UCLA student to boil down
a reported openssl performance regression with gcc-4.0
to a small standalone case (see http://gcc.gnu.org/PR19923).
We have a bit more followup to do there, but it seems
to have been a good use of a student's time.

So, I'm looking around for other reports of performance
regressions in gcc-4.0.  So far, the only other ones I've
heard of are those reported in http://www.coyotegulch.com/reviews/gcc4/
I'm tempted to have a student try reproducing and boiling down the POV-Ray
performance regression first.  Has anyone else already done that?
I'd hate to repeat work.

Thanks,
Dan


Re: Tracking down gcc-4.0 performance regressions

2005-06-06 Thread René Rebe
Hi,

On Monday 06 June 2005 09:01, Daniel Kegel wrote:
> I recently worked with a UCLA student to boil down
> a reported openssl performance regression with gcc-4.0
> to a small standalone case (see http://gcc.gnu.org/PR19923).
> We have a bit more followup to do there, but it seems
> to have been a good use of a student's time.
>
> So, I'm looking around for other reports of performance
> regressions in gcc-4.0.  So far, the only other ones I've
> heard of are those reported in http://www.coyotegulch.com/reviews/gcc4/
> I'm tempted to have a student try reproducing and boiling down the POV-Ray
> performance regression first.  Has anyone else already done that?
> I'd hate to repeat work.

I also have experienced some regressions:

http://exactcode.de/rene/hidden/gcc-article/2005-gcc-4.0/stat2-rt.png

I think these massive -Os regressions on C++ code, as experienced in tramp3d and 
botan, should be investigated. However, I have not looked for filed PRs or 
more recent snapshots of 4.0 so far ...

Yours,

-- 
René Rebe - Rubensstr. 64 - 12157 Berlin (Europe / Germany)
http://www.exactcode.de | http://www.t2-project.org
+49 (0)30  255 897 45




Re: Tracking down gcc-4.0 performance regressions

2005-06-06 Thread R Hill

René Rebe wrote:
I think these massive -Os regressions on C++ code, as experienced in tramp3d and 
botan, should be investigated. However, I have not looked for filed PRs or 
more recent snapshots of 4.0 so far ...


Oh good, so it's not just me. ;)

I opened PR21314 a while back but ended up chickening out.  I'm really 
new to this and since they couldn't reproduce it I figured the problem 
was on my end and closed the report.  I also think it may be a dupe of 
PR21529, but I don't know enough to confirm that.


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21314
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21529

--de.



Re: Ada front-end depends on signed overflow

2005-06-06 Thread Segher Boessenkool

There's also a fair amount of code which relies on -1 ==
(int)0x.

Or is there any truly portable and efficient way to convert a sequence
of bytes (in big-endian order) to a signed integer?


Of course there is.  Assuming no padding bits:


int conv(unsigned char *c)
{
    unsigned int i, u, hibit;

    hibit = ~0U;
    hibit ^= (hibit >> 1);

    u = 0;
    for (i = 0; i < sizeof u; i++)
        u = (u << 8) + c[i];

    if ((u & hibit) == 0U)
        return u;

    u -= hibit;

    if ((unsigned int)-1 == (hibit | 1U))
        return -(int)u;

    return (int)u - (int)(~hibit) - ((unsigned int)-1 & 1U);
}


which generates


_conv:
li r2,4
li r0,0
mtctr r2
L11:
lbz r2,0(r3)
slwi r0,r0,8
addi r3,r3,1
add r0,r0,r2
bdnz L11
mr r3,r0
blr


with GCC 3.3 on Darwin, and


.conv:
li 9,4
li 0,0
li 11,0
mtctr 9
.p2align 4,,15
.L2:
lbzx 9,3,11
slwi 0,0,8
nop
nop
addi 11,11,1
add 0,0,9
nop
nop
rldicl 0,0,0,32
nop
nop
nop
nop
bdnz .L2
extsw 3,0
nop
nop
nop
nop
blr


with a GCC-4.1.0 snapshot on powerpc64-unknown-linux-gnu (lots of
inefficiencies here, but nothing to do with the conversion itself).

Sorry, I couldn't test it on a ones' complement or sign-magnitude
machine -- just trust me it works (or embarrass me in public, if
a bug sneaked in while converting this to C) ;-)
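For what it's worth, here is a quick sanity check of the routine on a two's-complement host with 32-bit int (the test values are mine, not from the thread; the routine itself is reproduced verbatim from above so the check is self-contained):

```c
#include <assert.h>
#include <limits.h>

/* The conversion routine from the message above, reproduced so it can
   be exercised standalone. */
int conv(unsigned char *c)
{
    unsigned int i, u, hibit;

    hibit = ~0U;
    hibit ^= (hibit >> 1);          /* isolate the sign bit */

    u = 0;
    for (i = 0; i < sizeof u; i++)  /* accumulate big-endian bytes */
        u = (u << 8) + c[i];

    if ((u & hibit) == 0U)          /* non-negative values fit as-is */
        return u;

    u -= hibit;

    if ((unsigned int)-1 == (hibit | 1U))  /* sign-magnitude host */
        return -(int)u;

    /* ones' complement and two's complement hosts */
    return (int)u - (int)(~hibit) - ((unsigned int)-1 & 1U);
}
```

On such a host, `conv` of `{0xff,0xff,0xff,0xff}` yields -1 and `{0x80,0x00,0x00,0x00}` yields INT_MIN, as expected for big-endian input.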


Segher



Re: Ada front-end depends on signed overflow

2005-06-06 Thread Eric Botcazou
> Once again, have you actually examined how awful the code we
> generate now is?

Yes, I have.  Indeed not pretty, but suppose that we managed to cut the 
overhead in half, would that make -gnato really more attractive?

> Well of course that's just a plain bug, should be addressed as such.
> Obviously no one is using -ftrapv, so it will expose lots of bugs
> I would guess.

Yes, that's my impression too.

> Clearly the status quo is entirely unacceptable, so what's your
> recommendation of how we generate at least vaguely respectable
> code for overflow checking

Tough question, but I'm almost certain of something: since we need to recover 
from overflow checks, we have to expose them to the EH machinery, which 
nowadays means in the GENERIC tree IL.

From that, we have 2 alternatives: synchronous or asynchronous exceptions.
The current implementation is synchronous, as we explicitly raise exceptions 
in the code by calling a routine.  I guess you're pushing for asynchronous 
exceptions since this is probably the only efficient approach, i.e. we rely 
on the hardware to trap and try to recover from that.

If so, currently tree_could_trap_p will return true for all arithmetic 
operations whose type is TYPE_TRAP_SIGNED, with

#define TYPE_TRAP_SIGNED(NODE) \
  (flag_trapv && ! TYPE_UNSIGNED (NODE))

That seems far too broad.  We could instead flag expressions on a case-by-case 
basis, but I guess that would put too much burden on the tree optimizers.  So 
a compromise could be to use a specific bit for TYPE_TRAP_SIGNED and instruct 
Gigi to use the TYPE_TRAP_SIGNED-variant of a given type, when overflow 
checks are requested for an operation.

Then we would need to implement the overflow-aware instruction patterns for 
every architecture.  AFAICS only Alpha has (some of) them.  And fix the RTL 
optimizers in the process.
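In the meantime, signed overflow can also be detected portably in C before it happens, without any hardware support; a minimal sketch (mine, not one of the per-target instruction patterns discussed above):

```c
#include <assert.h>
#include <limits.h>

/* Detect whether a + b would overflow a signed int, without ever
   performing the overflowing addition itself.  Returns 1 and stores
   the sum on success, 0 if the addition would overflow. */
int checked_add(int a, int b, int *res)
{
    if ((b > 0 && a > INT_MAX - b) ||   /* would exceed INT_MAX */
        (b < 0 && a < INT_MIN - b))     /* would go below INT_MIN */
        return 0;
    *res = a + b;                       /* safe: no overflow possible */
    return 1;
}
```

The cost of the two comparisons per operation is roughly what the current -gnato expansion pays, which is why hardware-trapping patterns are attractive.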

-- 
Eric Botcazou


Re: Proposed obsoletions

2005-06-06 Thread Nathanael Nerode
Mark Mitchell wrote:
> Daniel Jacobowitz wrote:
> 
>> On Sun, Jun 05, 2005 at 12:41:43PM -0400, Nathanael Nerode wrote:
>>
>>> * mips-wrs-windiss
>>> * powerpc-wrs-windiss
>>>  I don't think these were supposed to be in the FSF tree at all, were
>>> they?
>>
>>
>>
>> This question belongs more in this thread than in the fixproto one so
>> I'll reask it: Why do you think this?
I seem to remember asking about this some years ago, and finding out
that its existence was not documented anywhere public, which it still
isn't.  It's also odd that a VxWorks simulation environment is
sufficiently different from VxWorks that it needs a different configuration.

> Putting it more strongly, I think these should stay.
OK, I stand corrected.

> WindISS is a Wind River simulation environment (including C library),
> and is still available; Wind River ships WindISS with some of its
> development platforms.
OK.  :-)  Your job to keep it running.  Care to list yourself in
MAINTAINERS?

> I know that we've been promising updated VxWorks configurations for a
> long time, and it's reasonable to wonder if we're serious.  But, we are;
> we've been pushing out the VxWorks 6.x binutils bits actively of late,
> and as soon as those are in, we'll do GCC.
> 
> We want the binutils bits to go in first, so that we can test the GCC
> bits with the actual FSF binutils bits.  Our current internal versions
> are based on GCC 3.3.2 and we have some ugly binutils hacks that are
> being cleaned up as we push out to the FSF.


Re: Follow up on simulators, documentation, etc.

2005-06-06 Thread Richard Sandiford
Thanks for the summary.  It sounds from your message, and particularly
the quote from RMS, that we should be accepting the patches unless we
have a particular reason not to trust MIPS to do what they said they'd
do.  I certainly have no reason not to trust MIPS, so I guess that
means the patches can go in once ready.  Eric, do you agree?

Richard


recommend use of gperf version 3

2005-06-06 Thread Bruno Haible
Hi,

The files cp/cfns.gperf and java/keyword.gperf are - as distributed -
processed by gperf-2.7.2 or with particular options. Using gperf-3.0.1
(released in 2003) can create smaller and faster hash tables, with fewer
command-line options:

* cp/cfns.gperf: If you drop the options "-k '1-6,$' -j1 -D" and add
  instead "-m 10",
  - the hash function accesses 4 bytes of the input string (instead of 7),
  - the hash table size drops to 317 (instead of 391).

* java/keyword.gperf: If you drop the options "-j1 -i 1 -g -o -k'1,4,$'" and
  add instead "-m 10",
  - the hash function accesses 2 bytes of the input string (instead of 3),
  - the hash table size drops to 53 (instead of 79).

Find attached a patch that does this.

cp/ChangeLog:
2005-06-05  Bruno Haible  <[EMAIL PROTECTED]>

* Make-lang.in ($(srcdir)/cp/cfns.h): Use gperf option -m 10.
Bail out if gperf is too old.

java/ChangeLog:
2005-06-05  Bruno Haible  <[EMAIL PROTECTED]>

* Make-lang.in ($(srcdir)/java/keyword.h): Use gperf option -m 10.

*** gcc-4.0.0/gcc/cp/Make-lang.in.bak   Tue Jan 18 12:45:31 2005
--- gcc-4.0.0/gcc/cp/Make-lang.in   Sun Jun  5 17:26:28 2005
***
*** 97,104 
  
  # Special build rules.
  $(srcdir)/cp/cfns.h: $(srcdir)/cp/cfns.gperf
!   gperf -o -C -E -k '1-6,$$' -j1 -D -N 'libc_name_p' -L ANSI-C \
!   $(srcdir)/cp/cfns.gperf > $(srcdir)/cp/cfns.h
  
  gtype-cp.h gt-cp-call.h gt-cp-decl.h gt-cp-decl2.h : s-gtype; @true
  gt-cp-pt.h gt-cp-repo.h gt-cp-parser.h gt-cp-method.h : s-gtype; @true
--- 97,108 
  
  # Special build rules.
  $(srcdir)/cp/cfns.h: $(srcdir)/cp/cfns.gperf
!   gperf -o -C -E -N 'libc_name_p' -L ANSI-C -m 10 \
!   $(srcdir)/cp/cfns.gperf > c.h || { \
!   echo "Please update gperf from ftp://ftp.gnu.org/pub/gnu/gperf/" >&2; \
!   rm -f c.h; \
!   exit 1; } ; \
!   mv -f c.h $(srcdir)/cp/cfns.h
  
  gtype-cp.h gt-cp-call.h gt-cp-decl.h gt-cp-decl2.h : s-gtype; @true
  gt-cp-pt.h gt-cp-repo.h gt-cp-parser.h gt-cp-method.h : s-gtype; @true
*** gcc-4.0.0/gcc/java/Make-lang.in.bak Sat Mar 12 03:16:28 2005
--- gcc-4.0.0/gcc/java/Make-lang.in Sun Jun  5 17:24:01 2005
***
*** 87,93 
  
  $(srcdir)/java/keyword.h: $(srcdir)/java/keyword.gperf
(cd $(srcdir)/java || exit 1; \
!   gperf -L ANSI-C -C -F ', 0' -p -t -j1 -i 1 -g -o -N java_keyword -k1,4,$$ \
keyword.gperf > k.h || { \
echo "Please update gperf from ftp://ftp.gnu.org/pub/gnu/gperf/" >&2; \
rm -f k.h; \
--- 87,93 
  
  $(srcdir)/java/keyword.h: $(srcdir)/java/keyword.gperf
(cd $(srcdir)/java || exit 1; \
!   gperf -L ANSI-C -C -F ', 0' -p -t -N java_keyword -m 10 \
keyword.gperf > k.h || { \
echo "Please update gperf from ftp://ftp.gnu.org/pub/gnu/gperf/" >&2; \
rm -f k.h; \



Re: Ada front-end depends on signed overflow

2005-06-06 Thread Richard Guenther
On 6/6/05, Segher Boessenkool <[EMAIL PROTECTED]> wrote:
> > There's also a fair amount of code which relies on -1 ==
> > (int)0x.
> >
> > Or is there any truly portable and efficient way to convert a sequence
> > of bytes (in big-endian order) to a signed integer?
> 
> Of course there is.  Assuming no padding bits:

[snip complicated stuff]

Better to use a union for the (final) conversion, i.e.

int conv(unsigned char *c)
{
    unsigned int i;
    union {
        unsigned int u;
        int i;
    } u;

    u.u = 0;
    for (i = 0; i < sizeof u; i++)
        u.u = (u.u << 8) + c[i];

    return u.i;
}

or even (if you can determine native byte order and size at compile time)

int conv(unsigned char *c)
{
    union {
        unsigned char c[4];
        int i;
    } x;
    int i;
    for (i = 0; i < 4; ++i)
        x.c[3-i] = c[i];
    return x.i;
}

which generates only slightly worse code than above.

Richard.


Re: Tracking down gcc-4.0 performance regressions

2005-06-06 Thread Georg Bauhaus

Daniel Kegel wrote:


So, I'm looking around for other reports of performance
regressions in gcc-4.0. 


I came across this one:

int foo(int a, int b)
{
 return a + b;
}

int bar()
{
   int x = 0, y = 10;
   int c;

   for (c=0; c < 123123123 && x > -1; ++c, --y)
   x = foo(c, y);
   return x;
}

int main()
{
   return bar();
}

The for loop is translated rather differently
by GCC 3.4.4 and GCC 4.0.1(pre) on i686 GNU/Linux.
Speed ratio around 85%.

Georg 





Re: Ada front-end depends on signed overflow

2005-06-06 Thread Segher Boessenkool

Better to use a union for the (final) conversion, i.e.

int conv(unsigned char *c)
{
    unsigned int i;
    union {
        unsigned int u;
        int i;
    } u;

    u.u = 0;
    for (i = 0; i < sizeof u; i++)
        u.u = (u.u << 8) + c[i];

    return u.i;
}


This is not portable, though; accessing a union member other than
the member last stored into is unspecified behaviour (see J.1 and
6.2.6.1).

This is allowed (and defined behaviour) as a GCC extension, though.
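A third option, which sidesteps the union question entirely, is to copy the bytes with memcpy; the result is implementation-defined rather than unspecified. This sketch is mine, not code from the thread:

```c
#include <assert.h>
#include <string.h>

/* Convert big-endian bytes to int by accumulating into an unsigned int
   and then copying its object representation into an int via memcpy,
   which raises no aliasing or union-access question.  Still assumes
   int and unsigned int share a representation for the values involved
   (true on two's-complement hosts). */
int conv_memcpy(const unsigned char *c)
{
    unsigned int u = 0;
    unsigned int i;
    int r;

    for (i = 0; i < sizeof u; i++)   /* big-endian accumulation */
        u = (u << 8) + c[i];

    memcpy(&r, &u, sizeof r);        /* defined: copies bytes, not values */
    return r;
}
```

Compilers can usually optimize the fixed-size memcpy away, so this need not cost anything over the union version.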


Segher



Re: Proposed obsoletions

2005-06-06 Thread Nathanael Nerode
Jan-Benedict Glaw wrote:
> On Sun, 2005-06-05 12:41:43 -0400, Nathanael Nerode <[EMAIL PROTECTED]> wrote:
> 
> 
>>* vax-*-bsd*
>>* vax-*-sysv*
>>  If anyone is still using these, GCC probably doesn't run already.  I
>>  certainly haven't seen any test results.  Correct me if I'm wrong!
>>  And after some staring, I think these are bad models for new ports.
> 
> 
> vax-*-sysv* is probably dead. NetBSD using a.out most probably is, too.
Actually, that's BSD 4.3/4.4 there. (vax-*-bsd*)

> Though, vax-netbsdelf may be fine. At least, I can get a vax-linux (or
> vax-elf) compiler (only adding some 5 lines of configury) that works
> and produces a correctly working Linux kernel image (see e.g.
> http://www.pergamentum.com/pipermail/linux-vax/2005-May/02.html).
> So the VAX's shape most probably isn't all that bad right now.
Yes, I absolutely think vax-netbsdelf should be kept; we know that it works.



Re: Ada front-end depends on signed overflow

2005-06-06 Thread Paul Schlie
> From: Andrew Pinski <[EMAIL PROTECTED]>
>>> No they should be using -ftrapv instead which traps on overflow and then
>>> make sure they are not trapping when testing.
>> 
>> - why? what language or who's code/target ever expects such a behavior?
> Everyone who writes C/C++ should know that overflow of signed is undefined.
> 
> Now in Java it is defined, which is the reason why -fwrapv exists in the
> first place, since GCC has a "Java" compiler.
> 
> I think you need to go back in the archives and read the discussions about
> when -fwrapv was added and see why it is not turned on by default for C.
> http://gcc.gnu.org/ml/gcc-patches/2003-05/msg00850.html
> http://gcc.gnu.org/ml/gcc-patches/2003-03/msg02126.html
> http://gcc.gnu.org/ml/gcc-patches/2003-03/msg01727.html

Thanks again; upon fully reviewing the threads I still conclude:

- C/C++ defines integer overflow as undefined because it's a target-specific
  behavior, just as dereferencing a NULL is (although a large
  majority of targets factually do wrap overflow, and don't terminally
  trap NULL dereferences; so GCC's got it backwards in both cases).

- So technically as such semantics are undefined, attempting to track
  and identify such ambiguities is helpful; however the compiler should
  always optimize based on the true semantics of the target, which is
  what the undefined semantics truly enable (as pretending a target's
  semantics are different than the optimization assumptions, or forcing
  post-fact run-time trapping semantics, are both useless and potentially
  worse, inefficient and/or erroneous otherwise).





Re: Tracking down gcc-4.0 performance regressions

2005-06-06 Thread Richard Guenther
On 6/6/05, Georg Bauhaus <[EMAIL PROTECTED]> wrote:
> Daniel Kegel wrote:
> 
> > So, I'm looking around for other reports of performance
> > regressions in gcc-4.0.
> 
> I came across this one:
> 
> int foo(int a, int b)
> {
>   return a + b;
> }
> 
> int bar()
> {
> int x = 0, y = 10;
> int c;
> 
> for (c=0; c < 123123123 && x > -1; ++c, --y)
> x = foo(c, y);
> return x;
> }
> 
> int main()
> {
> return bar();
> }

Interestingly for mainline with -O3 we transform main into

main ()
{
  int a;
  int x;

:
  a = 0;

:;
  x = a + (int) (10 - (unsigned int) a);
  a = a + 1;
  if (a <= 123123122 && x > -1) goto ; else goto ;

:;
  return 10;

}

which the RTL loop optimizer can turn into

.L11:
incl%eax
cmpl$123123122, %eax
jle .L11

4.0 is similar - there's one extra increment in the loop, while 3.4 is
considerably worse:

.L18:
leal(%ecx,%ebx), %esi
movl%esi, %eax
incl%ecx
decl%ebx
notl%eax
cmpl$123123122, %ecx
setle   %dl
shrl$31, %eax
testl   %edx, %eax
jne .L18

So I'd not call it a performance regression ;)

Richard.


Re: recommend use of gperf version 3

2005-06-06 Thread Joseph S. Myers
On Mon, 6 Jun 2005, Bruno Haible wrote:

> The files cp/cfns.gperf and java/keyword.gperf are - as distributed -
> processed by gperf-2.7.2 or with particular options. Using gperf-3.0.1
> (released in 2003) can create smaller and faster hash tables, with fewer
> command-line options:

If the required version of any tool is changed then the documentation of 
that version in install.texi needs to be updated accordingly.

The generated files in CVS will also need to be regenerated on commit.

-- 
Joseph S. Myers   http://www.srcf.ucam.org/~jsm28/gcc/
[EMAIL PROTECTED] (personal mail)
[EMAIL PROTECTED] (CodeSourcery mail)
[EMAIL PROTECTED] (Bugzilla assignments and CCs)


increase in code size with gcc3.2

2005-06-06 Thread Milind Katikar
Hello,

I was using gcc 2.9 (host - i386-pc-cygwin, target -
sparclet-aout). Recently I have started using gcc 3.2
(same host and target), primarily to get the benefit of
the size-reduction optimizations in gcc. However, I
observed an increase in size for many applications when
compiled with gcc 3.2. All switches passed to gcc when
compiling with 2.9 and 3.2 are the same.
Below are some observations from analyzing the assembly
output when -Os is passed to both 2.9 and 3.2:
1) There is always a ‘nop’ instruction before ‘ret’
instruction in 3.2 output.
e.g.

L72:
nop
ret

2) CSE handling in 2.9 and 3.2 is different. It is
more effective in 2.9.

e.g
In my program one of the functions uses one structure
variable heavily. In 2.9 the structure base address is
treated as a CSE:
sethi   %hi(_sInitBootConfig), %l4

while 3.2 does not.

3) Handling of ‘if’ statements is different in 3.2
than in 2.9. In 3.2 the code fragment which will be
executed when the predicate evaluates to true is located
somewhere else. This adds one jump instruction and
duplicates one delay-slot instruction (useful or ‘nop’).

e.g 
source statement:

if ( GS_TRUE == sInitBootConfig.u32PerformPOST)
fieldBoot_DoPOST();

code generated by 3.2:
sethi   %hi(_sInitBootConfig), %o0
or  %o0, %lo(_sInitBootConfig), %l0
ld  [%l0+188], %o0  
cmp %o0, 1
mov 0, %l4
mov 0, %l1
be  L42 

Is there any way to reduce the output size?

Your help will be very much appreciated.

Regards,

Milind






Re: Ada front-end depends on signed overflow

2005-06-06 Thread Richard Guenther
On 6/6/05, Segher Boessenkool <[EMAIL PROTECTED]> wrote:
> > Better use a union for the (final) conversion, i.e
> >
> > int conv(unsigned char *c)
> > {
> > unsigned int i;
> > union {
> > unsigned int u;
> > int i;
> > } u;
> >
> > u.u = 0;
> > for (i = 0; i < sizeof u; i++)
> >   u.u = (u.u << 8) + c[i];
> >
> > return u.i;
> > }
> 
> This is not portable, though; accessing a union member other than
> the member last stored into is unspecified behaviour (see J.1 and
> 6.2.6.1).
> 
> This is allowed (and defined behaviour) as a GCC extension, though.

I guess this invokes well-defined behavior on any sane implementation,
as otherwise treatment of a pointer to such a union would be different from
that of a pointer to a member of such a union.  I.e. you would get undefined
behavior once you read bytes from a file and access them as different
types.  Also, this technique is cited as a way to circumvent type-aliasing
issues, e.g. for doing bit-twiddling of floats on their integer
representation.

But I guess following the standard, you are right :(

Richard.


Re: Ada front-end depends on signed overflow

2005-06-06 Thread Robert Dewar

Paul Schlie wrote:


- So technically as such semantics are undefined, attempting to track
  and identify such ambiguities is helpful; however the compiler should
  always optimize based on the true semantics of the target, which is
  what the undefined semantics truly enable (as pretending a target's
  semantics are different than the optimization assumptions, or forcing
  post-fact run-time trapping semantics, are both useless and potentially
  worse, inefficient and/or erroneous otherwise).


The first part of this is contentious, but certainly arguable (what is
useful behavior). There is certainly no requirement that the semantics
should match those of the target, especially since that's ill-defined
anyway (for targets that have many different kinds of arithmetic
instructions).

The second part is wrong: it is clear that there are cases where
the quality of code can be improved by really taking advantage of the
undefinedness of integer overflow.
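A standard illustration of the point (my example, not one from this thread):

```c
/* Because signed overflow is undefined, a compiler may assume x + 1
   never wraps and fold this comparison to the constant 1 at compile
   time.  With -fwrapv it would have to keep the runtime test, since
   x == INT_MAX would make it false under wrapping arithmetic. */
int always_positive_successor(int x)
{
    return x + 1 > x;   /* foldable to 1 when overflow is undefined */
}
```

Loops with signed induction variables benefit the same way: the compiler may assume the counter never wraps, enabling stronger iteration-count analysis.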








Re: Tracking down gcc-4.0 performance regressions

2005-06-06 Thread Scott Robert Ladd
Daniel Kegel wrote:
> So, I'm looking around for other reports of performance
> regressions in gcc-4.0.  So far, the only other ones I've
> heard of are those reported in http://www.coyotegulch.com/reviews/gcc4/
> I'm tempted to have a student try reproducing and boiling down the POV-Ray
> performance regression first.  Has anyone else already done that?
> I'd hate to repeat work.

I've found a couple of other performance regressions on various
applications. Acovea is very good at narrowing the cause of regressions
to specific GCC options.

Should reports be made against 4.0, or against 4.1?

Also, I'm not certain how to classify "options that don't work as
expected" -- for example, -floop-optimize2, which is a pessimisim on
many tests with 4.0.0.

..Scott




Re: recommend use of gperf version 3

2005-06-06 Thread Bruno Haible
Joseph S. Myers wrote:
> If the required version of any tool is changed then the documentation of
> that version in install.texi needs to be updated accordingly.

Here is an updated patch.

> The generated files in CVS will also need to be regenerated on commit.

Yes. The one who commits it for me will have to do
  $ rm -f cp/cfns.h java/keyword.h
and rebuild and commit these files.


ChangeLog:
2005-06-06  Bruno Haible  <[EMAIL PROTECTED]>

* doc/install.texi: Mention requirement for gperf-3.0.1.

cp/ChangeLog:
2005-06-05  Bruno Haible  <[EMAIL PROTECTED]>

* Make-lang.in ($(srcdir)/cp/cfns.h): Use gperf option -m 10.
Bail out if gperf is too old.

java/ChangeLog:
2005-06-05  Bruno Haible  <[EMAIL PROTECTED]>

* Make-lang.in ($(srcdir)/java/keyword.h): Use gperf option -m 10.

*** gcc-4.0.0/gcc/cp/Make-lang.in.bak   Tue Jan 18 12:45:31 2005
--- gcc-4.0.0/gcc/cp/Make-lang.in   Sun Jun  5 17:26:28 2005
***
*** 97,104 
  
  # Special build rules.
  $(srcdir)/cp/cfns.h: $(srcdir)/cp/cfns.gperf
!   gperf -o -C -E -k '1-6,$$' -j1 -D -N 'libc_name_p' -L ANSI-C \
!   $(srcdir)/cp/cfns.gperf > $(srcdir)/cp/cfns.h
  
  gtype-cp.h gt-cp-call.h gt-cp-decl.h gt-cp-decl2.h : s-gtype; @true
  gt-cp-pt.h gt-cp-repo.h gt-cp-parser.h gt-cp-method.h : s-gtype; @true
--- 97,108 
  
  # Special build rules.
  $(srcdir)/cp/cfns.h: $(srcdir)/cp/cfns.gperf
!   gperf -o -C -E -N 'libc_name_p' -L ANSI-C -m 10 \
!   $(srcdir)/cp/cfns.gperf > c.h || { \
!   echo "Please update gperf from ftp://ftp.gnu.org/pub/gnu/gperf/" >&2; \
!   rm -f c.h; \
!   exit 1; } ; \
!   mv -f c.h $(srcdir)/cp/cfns.h
  
  gtype-cp.h gt-cp-call.h gt-cp-decl.h gt-cp-decl2.h : s-gtype; @true
  gt-cp-pt.h gt-cp-repo.h gt-cp-parser.h gt-cp-method.h : s-gtype; @true
*** gcc-4.0.0/gcc/java/Make-lang.in.bak Sat Mar 12 03:16:28 2005
--- gcc-4.0.0/gcc/java/Make-lang.in Sun Jun  5 17:24:01 2005
***
*** 87,93 
  
  $(srcdir)/java/keyword.h: $(srcdir)/java/keyword.gperf
(cd $(srcdir)/java || exit 1; \
!   gperf -L ANSI-C -C -F ', 0' -p -t -j1 -i 1 -g -o -N java_keyword -k1,4,$$ \
keyword.gperf > k.h || { \
echo "Please update gperf from ftp://ftp.gnu.org/pub/gnu/gperf/" >&2; \
rm -f k.h; \
--- 87,93 
  
  $(srcdir)/java/keyword.h: $(srcdir)/java/keyword.gperf
(cd $(srcdir)/java || exit 1; \
!   gperf -L ANSI-C -C -F ', 0' -p -t -N java_keyword -m 10 \
keyword.gperf > k.h || { \
echo "Please update gperf from ftp://ftp.gnu.org/pub/gnu/gperf/" >&2; \
rm -f k.h; \


*** gcc-4.0.0/gcc/doc/install.texi.bak  2005-04-20 15:41:31.0 +0200
--- gcc-4.0.0/gcc/doc/install.texi  2005-06-06 14:59:04.0 +0200
***
*** 332,338 
  
  Needed to regenerate @file{gcc.pot}.
  
! @item gperf version 2.7.2 (or later)
  
  Necessary when modifying @command{gperf} input files, e.g.@:
  @file{gcc/cp/cfns.gperf} to regenerate its associated header file, e.g.@:
--- 332,338 
  
  Needed to regenerate @file{gcc.pot}.
  
! @item gperf version 3.0.1 (or later)
  
  Necessary when modifying @command{gperf} input files, e.g.@:
  @file{gcc/cp/cfns.gperf} to regenerate its associated header file, e.g.@:



Re: increase in code size with gcc3.2

2005-06-06 Thread James A. Morrison

Milind Katikar <[EMAIL PROTECTED]> writes:

> Hello,
> 
> I was using gcc 2.9 (host - i386-pc-cygwin, target -
> sparclet-aout). Recently I have started using gcc 3.2
> (same host and target), primarily to get the benefit of
> the size-reduction optimizations in gcc. However, I
> observed an increase in size for many applications when
> compiled with gcc 3.2. All switches passed to gcc when
> compiling with 2.9 and 3.2 are the same.

 You didn't mention what those switches are.  Also, gcc 3.2 is no longer
maintained, so you should try GCC 3.4 or preferably GCC 4.0.


-- 
Thanks,
Jim

http://www.csclub.uwaterloo.ca/~ja2morri/
http://phython.blogspot.com
http://open.nit.ca/wiki/?page=jim


Re: Killing fixproto (possible target obsoletion)

2005-06-06 Thread Joel Sherrill <[EMAIL PROTECTED]>

E. Weddington wrote:

Nathanael Nerode wrote:


Propose to stop using fixproto immediately:

avr-*-*
 



I'm not even sure exactly what fixproto is supposed to do, but I 
*highly* doubt that it is needed for the AVR target. The AVR target is 
an embedded processor that uses its own C library, avr-libc:


So there aren't any "old system headers" around. The RTEMS target for 
the AVR *may* use newlib, but I don't know what the status of that is. Ralf 
or Joel would have to chime in on this aspect.
The AVR target no longer uses fixincludes anyway. If there are any 
problems with the headers, we should be able to fix them directly in avr-libc.


Ralf is on vacation for a few weeks.

avr-rtems is using newlib.  *-rtems uses newlib so I assume it would not
need anything adjusted.


HTH
Eric



--
Joel Sherrill, Ph.D. Director of Research & Development
[EMAIL PROTECTED] On-Line Applications Research
Ask me about RTEMS: a free RTOS  Huntsville AL 35805
   Support Available (256) 722-9985



Re: Proposed obsoletions

2005-06-06 Thread Mark Mitchell

Nathanael Nerode wrote:


I seem to remember asking about this some years ago, and finding out
that its existence was not documented anywhere public, which it still
isn't.  It's also odd that a VxWorks simulation environment is
sufficiently different from VxWorks that it needs a different configuration.


It's not a VxWorks simulation environment; it's a Wind River simulation 
environment.  It's an instruction set simulator, with some additional 
capabilities.



WindISS is a Wind River simulation environment (including C library),
and is still available; Wind River ships WindISS with some of its
development platforms.


OK.  :-)  Your job to keep it running.  Care to list yourself in
MAINTAINERS?


I think I personally would be a suboptimal choice, though better than 
nothing.  But, yes, someone or someones at CodeSourcery would definitely 
be a good choice.  Dan Jacobowitz and/or Nathan Sidwell and/or Phil 
Edwards would be good choices.


--
Mark Mitchell
CodeSourcery, LLC
[EMAIL PROTECTED]
(916) 791-8304


Re: Follow up on simulators, documentation, etc.

2005-06-06 Thread Mark Mitchell

Richard Sandiford wrote:

Thanks for the summary.  It sounds from your message, and particularly
the quote from RMS, that we should be accepting the patches unless we
have a particular reason not to trust MIPS to do what they said they'd
do.


I'm hesitant to color it too strongly, in that I had a strong opinion up 
front, and I really do think this ought to be up to you, but if that's 
the conclusion you draw, that's certainly fine.  You should certainly 
feel free to ask MIPS for more information, if you need that to help 
judge the contribution.


Thanks,

--
Mark Mitchell
CodeSourcery, LLC
[EMAIL PROTECTED]
(916) 791-8304


Re: Proposed obsoletions

2005-06-06 Thread Paul Koning
> "Nathanael" == Nathanael Nerode <[EMAIL PROTECTED]> writes:

 Nathanael> * pdp11-*-* (generic only) Useless generic.

I believe this one generates DEC (as opposed to BSD) calling
conventions, so I'd rather keep it around.  It also generates .s files
that can (modulo a few bugfixes I need to get in) be assembled by gas.

 paul




Re: Ada front-end depends on signed overflow

2005-06-06 Thread Paul Schlie
> From: Robert Dewar <[EMAIL PROTECTED]>
> Paul Schlie wrote:
> 
>> - So technically as such semantics are undefined, attempting to track
>>   and identify such ambiguities is helpful; however the compiler should
>>   always optimize based on the true semantics of the target, which is
>>   what the undefined semantics truly enable (as pretending a target's
>>   semantics are different than the optimization assumptions, or forcing
>>   post-fact run-time trapping semantics, are both useless and potentially
>>   worse, inefficient and/or erroneous otherwise).
> 
> The first part of this is contentious, but certainly arguable (what is
> useful behavior). There is certainly no requirement that the semantics
> should match those of the target, especially since that's ill-defined
> anyway (for targets that have many different kinds of arithmetic
> instructions).

- I don't mean to contest the standard which specifies the behavior is
  undefined (regardless of how useless I perceive that to be), but merely
  observe that in fact as most targets do implement 2's complement modulo
  2^N integer arithmetic, and given that overflow behavior is undefined,
  it makes no sense to presume otherwise (as such a behavior is both
  fully compliant and factually typical of most, if not nearly all, targets).

> The second part is wrong: it is clear that there are cases where
> the quality of code can be improved by really taking advantage of the
> undefinedness of integer overflow.

- As above; but to whom is it useful to compute an undefined result
  more efficiently, especially if the premise of an optimization is not
  factually consistent with the target's behavior (which will surely
  result in an incorrectly predicted, therefore likely "computationally
  ambiguous/useless" behavior)?

  Similar arguments have been given in support of an undefined order of
  evaluation; which is absurd, as the specification of a semantic order
  of evaluation only constrains the evaluation of expressions which would
  otherwise be ambiguous, as expressions which are insensitive to their
  order of evaluation may always be evaluated in any order regardless of
  a specified semantic order of evaluation and yield the same result; so
  in effect, defining an order of evaluation only disambiguates expression
  evaluation, and does not constrain the optimization of otherwise
  unambiguous expressions.
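The distinction can be made concrete (my own example, not from the mail):

```c
#include <assert.h>

/* The order in which the operands of '-' (or function arguments) are
   evaluated is unspecified in C.  An expression with side effects in
   both operands is therefore ambiguous, while a side-effect-free
   expression gives the same result under every evaluation order. */
static int counter = 0;

static int next(void) { return ++counter; }

/* Ambiguous: yields 1 or -1 depending on which call runs first. */
int order_sensitive(void) { return next() - next(); }

/* Unambiguous: any evaluation order produces the same sum, so a
   defined order would not constrain its optimization at all. */
int order_insensitive(int a, int b) { return a + b; }
```

Defining an evaluation order would pin down `order_sensitive` without affecting how `order_insensitive` may be compiled, which is the argument being made above.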
 




Re: Proposed obsoletions

2005-06-06 Thread Jeffrey A Law
On Sun, 2005-06-05 at 12:41 -0400, Nathanael Nerode wrote:

> * hppa1.1-*-bsd*
I'm 99.9% sure this can go -- in fact, I just recently found out that
the previous single largest installation of PA BSD boxes recently shut
off its last PA.

jeff





Making GCC faster

2005-06-06 Thread Sam Lauber
There has been a lot of work recently on making GCC output faster code.  But
GCC itself isn't very fast.  On my slow 750MHz Linux box (whose PIII is now
R.I.P.), it took a whole night to compile 3.4.3.  On my fast iBook G4 laptop,
compiling just one source file in Perl made me wait long enough to start
doing ``tick-tock tick-tock tick-tock ...''.

Samuel Lauber

-- 
___
Surf the Web in a faster, safer and easier way:
Download Opera 8 at http://www.opera.com

Powered by Outblaze


Re: Follow up on simulators, documentation, etc.

2005-06-06 Thread Eric Christopher
On Mon, 2005-06-06 at 11:05 +0100, Richard Sandiford wrote:
> Thanks for the summary.  It sounds from your message, and particularly
> the quote from RMS, that we should be accepting the patches unless we
> have a particular reason not to trust MIPS to do what they said they'd
> do.  I certainly have no reason not to trust MIPS, so I guess that
> means the patches can go in once ready.  Eric, do you agree?

Provided we get a commitment for sim support and binutils support for
the instructions exists I'm OK with them going when they're ready.

-eric



Will Apple still support GCC development?

2005-06-06 Thread Samuel Smythe
It is well-known that Apple has been a significant provider of GCC 
enhancements. But it is also probably now well-known that they have opted to 
drop the PPC architecture in favor of an x86-based architecture. Will Apple 
continue to contribute to the PPC-related componentry of GCC, or will such 
contributions be phased out as the transition is made to the x86-based systems? 
In turn, will Apple be providing more x86-related contributions to GCC?

   Much obliged,
 Sam Smythe

---
For he who is a man in heart will be a man in soul.
- Sam Smythe, 2004

_
Hate Junk Email?  Ebook helps you stop it - http://BlockJunkEmail.com


Re: Will Apple still support GCC development?

2005-06-06 Thread Scott Robert Ladd
Samuel Smythe wrote:
> It is well-known that Apple has been a significant provider of GCC
> enhancements. But it is also probably now well-known that they have
> opted to drop the PPC architecture in favor of an x86-based
> architecture. Will Apple continue to contribute to the PPC-related
> componentry of GCC, or will such contributions be phased out as the
> transition is made to the x86-based systems? In turn, will Apple be
> providing more x86-related contributions to GCC?

A better question might be: Has Intel provided Apple with an OS X
version of their compiler? If so (and I think it very likely), Apple may
have little incentive for supporting GCC, given how well Intel's
compilers perform.

..Scott



Re: Will Apple still support GCC development?

2005-06-06 Thread Steven Bosscher
On Jun 06, 2005 09:26 PM, Scott Robert Ladd <[EMAIL PROTECTED]> wrote:

> Samuel Smythe wrote:
> > It is well-known that Apple has been a significant provider of GCC
> > enhancements. But it is also probably now well-known that they have
> > opted to drop the PPC architecture in favor of an x86-based
> > architecture. Will Apple continue to contribute to the PPC-related
> > componentry of GCC, or will such contributions be phased out as the
> > transition is made to the x86-based systems? In turn, will Apple be
> > providing more x86-related contributions to GCC?
> 
> A better question might be: Has Intel provided Apple with an OS X
> version of their compiler? If so (and I think it very likely), Apple may
> have little incentive for supporting GCC, given how well Intel's
> compilers perform.

Heh, it will be interesting to see what they'll do, but it would be pretty
surprising to see them drop GCC right now, unless they have an Ace up their
sleeve (i.e. a non-GCC ObjC compiler).  But it is not like the end of the
world if Apple does not support GCC any longer, and if they will, well, I
guess it _could_ be good for most of us, no? ;-)

Gr.
Steven



Re: Will Apple still support GCC development?

2005-06-06 Thread Joe Buck
On Mon, Jun 06, 2005 at 12:17:24PM -0700, Samuel Smythe wrote:
> It is well-known that Apple has been a significant provider of GCC
> enhancements. But it is also probably now well-known that they have
> opted to drop the PPC architecture in favor of an x86-based
> architecture. Will Apple continue to contribute to the PPC-related
> componentry of GCC, or will such contributions be phased out as the
> transition is made to the x86-based systems? In turn, will Apple be
> providing more x86-related contributions to GCC?

I don't think anyone on this list (even the Apple employees) has a
useful answer to that that they could give you right now.  I hope
we don't have to have a long, drawn out discussion where everyone
speculates as to what might happen.




gccbug submissions seem to be silently ignored

2005-06-06 Thread Rainer Orth
I've recently sent a couple of gcc bug reports using gccbug.  The latest
one was

  Subject: All libjava execution tests fail on IRIX 6
  Date: Mon, 6 Jun 2005 19:34:48 GMT

Unfortunately, the submissions seem to be silently ignored: I neither got
the usual confirmation and info on the assigned bug id nor any hint that
the service has been disabled/discontinued.

Could someone please check what's going on there?

Thanks.
Rainer

-
Rainer Orth, Faculty of Technology, Bielefeld University


Re: Will Apple still support GCC development?

2005-06-06 Thread Alan Lehotsky
FYI for the application my company is developing (integer and bit-field 
intensive with very little floating point),
we have found gcc to be 10-30% FASTER than icc8.0.

We were told that this was partially because icc doesn't optimize unsigned 
expressions very well.
(I'm dubious that this is the cause of the slowdown with the Intel compiler, 
but I have no other data besides the 9 tests (customer
code, so it's measuring what we sell) that show gcc beats icc.)


-Original Message-
From: Scott Robert Ladd <[EMAIL PROTECTED]>
Sent: Jun 6, 2005 3:26 PM
To: gcc@gcc.gnu.org
Subject: Re: Will Apple still support GCC development?

Samuel Smythe wrote:
> It is well-known that Apple has been a significant provider of GCC
> enhancements. But it is also probably now well-known that they have
> opted to drop the PPC architecture in favor of an x86-based
> architecture. Will Apple continue to contribute to the PPC-related
> componentry of GCC, or will such contributions be phased out as the
> transition is made to the x86-based systems? In turn, will Apple be
> providing more x86-related contributions to GCC?

A better question might be: Has Intel provided Apple with an OS X
version of their compiler? If so (and I think it very likely), Apple may
have little incentive for supporting GCC, given how well Intel's
compilers perform.

..Scott




Re: gccbug submissions seem to be silently ignored

2005-06-06 Thread Daniel Berlin
On Mon, 2005-06-06 at 21:45 +0200, Rainer Orth wrote:
> I've recently sent a couple of gcc bug reports using gccbug.  The latest
> one was
> 
>   Subject: All libjava execution tests fail on IRIX 6
>   Date: Mon, 6 Jun 2005 19:34:48 GMT
> 
> Unfortunately, the submissions seem to be silently ignored: I neither got
> the usual confirmation and info on the assigned bug id nor any hint that
> the service has been disabled/discontinued.
> 
> Could someone please check what's going on there?

Somehow the perl code got screwed up.
Try now.
> 
> Thanks.
>   Rainer
> 
> -
> Rainer Orth, Faculty of Technology, Bielefeld University



Re: gccbug submissions seem to be silently ignored

2005-06-06 Thread Rainer Orth
Daniel Berlin writes:

> Somehow the perl code got screwed up
> Try now

Works like a charm, thanks a lot.

Rainer


Re: Will Apple still support GCC development?

2005-06-06 Thread Marc Espie
In article <[EMAIL PROTECTED]> you write:
>Samuel Smythe wrote:
>> It is well-known that Apple has been a significant provider of GCC
>> enhancements. But it is also probably now well-known that they have
>> opted to drop the PPC architecture in favor of an x86-based
>> architecture. Will Apple continue to contribute to the PPC-related
>> componentry of GCC, or will such contributions be phased out as the
>> transition is made to the x86-based systems? In turn, will Apple be
>> providing more x86-related contributions to GCC?
>
>A better question might be: Has Intel provided Apple with an OS X
>version of their compiler? If so (and I think it very likely), Apple may
>have little incentive for supporting GCC, given how well Intel's
>compilers perform.

Oh sure, and Intel has an Obj-C++ compiler up their sleeve... right.

Speculations, speculations. Wait and see...


ld: common symbols not allowed with MH_DYLIB output format with the -multi_module option

2005-06-06 Thread Mathieu Malaterre

Hello,

   I have a question about some valid C code. I am trying to compile the 
following code on MacOSX (*). I don't understand what the problem is. 
Could someone please explain to me what is going on? Since I declare the 
variable with extern, I should not need to pass -fno-common, right?


Thanks for your help
Mathieu

foo.h:
extern int bar[];

foo.c:
int bar[4 * 256];

And compile lines are:
$ gcc -o foo.o -Wall -W -fPIC -c foo.c
$ gcc -dynamiclib   -o libfoo.dylib foo.o
ld: common symbols not allowed with MH_DYLIB output format with the 
-multi_module option

foo.o definition of common _bar (size 4096)
/usr/bin/libtool: internal link edit command failed

using gcc 3.3 20030304 (Apple Computer, Inc. build 1671)



Re: Will Apple still support GCC development?

2005-06-06 Thread Toon Moene

Samuel Smythe wrote:


It is well-known that Apple has been a significant provider of GCC enhancements.
But it is also probably now well-known that they have opted to drop
the PPC architecture in favor of an x86-based architecture.
Will Apple continue to contribute to the PPC-related componentry of GCC,
or will such contributions be phased out as the transition is made to the
x86-based systems?


We don't mind.  I bought an Apple G4 PowerBook because it offered me a 
relatively cheap way to get a 32-bit, big-endian machine.  The first 
thing I did after receiving it was wipe out OS X and install a real 
operating system, i.e., Debian.


A big-endian system is indispensable if you are a compiler writer, 
because little-endian hardware hides too many programmer errors.  At the 
previous GCC Summit, even the head of the Intel compiler group agreed 
with me on this and pointed out that the Itanium can be run in 
big-endian mode.



In turn, will Apple be providing more x86-related contributions to GCC?


Well, they could do all they might.  I'm just waiting for IBM coming 
forward with a Linux PowerPC64 laptop, so that I can continue to use big 
endian hardware.


--
Toon Moene - e-mail: [EMAIL PROTECTED] - phone: +31 346 214290
Saturnushof 14, 3738 XG  Maartensdijk, The Netherlands
A maintainer of GNU Fortran 95: http://gcc.gnu.org/fortran/
Looking for a job: Work from home or at a customer site; HPC, (GNU) 
Fortran & C


Re: Proposed obsoletions

2005-06-06 Thread Richard Henderson
On Sun, Jun 05, 2005 at 12:41:43PM -0400, Nathanael Nerode wrote:
> * alpha*-*-unicosmk*
>   No real update since 2002.  If rth, the lone alpha maintainer, is actually
>   maintaining it, I guess it should stay; it's not in bad shape.  But does
>   it really need fixproto?

This port was done by Roman Lechtchinsky <[EMAIL PROTECTED]>; I've never
touched such a beast, and have no idea what its development environment
is like.


r~


Re: Will Apple still support GCC development?

2005-06-06 Thread Mirza Hadzic


A big-endian system is indispensable if you are a compiler writer, 
because little-endian hardware hides too many programmer errors


Can you show example(s) where little endian hides errors? Just curious...

Intel already handed icc + performance libs to Apple, but in my experience icc 
doesn't create any faster code than gcc. Is there any *recent* benchmark that 
shows otherwise? I know that heavy math code is likely to perform better on icc, 
but this is rather uninteresting to a general audience. It would be interesting 
to see benchmarks of programs that usually run on most desktops/servers, like 
MySQL, Apache, C++ IOStreams-heavy programs, C++ STL-heavy programs. If there is 
no such benchmark, I will do something along these lines to prove my 
(gcc-is-not-slower-than-icc-for-general-use) point.

http://www.listicka.cz


Re: What is wrong with Bugzilla? [Was: Re: GCC and Floating-Point]

2005-06-06 Thread Laurent GUERBY
On Mon, 2005-05-30 at 23:10 -0400, Robert Dewar wrote:
> Toon Moene wrote:
> 
> >> But even this were fixed, many users would still complain.
> >> That's why I think that the Linux kernel should set the CPU
> >> in double-precision mode, like some other OS's (MS Windows,
> >> *BSD) -- but this is off-topic here.
> > 
> > It's not off-topic.  In fact, Jim Wilson argued this point here:
> > 
> > http://gcc.gnu.org/ml/gcc/2003-08/msg01282.html
> 
> There are good arguments on either side of this issue. If you set
> double precision mode, then you get more predictable precision
> (though range is still unpredictable), at the expense of not being
> able to make use of extended precision (there are many algorithms
> which can take very effective advantage of extended precision (e.g.
> you can use log/exp to compute x**y if you have extended precision
> but not otherwise).

Such algorithms usually require very detailed control of what's going
on at the machine level; given current high-level programming languages,
that means using assembler. Also, I don't remember exactly, but I believe
user code is able to change the default when needed, so knowledgeable
users should still be able to do what's necessary (set and restore the
state), albeit maybe with a loss of processing performance.

I also assume it's nearly impossible to get FP algorithms (e.g. those relying
on FP equality) working with the current (broken) compilers that
operate in extended precision, but it's much easier when the
FPU mode is set to round to 64 bits.

> Given that there are good arguments on both sides for what the
> default should be, I see no good argument for changing the
> default, which will cause even more confusion, since programs
> that work now will suddenly stop working.

Or that many programs that currently work on many OS
will start to work the same under Linux instead of
giving strange (and may be wrong) results.

Laurent



Re: ld: common symbols not allowed with MH_DYLIB output format with the -multi_module option

2005-06-06 Thread Sam Lauber
> Hello,
> 
> I have a question about a valid C code. I am trying to compile 
> the following code in MacOSX (*). I don't understand what the 
> problem is ? Could someone please explain me what is going on ? 
> Since I declare the variable with extern I should not need to pass 
> -fno-common, right ?
> 
> Thanks for your help
> Mathieu
> 
> foo.h:
> extern int bar[];
> 
> foo.c:
> int bar[4 * 256];
> 
> And compile lines are:
> $ gcc -o foo.o -Wall -W -fPIC -c foo.c
> $ gcc -dynamiclib   -o libfoo.dylib foo.o
> ld: common symbols not allowed with MH_DYLIB output format with the 
> -multi_module option
> foo.o definition of common _bar (size 4096)
> /usr/bin/libtool: internal link edit command failed
> 
> using gcc 3.3 20030304 (Apple Computer, Inc. build 1671)

This is not a problem with GCC.  In fact, it is not a problem at all.  It is a
misunderstanding of the linker.  When you run

  gcc -dynamiclib -o libfoo.dylib foo.o

it really does

  libtool -dynamic -o libfoo.dylib foo.o

which becomes

  ld -dylib -o libfoo.dylib foo.o

Since -multi_module is the default, the link really is

  ld -dylib -o libfoo.dylib foo.o -multi_module

With -multi_module, linker errors can name the specific object, in the
form `libfoo.dylib(foo.o)', when there's a link problem.  

The error happens because

 a) an uninitialized global definition is emitted as a `common symbol'
 b) in multiple-module mode, common symbols are not allowed
 c) if they were allowed, it would defeat the purpose of that option: better
diagnostics

To fix it, add -Wl,-single_module to the end of the GCC command line.  
However, note that subsequent linker errors will refer to `libfoo.dylib' instead
of `libfoo.dylib(foo.o)'.  
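
For completeness, here is a minimal sketch of the other workaround: giving
the definition an explicit initializer makes it a real (non-common)
definition, so the Darwin linker accepts it without -single_module.  (This
mirrors the behavior described above; the helper function is invented only
to make the snippet self-contained.)

```c
/* foo.h equivalent: a pure declaration, never a common symbol. */
extern int bar[];

/* foo.c equivalent: the initializer makes this a real definition,
   placed in the data section rather than emitted as a common symbol,
   so the Darwin linker no longer complains. */
int bar[4 * 256] = { 0 };

/* Invented helper, just to exercise the array. */
int first_element(void) { return bar[0]; }
```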

Samuel Lauber

-- 
___
Surf the Web in a faster, safer and easier way:
Download Opera 8 at http://www.opera.com

Powered by Outblaze



Re: ld: common symbols not allowed with MH_DYLIB output

2005-06-06 Thread Mathieu Malaterre

Sam,

   Since you seem very knowledgeable, why does the error disappear when I 
initialize the array?


int bar [ 4 * 256 ] = { 0,1,2, ... };

   I did not change any compiler option, nor any declaration. 
I still cannot see the difference between the two, since the 
declaration is exactly the same.  The only difference is the 
initialization.


Thanks again for your time,
Mathieu

On Jun 6, 2005, at 5:57 PM, Sam Lauber wrote:




Hello,

I have a question about a valid C code. I am trying to compile
the following code in MacOSX (*). I don't understand what the
problem is ? Could someone please explain me what is going on ?
Since I declare the variable with extern I should not need to pass
-fno-common, right ?

Thanks for your help
Mathieu

foo.h:
extern int bar[];

foo.c:
int bar[4 * 256];

And compile lines are:
$ gcc -o foo.o -Wall -W -fPIC -c foo.c
$ gcc -dynamiclib   -o libfoo.dylib foo.o
ld: common symbols not allowed with MH_DYLIB output format with the
-multi_module option
foo.o definition of common _bar (size 4096)
/usr/bin/libtool: internal link edit command failed

using gcc 3.3 20030304 (Apple Computer, Inc. build 1671)


This is not a problem with GCC.  In fact, it is not a problem at all.  
It is a

misunderstanding of the linker.  When you run

  gcc -dynamiclib -o libfoo.dylib foo.o

it really does

  libtool -dynamic -o libfoo.dylib foo.o

which becomes

  ld -dylib -o libfoo.dylib foo.o

However, -multi_module is the default, so it really is

  ld -dylib -o libfoo.dylib foo.o -multi_module

-multi_module is enabled by default, so you can get errors from the 
linker of

the form `libfoo.dylib(foo.o)' when there's a link problem.

The error happens because

 a) an `extern' variable is called a `common variable'
 b) in multipule module mode, common variables are not allowed
 c) if they were allowed, it would defeat the purpose of that option: 
better

diagnostics

To fix it, add -Wl,-single_module to the end of the GCC command line.
However, note that subsequent linker errors will refer to 
`libfoo.dylib' instead

of `libfoo.dylib(foo.o)'.

Samuel Lauber

--
___
Surf the Web in a faster, safer and easier way:
Download Opera 8 at http://www.opera.com

Powered by Outblaze





Re: Will Apple still support GCC development?

2005-06-06 Thread Scott Robert Ladd
Mirza Hadzic wrote:
>> Intel already handed icc + performance libs to Apple, but in my 
>> experience icc doesn't create any faster code than gcc. Is there
>> any *recent* benchmark that shows otherwise?

Define "recent".

>> I know that heavy math code is likely to perform better on icc but
>> this is rather uninteresting to general audience.

In general, the choice of compiler is unimportant for the performance of
most user-bound programs. I doubt KDE would run much faster if compiled
with ICC, for example.

However, for codecs, image processing, and other math-intensive
operations, Intel generally produces faster code, though
not always. This isn't just a matter of floating-point generation --
Intel's vectorizer gives their compiler a distinct advantage over GCC,
although I expect GCC to catch up in this regard.

Rather than speculate about Apple, it would be better to find more
funding for GCC development. I know that some people are working on this.

..Scott



Re: Ada front-end depends on signed overflow

2005-06-06 Thread Robert Dewar

Paul Schlie wrote:


  Similar arguments have been given in support of an undefined order of
  evaluation; which is absurd, as the specification of a semantic order
  of evaluation only constrains the evaluation of expressions which would
  otherwise be ambiguous, as expressions which are insensitive to their
  order of evaluation may always be evaluated in any order regardless of
  a specified semantic order of evaluation and yield the same result; so
  in effect, defining an order of evaluation only disambiguates expression
  evaluation, and does not constrain the optimization of otherwise
  unambiguous expressions.


I think perhaps you have not really looked at the issues that are raised
in optimizing compilers. Let me address this misconception first, similar
considerations apply to the overflow case.

The point is that at compile time you cannot tell what expressions are
"ambiguous". I use quotes here since of course there is no ambiguity
involved:

  suppose you write  (a * f(b)) * (c * g(d))

where f(b) modifies c and g(d) modifies a. You would call this ambiguous,
but in fact the semantics is undefined, so it is not ambiguous at all:
it has quite unambiguous semantics, namely undefined. The notion of
non-deterministic behavior is of course quite familiar, and what
undefined means here is that the behavior is non-deterministic from
among the set of all possible behaviors.

Now you seem to suggest that the optimizer should simply avoid
"optimizing" in such cases (where it matters). But the whole point
is that of course at compile time in general you can't tell whether
f(b) and g(d) have (horrible) side effects of modifying global
variables a and c. If we don't allow undefined behavior, the optimizer
would have to assume the worst, and refrain from optimizing the above
expression just in case some idiot had written side effects. The decision
of the standards committee is to make the side effect case undefined
PRECISELY so that the optimizer can go ahead and choose the optimal
order of evaluation without worrying about the side effect case.
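
The kind of expression being discussed can be sketched as follows (the
functions f, g and the driver are invented for illustration; the undefined
form itself is deliberately not evaluated here, only an explicitly ordered
version of it):

```c
/* f and g have side effects on the globals a and c, so the value of
   (a * f(1)) * (c * g(1)) depends on the operand evaluation order,
   which C deliberately leaves open. */
static int a = 2, c = 3;

static int f(int b) { c = 10; return b; }   /* modifies global c */
static int g(int d) { a = 10; return d; }   /* modifies global a */

/* Forcing an explicit left-to-right order (as Java mandates) makes
   the result deterministic. */
int left_to_right(void)
{
    a = 2; c = 3;
    int left  = a * f(1);   /* reads a == 2, then f sets c = 10 */
    int right = c * g(1);   /* reads c == 10 */
    return left * right;    /* 2 * 10 == 20, always */
}
```

Evaluating the two subexpressions in the other order would read c == 3
instead, which is exactly the non-determinism the standard permits.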

Going back to the overflow case, consider:

   a = (some expression the compiler can tell is positive)
   b = (some expression the compiler can tell is positive)
   c = a + b;
   d = c / 8;

Here the compiler can do a right shift for the last statement. This
would of course be wrong in the general case (assuming c and d are int),
since the right shift does not work right if c is negative. But the
compiler is allowed to assume that a+b does not overflow and hence
the result is positive, and hence the shift is fine.

So you see that guaranteeing two's-complement wrap can hurt the quality
of the code in a case like this.
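
The example can be written out concretely; this is a hand sketch of the
argument (function name invented), not GCC's actual transformation:

```c
/* If the compiler may assume a + b does not overflow, and it can prove
   a and b are nonnegative, then c is provably nonnegative and c / 8
   can compile to a plain arithmetic right shift (c >> 3).  Under
   guaranteed wraparound, a + b could be negative, and signed division
   rounds toward zero, so extra correction code would be required. */
int div8_of_sum(int a, int b)
{
    int c = a + b;   /* assumed not to overflow */
    return c / 8;    /* may become c >> 3 when c is provably >= 0 */
}
```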

Now it is legitimate to argue about how much quality is hurt, and
whether the resulting non-determinism is worth the efficiency hit.
But to conduct this argument, you have to start off with an understanding
that there really is a trade-off, and it is not clear what the decision
should be (C makes one decision, Java another, and Ada a third). The
three languages also differ in their handling of a+b:

C says that this is undefined if there are side effects that cause what you
call ambiguity.

Java specifies left to right evaluation (evaluate a before b always).

Ada specifies that the result is non-deterministic, you either evaluate
a before b or b before a (not some mixture of the two), and there may
be (at most) two possible results.



Re: Will Apple still support GCC development?

2005-06-06 Thread Robert Dewar

Scott Robert Ladd wrote:


A better question might be: Has Intel provided Apple with an OS X
version of their compiler? If so (and I think it very likely), Apple may
have little incentive for supporting GCC, given how well Intel's
compilers perform.


Well that's probably jumping to a conclusion without sufficient data,
after all there are scads of people choosing to use gcc over the Intel
compilers (and I am talking about commercial situations, where cost
etc is not the issue).

Presumably Apple will take a close look at this, and indeed may provide
some useful input on the issue.



Re: Will Apple still support GCC development?

2005-06-06 Thread Robert Dewar

Toon Moene wrote:


 The first
thing I did after receiving it is wiping out OS X and installing a real 
operating system, i.e., Debian.


Is it really necessary to post flame bait like this?  Hopefully people
will ignore it.


A big endian system is indispensible if you are a compiler writer, 
because little endian hardware hides too many programmer errors; At the 
previous GCC Summit even the head of the Intel compiler group agreed 
with me on this and pointed out that the Itanium can be run in big 
endian mode.


In general I think you need to test both cases, since for sure quite
a bit of stuff does depend on endianness.

Well, they could do all they might.  I'm just waiting for IBM coming 
forward with a Linux PowerPC64 laptop, so that I can continue to use big 
endian hardware.


Suggestion, don't hold your breath!







Re: What is wrong with Bugzilla? [Was: Re: GCC and Floating-Point]

2005-06-06 Thread Robert Dewar

Laurent GUERBY wrote:


Such algorithms usually require very detailed control of what's going
on at the machine level; given current high-level programming languages,
that means using assembler.


No, that's not true, you might want to look at some of Jim Demmel's
work in this area.



Or that many programs that currently work on many OS
will start to work the same under Linux instead of
giving strange (and may be wrong) results.


But many programs that work fine on the x86 now will start
breaking.


Laurent





Re: Will Apple still support GCC development?

2005-06-06 Thread Steven Bosscher
On Tuesday 07 June 2005 01:13, Robert Dewar wrote:
> > Well, they could do all they might.  I'm just waiting for IBM coming
> > forward with a Linux PowerPC64 laptop, so that I can continue to use big
> > endian hardware.
>
> Suggestion, don't hold your breath!

He could try and join the hack-the-xbox effort :-)

Gr.
Steven



Re: Ada front-end depends on signed overflow

2005-06-06 Thread Robert Dewar

Eric Botcazou wrote:

Once again, have you actually examined how awful the code we
generate now is?



Yes, I have.  Indeed not pretty, but suppose that we managed to cut the 
overhead in half, would that make -gnato really more attractive?


Yes, it would definitely make the difference, given the figures
we have seen.


From that, we have 2 alternatives: synchronous or asynchronous exceptions.
The current implementation is synchronous, as we explicitly raise exceptions 
in the code by calling a routine.  I guess you're pushing for asynchronous 
exceptions since this is probably the only efficient approach, i.e. we rely 
on the hardware to trap and try to recover from that.


Definitely not; integer overflow generates async traps on very few
architectures. Certainly synchronous traps can be generated (e.g.
with `into' on the x86).





Value range propagation pass in 4.0.1 RC1 (or not)

2005-06-06 Thread nkavv

Hi there

does the 4.0.1 RC1 include the value range propagation (VRP) ssa-based pass
developed by Diego Novillo?

If not, what is the VRP status in CVS for the C language?  Is it basically
working?

thanks in advance

Nikolaos Kavvadias


VRP in release version of GCC

2005-06-06 Thread nkavv

If you visit the following:
http://gcc.gnu.org/gcc-4.0/changes.html

a reference is found to value range propagation pass. However, $GCCHOME/gcc
directory doesn't contain the required files (e.g. tree-vrp.c).

Is this an addition for a scheduled (pre)release, or can I just not find it in
the released gcc-4.0.0?

thanks

Nikolaos Kavvadias



Re: VRP in release version of GCC

2005-06-06 Thread Steven Bosscher
On Tuesday 07 June 2005 01:44, [EMAIL PROTECTED] wrote:
> If you visit the following:
> http://gcc.gnu.org/gcc-4.0/changes.html
>
> a reference is found to value range propagation pass. However, $GCCHOME/gcc
> directory doesn't contain the required files (e.g. tree-vrp.c).
>
> Is this an addition for a scheduled (pre)release or i just can't find it in
> the released gcc-4.0.0 ??

The VRP pass is inside tree-ssa-dom.c for GCC 4.0.
GCC 4.1 has a much more powerful VRP pass, which is not related
to the DOM pass.

Gr.
Steven


Will Apple still support signed overflow?

2005-06-06 Thread Daniel Kegel

I don't know about everybody else, but the
subject lines are starting to run together for me :-)


Re: Will Apple still support signed overflow?

2005-06-06 Thread E. Weddington

Daniel Kegel wrote:


I don't know about everybody else, but the
subject lines are starting to run together for me :-)

Agreed, but will they also support what is wrong with Bugzilla? 
Or was that GCC and floating point?


Eric
LMAO


Re: ld: common symbols not allowed with MH_DYLIB output

2005-06-06 Thread Sam Lauber
> int bar [ 4 * 256 ] = { 0,1,2, ... };
> 
> I did not changed nor any compiler option, neither any 
> declaration. I still cannot see the difference in between those 
> two, since the declaration is exactly the same. The only difference 
> being a default initialization.

There is a more subtle difference at work here.  According to ANSI:

  ``An external declaration for an object is a definition if it has an 
initializer.  
*An external object declaration that does not have an initializer, and does not
contain the extern specifier, is a tentative definition.*  If no definition for 
the 
object appears in the translation unit, its tentative definitions become a 
single definition with initializer 0.''

The highlighted sentence is very subtle, but it is the entire difference between 
an executable's `data' and `bss' sections.  On Darwin, a ``tentative definition''  
is emitted as a ``common symbol''.  _When you add the initializer, you make the
definition non-tentative_.  Got it? (If not, you should reread this paragraph.)

The moral of the story is: always read the fine print.  
(Translation: 
keep track of subtleties when doing work.)
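
The rule can be sketched in a few lines (the helper function is invented
for illustration):

```c
/* x has only tentative definitions, which collapse into a single
   definition initialized to 0 (and may be emitted as a common symbol
   on Darwin); y has an initializer and is a real definition at once. */
int x;        /* tentative definition */
int x;        /* repeating a tentative definition is legal */
int y = 5;    /* definition with initializer: ordinary data symbol */

int sum(void) { return x + y; }   /* x is 0, so this yields 5 */
```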

Samuel Lauber

-- 
___
Surf the Web in a faster, safer and easier way:
Download Opera 8 at http://www.opera.com

Powered by Outblaze


Re: Will Apple still support GCC development?

2005-06-06 Thread Sam Lauber
> >> Intel already handed icc + performance libs to Apple, but from my 
> >> experience icc doesn't create any faster code than gcc. Is there
> >> any *recent* benchmark that shows otherwise?
> 
> Define "recent".
> 
> >> I know that heavy math code is likely to perform better on icc but
> >> this is rather uninteresting to general audience.
> 
> In general, the choice of compiler is unimportant for the performance of
> most user-bound programs. I doubt KDE would run much faster if compiled
> with ICC, for example.
> 
> However, for codecs, image processing, and other math-intensive
> operations, Intel generally produces faster code, though
> not always. This isn't just a matter of floating-point generation --
> Intel's vectorizer gives their compiler a distinct advantage over GCC,
> although I expect GCC to catch up in this regard.
> 
> Rather than speculate about Apple, it would be better to find more
> funding for GCC development. I know that some people are working on this.
I don't think you should worry about this.  Because:

 a) Apple just started the transition to PowerPC64.  (A politically correct
statement would be that Apple just stopped the transition to PowerPC32).  If
they were still working on that (large parts of OSX still aren't 64-bit), the
programmers at Apple would probably be very annoyed, considering that
pretty much everything (exceptions: Darwin, iTunes, AppleWorks, QuickTime) 
was written for the PowerPC.  
 b) It's unlikely that _every_ Mac software company would port all of their
programs to the x86, much less Apple themselves.  
 c) Some existing work (e.g. BootX) would have to be completely trashed or
rewritten from scratch.  
 d) Most of us don't want to waste our money on a new Mac just because
they're switching to x86.  

In summary, we shouldn't worry about this, because 75% of what Apple is
planning to do is a bad idea for them.  

Samuel Lauber



Re: Value range propagation pass in 4.0.1 RC1 (or not)

2005-06-06 Thread Diego Novillo
On Tue, Jun 07, 2005 at 02:38:26AM +0300, [EMAIL PROTECTED] wrote:

> does the 4.0.1 RC1 include the value range propagation (VRP) ssa-based pass
> developed by Diego Novillo?
> 
No.

> If not what is the VRP status at the CVS for the C language? Is it basically
> working?
> 
Essentially, yes.  It's enabled by default at -O2 and you can see
what the pass does with -fdump-tree-vrp.


Diego.


Re: Ada front-end depends on signed overflow

2005-06-06 Thread Paul Schlie
> From: Robert Dewar <[EMAIL PROTECTED]>
> Paul Schlie wrote:
> 
>>   Similar arguments have been given in support of an undefined order of
>>   evaluation; which is absurd, as the specification of a semantic order
>>   of evaluation only constrains the evaluation of expressions which would
>>   otherwise be ambiguous, as expressions which are insensitive to their
>>   order of evaluation may always be evaluated in any order regardless of
>>   a specified semantic order of evaluation and yield the same result; so
>>   in effect, defining an order of evaluation only disambiguates expression
>>   evaluation, and does not constrain the optimization of otherwise
>>   unambiguous expressions.
> 
> I think perhaps you have not really looked at the issues that are raised
> in optimizing compilers. Let me address this misconception first, similar
> considerations apply to the overflow case.
> 
> The point is that at compile time you cannot tell what expressions are
> "ambiguous". I use quotes here since of course there is no ambiguity
> involved:
> 
>suppose you write  (a * f(b)) * (c * g(d))
> 
> where f(b) modifies c and g(d) modifies a. You would call this ambiguous
> but in fact the semantics is undefined, so it is not ambiguous at all,
> it has quite unambiguous semantics, namely undefined. The notion of
> non-deterministic behavior is of course quite familiar, and what
> undefined means here is that the behavior is non-deterministic from
> among the set of all possible behaviors.

- Agreed, I would classify any expression as being ambiguous if any of
  its operand values (or side effects) were sensitive to the allowable
  order of evaluation of its remaining operands, but not otherwise.

> Now you seem to suggest that the optimizer should simply avoid
> "optimizing" in such cases (where it matters).

- No, I simply assert that if an expression is unambiguous (assuming
  my definition above for the sake of discussion), then the compiler
  may choose to order the evaluation in any way it desires, as long as
  it does not introduce such an ambiguity by doing so.

>  But the whole point
> is that of course at compile time in general you can't tell whether
> f(b) and g(d) have (horrible) side effects of modifying global
> variables a and c. If we don't allow undefined behavior, the optimizer
> would have to assume the worst, and refrain from optimizing the above
> expression just in case some idiot had written side effects. The decision
> of the standards committee is to make the side effect case undefined
> PRECISELY so that the optimizer can go ahead and choose the optimal
> order of evaluation without worrying about the side effect case.

- I fully agree that if a compiler does not maintain records of the
  program state which a function may alter or be dependent on, as
  would be required to determine whether any operand/side-effect
  interdependences may exist upon its subsequent use as an operand
  within an expression, then the compiler would have no choice
  but to maintain its relative order of evaluation as hypothetically
  specified, as it may otherwise introduce an ambiguity.

  Although I believe I appreciate the relative complexity this introduces
  to both the compiler, as well as the requirements imposed on "pre-compiled"
  libraries, etc., I don't believe that it justifies a language definition
  legitimizing the specification of otherwise non-deterministic programs.

> Going back to the overflow case, consider:
> 
> a = (some expression the compiler can tell is positive)
> b = (some expression the compiler can tell is positive)
> c = a + b;
> d = c / 8;
> 
> Here the compiler can do a right shift for the last statement. This
> would of course be wrong in the general case (assuming c and d are int),
> since the right shift does not work right if c is negative. But the
> compiler is allowed to assume that a+b does not overflow and hence
> the result is positive, and hence the shift is fine.
> 
> SO you see that guaranteeing twos complement wrap can hurt the quality
> of the code in a case like this.

- As you've specified the operations as distinct statements, I would argue
  that such an optimization would only be legitimate if it were known to
  produce the same result as if the statements were evaluated in sequence
  as specified by the standard (which of course would be target specific).
  Correspondingly, I would assert that:

  d = (a + b) / 8;

  would be ambiguous if the compiler were able to restructure evaluation
  of the expression in any way which may alter its effective result for a
  given target, as a program which has non-deterministic behavior doesn't
  seem very useful, regardless of whether or not it's "allowed" by a
  standard. (Although I concede that some optimizations have more benign
  worst-case effects than others, and may be reasonable if explicitly
  enabled (aka unsa

Re: VRP in release version of GCC

2005-06-06 Thread Jeffrey A Law
> The VRP pass is inside tree-ssa-dom.c for GCC 4.0.
Yup.  And it's very very weak.

> GCC 4.1 has a much more powerful VRP pass, which is not related
> to the DOM pass.
Exactly.  Hopefully we'll be able to remove the DOM version before
4.1 since the new tree-vrp.c is vastly better.

jeff




Failures in tests for obj-c++....

2005-06-06 Thread Christian Joensson
I get a few failures when trying to run the obj-c++ testsuite...

See, e.g., http://gcc.gnu.org/ml/gcc-testresults/2005-06/msg00375.html

This is what I see in the log file, and it's like this all over... :)

Setting LD_LIBRARY_PATH to
.:/usr/local/src/trunk/objdir32/sparc-linux/./libstdc++-v3/src/.libs:/usr/local/src/trunk/objdir32/gcc:/usr/local/src/trunk/objdir32/gcc:.:/usr/local/src/trunk/objdir32/sparc-linux/./libstdc++-v3/src/.libs:/usr/local/src/trunk/objdir32/gcc:/usr/local/src/trunk/objdir32/gcc:.:/usr/local/src/trunk/objdir32/sparc-linux/./libstdc++-v3/src/.libs:/usr/local/src/trunk/objdir32/gcc
./bitfield-4.exe: error while loading shared libraries: libobjc.so.1:
cannot open shared object file: No such file or directory
FAIL: obj-c++.dg/bitfield-4.mm execution test

Any ideas of what might be going wrong?

-- 
Cheers,

/ChJ


Gcc 3.0 and unnamed struct: incorrect offsets

2005-06-06 Thread Atul Talesara
Hello folks,
This might have already been addressed, but I
tried searching on GCC mailing list archives
http://gcc.gnu.org/lists.html#searchbox
and google before posting.

My test file:

$ cat gcc_prob.c
struct a {
    struct {
        struct {
            int x, y;
        };
        struct {
            int z;
        };
    };
} dummy;

int main(void)
{
    dummy.x = 20;
    dummy.z = 10;
}


My GCC version (cross-compiled to generate MIPS code):
bash-2.05b$ mips-elf-gcc --version
3.0

Disassembly of code:

bash-2.05b$ objdump  -D gcc_prob.o

-
00000000 <main>:
   0:   27bdffe8    addiu   sp,sp,-24
   4:   afbf0014    sw      ra,20(sp)
   8:   afbe0010    sw      s8,16(sp)
   c:   0c000000    jal     0 <main>
  10:   03a0f021    move    s8,sp
  14:   24020014    addiu   v0,zero,20
  18:   3c010000    lui     at,0x0
  1c:   ac220000    sw      v0,0(at)
  20:   2402000a    addiu   v0,zero,10
  24:   3c010000    lui     at,0x0
  28:   ac220000    sw      v0,0(at)
  2c:   03c0e821    move    sp,s8
  30:   8fbf0014    lw      ra,20(sp)
  34:   8fbe0010    lw      s8,16(sp)
  38:   03e00008    jr      ra
  3c:   27bd0018    addiu   sp,sp,24

-
Instructions at locations 0x14 - 0x1c correspond to:
dummy.x = 20;

And, instructions at locations 0x20 - 0x28 correspond to:
dummy.z = 10;

The puzzling part is that both stores happen at an offset of '0':
  1c:   ac220000    sw      v0,0(at)
... ... ...
  28:   ac220000    sw      v0,0(at)

I wanted to know if this is a bug; if yes, then I might hunt for a patch.
If not, can you please point out what's wrong, or explain this
not-so-obvious behaviour? Is it something specific to the MIPS arch?

BTW, with x86 GCC 3.2.2 20030222 (Red Hat Linux 3.2.2-5)
I do see correct offsets!

TIA.

Regards,
Atul T
http://the-shaolin.blogspot.com/
--
Computers are like air conditioners---
They stop working properly if you open WINDOWS!
--


Re: increase in code size with gcc3.2

2005-06-06 Thread Milind Katikar

I am sorry for another mail. I forgot to Cc

gcc@gcc.gnu.org

> You didn't mention what those switches are.  
I am using the following options:
1) Some -D options; these are source-specific defines.
2) Some -I options for specifying include files.
3) -Wall
4) -Os (also tried -O4)

> Also, gcc 3.2 is no longer
> maintained, so you should try GCC 3.4 or preferably
> GCC 4.0.

We have plans to move towards GCC 4.0; however,
development is currently using GCC 3.2, so it
would be best if some immediate solution for GCC 3.2
could be worked out. 

Currently, as a tools developer, I'm working towards
promoting a newer generation of GCC where GCC 2.91.5 is
being used. I have been able to port and test GCC 3.2
and convinced the development team to move to it.

In most places there is a gain in the generated code;
however, a few binaries bloat in size.
Unfortunately these binaries are part of a very
small project which is very, very memory starved. If
this is not worked out, then my attempts to push newer
versions of GCC would suffer a severe setback. 
Moreover, this target (SPARClet) has been deprecated
after GCC 3.2, so moving immediately to a higher version
would not be possible anyway.

I would highly appreciate it if you could give me any
pointers to make the size reduction in GCC 3.2
consistent. 

Thanks,

Milind




__ 
Discover Yahoo! 
Have fun online with music videos, cool games, IM and more. Check it out! 
http://discover.yahoo.com/online.html


Re: Making GCC faster

2005-06-06 Thread Karel Gardas

On Mon, 6 Jun 2005, Sam Lauber wrote:


There has been a lot of work recently on making GCC output faster code, but
GCC itself isn't very fast.  On my slow 750MHz Linux box (whose PIII is now
R.I.P.), it took a whole night to compile 3.4.3.


The memory of your box is probably too small; the CPU is IMHO OK. FYI: 
IIRC, I've been doing 3.4.x builds on my 1GHz PIII + 512MB RAM in around 
30 minutes (C/C++ only) and around 1.5 hours for a full build.


Cheers,
Karel
--
Karel Gardas  [EMAIL PROTECTED]
ObjectSecurity Ltd.   http://www.objectsecurity.com