Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Thorsten Glaser
Zack Weinberg dixit:

>This could be made substantially easier if libgcc moved to the top
>level.  You wanna help out with that?

What about crtstuff?

//mirabile



Ada test suite

2005-04-28 Thread Florian Weimer
Some time ago, someone posted a patch which provided beginnings of a
general-purpose Ada test suite infrastructure (in addition to the
current ACATS tests, which cannot be used for regression tests).  The
patch was not integrated, and I can't find it at the moment. 8-(

Does anybody know which patch I'm talking about?


Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Karel Gardas
On Wed, 27 Apr 2005, Daniel Berlin wrote:

> If you want a faster compiler, it's hard work.  It means not adding
> features because the design isn't a good one, *even if the user would
> still find it useful*. People aren't willing to do this.  It means lots
> and lots of profiling, and taking care of little inefficiencies.  All I
> ever see people suggest is magic bullets.

Daniel,

I cannot agree with that at all! At least for our C++ code, I've seen the
compiler speed up since GCC 3.2, which was the slowest compiler: 3.3
was faster because of better memory heuristics, 3.4 was faster probably
because of the better parser, and 4.0 is faster because of the work of many
developers here, to whom I would like to say thank you!

Consider that G++ is now at least about 50% faster (at least!) when
comparing 3.2 and 4.0.0, not to mention the great 25% speedup between
3.4.x and 4.0.0!

Conclusion: people are willing to investigate compiler slowness even as
they add new features to the compiler itself.

Thank you all for writing GCC!
Karel
--
Karel Gardas  [EMAIL PROTECTED]
ObjectSecurity Ltd.   http://www.objectsecurity.com



Re: Ada test suite

2005-04-28 Thread Arnaud Charlet
> Some time ago, someone posted a patch which provided beginnings of a
> general-purpose Ada test suite infrastructure (in addition to the
> current ACATS tests, which cannot be used for regression tests).  The

Note that this is technically incorrect: the ACATS infrastructure can
be used for regression tests, as long as they are using the few acats packages
to report success/failure. See the directory tests/gcc.

> patch was not integrated, and I can't find it at the moment. 8-(
> 
> Does anybody know which patch I'm talking about?

There is a GCC PR about it, so it should be fairly easy to find.
I have no knowledge of expect or the dejagnu framework, which is why I
haven't commented on the proposed patch; otherwise, the general idea is fine.

Arno


Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Andrew Haley
Paul Koning writes:
 > > "Andrew" == Andrew Pinski <[EMAIL PROTECTED]> writes:
 > 
 >  >> However, I can always tell when a GCC build has hit the libjava
 >  >> build -- that's when the *whole system* suddenly slows to a crawl.
 >  >> Maybe it comes from doing some processing on 5000 foo.o files all
 >  >> at once... :-(
 > 
 >  Andrew> But that is not GCC fault that binutils cannot handle that
 >  Andrew> load.
 > 
 > Perhaps not.  But perhaps there are workarounds that allow the gcc
 > build to do its job without using binutils in a way that stresses it
 > beyond its ability to cope.
 > 
 > Matt's time output suggests that massive pagefaulting is a big issue

I'm sure that's true.  The working set exceeds the amount of RAM in
the system, leading to near worst-case behaviour.

 > -- and it wouldn't surprise me if the libjava build procedure were a
 > major contributor there.

Yes.  This is a profile of the libgcj build.  The single biggest user
of CPU is the libtool shell script, which uses more than a quarter of
the total (non-kernel) CPU time.  However, it's important not to be
misled -- I'm sure the linker causes a huge amount of disk activity,
so it's not just CPU time that is important.

Having said that, I suspect that the single biggest improvement to the
libgcj build time would come from removing the libtool shell script
altogether, reducing its use, or making it faster.


CPU: CPU with timer interrupt, speed 0 MHz (estimated)
Profiling through timer interrupt
          TIMER:0|
  samples|      %|
------------------
  1770547 63.0596 no-vmlinux
   415708 14.8058 libc-2.3.4.so
   259889  9.2562 ltsh
   257355  9.1659 jc1
    22111  0.7875 cc1plus
    20260  0.7216 as
    19289  0.6870 ld-2.3.4.so
    10502  0.3740 make
     5921  0.2109 sed
     5163  0.1839 libbfd-2.15.92.0.2.so
     2855  0.1017 gcj
     2724  0.0970 cc1
     2218  0.0790 libz.so.1.2.1.2
     2154  0.0767 grep
     2019  0.0719 xterm
     1864  0.0664 ld

Andrew.


RE: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Dave Korn
Original Message
>From: Marcin Dalecki
>Sent: 28 April 2005 02:58

> On 2005-04-27, at 22:54, Karel Gardas wrote:
>> 
>> Total Physical Source Lines of Code (SLOC)= 2,456,727
>> Development Effort Estimate, Person-Years (Person-Months) = 725.95
>>  (8,711.36) (Basic COCOMO model, Person-Months = 2.4 * (KSLOC**1.05))
>> Schedule Estimate, Years (Months) = 6.55 (78.55)
>>  (Basic COCOMO model, Months = 2.5 * (person-months**0.38))
>> Estimated Average Number of Developers (Effort/Schedule)  = 110.90
>> Total Estimated Cost to Develop   = $ 98,065,527
>>  (average salary = $56,286/year, overhead = 2.40).
>> Please credit this data as "generated using 'SLOCCount' by David A.
>> Wheeler."
> 
> One question remains open: Who is Mr. Person?


  What makes you think it's not Ms. Person?  Chauvinist!

;)


cheers,
  DaveK
-- 
Can't think of a witty .sigline today



Re: Ada test suite

2005-04-28 Thread Florian Weimer
* Arnaud Charlet:

>> Some time ago, someone posted a patch which provided beginnings of a
>> general-purpose Ada test suite infrastructure (in addition to the
>> current ACATS tests, which cannot be used for regression tests).  The
>
> Note that this is technically incorrect: the ACATS infrastructure can
> be used for regression tests, as long as they are using the few acats packages
> to report success/failure. See the directory tests/gcc.

I thought that there were some reservations about changing the ACATS
test suite.

>> patch was not integrated, and I can't find it at the moment. 8-(
>> 
>> Does anybody know which patch I'm talking about?
>
> There is a GCC PR about it, so it should be fairly easy to find.

Yes, indeed, it's PR 18692.

> I have no knowledge of expect or the dejagnu framework, which is
> why I haven't commented on the proposed patch; otherwise, the general
> idea is fine.

So how can we make sure that this work is not lost?  Who would be in a
position to approve a patch?


Re: Ada test suite

2005-04-28 Thread Arnaud Charlet
> I thought that there were some reservations about changing the ACATS
> test suite.

I do not remember anything like that. Also, we're not talking about changing
the ACATS test suite, but rather expanding it.

> So how can we make sure that this work is not lost?  Who would be in a
> position to approve a patch?

I am sure the usual procedures apply here: ping on gcc-patches for instance.

Arno


Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Marcin Dalecki
On 2005-04-28, at 12:03, Dave Korn wrote:
Original Message
From: Marcin Dalecki
Sent: 28 April 2005 02:58

On 2005-04-27, at 22:54, Karel Gardas wrote:
Total Physical Source Lines of Code (SLOC)= 2,456,727
Development Effort Estimate, Person-Years (Person-Months) = 725.95
 (8,711.36) (Basic COCOMO model, Person-Months = 2.4 * (KSLOC**1.05))
Schedule Estimate, Years (Months) = 6.55 (78.55)
 (Basic COCOMO model, Months = 2.5 * (person-months**0.38))
Estimated Average Number of Developers (Effort/Schedule)  = 110.90
Total Estimated Cost to Develop   = $ 98,065,527
 (average salary = $56,286/year, overhead = 2.40).
Please credit this data as "generated using 'SLOCCount' by David A.
Wheeler."
One question remains open: Who is Mr. Person?

  What makes you think it's not Ms. Person?  Chauvinist!
Oh indeed... Yes I missed it: It's Lady Wheeler!


Re: Ada test suite

2005-04-28 Thread Laurent GUERBY
On Thu, 2005-04-28 at 09:45 +0200, Florian Weimer wrote:
> Some time ago, someone posted a patch which provided beginnings of a
> general-purpose Ada test suite infrastructure (in addition to the
> current ACATS tests, which cannot be used for regression tests).  The
> patch was not integrated, and I can't find it at the moment. 8-(
> 
> Does anybody know which patch I'm talking about?

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=18692
http://gcc.gnu.org/ml/gcc-patches/2004-11/msg01862.html

Plus this:

[Ada] Run ACATS tests through an expect script
http://gcc.gnu.org/ml/gcc-patches/2004-11/msg02484.html
http://gcc.gnu.org/ml/gcc-patches/2004-12/msg00166.html

If Arnaud doesn't feel knowledgeable enough to review/approve
dejagnu code, why don't we name Jim maintainer for this? 

That would at least avoid having an infrastructure patch stuck
for five months without review :).

Laurent

PS: I know nothing about dejagnu either.



[BENCHMARK] comparing GCC 3.4 and 4.0 on an AMD Athlon-XP 2500+

2005-04-28 Thread Rene Rebe
Hi all,
I have some preliminary benchmark results comparing 3.4(.3) with 4.0.0,
including some optimization option permutations.

  http://exactcode.de/rene/hidden/gcc-article/2005-gcc-4.0/stat2-rt.png
  http://exactcode.de/rene/hidden/gcc-article/2005-gcc-4.0/stat2-bt.png
rt = runtime
bt = buildtime
-lu == -funroll-loops
-uaat == -funit-at-a-time
-vect == -ftree-vectorize
-oft == -fomit-frame-pointer
-x ==  -funroll-loops -fpeel-loops -fpeel-loops -funswitch-loops
   -ftree-vectorize -ftracer -fomit-frame-pointer
-x-uaat == -x + -uaat
A short summary:
4.0 seems to be very good at modern C++ code, especially involving a lot 
of templates.

The speed of the code generated with -Os degraded by orders of magnitude for
C++ code (look at Botan and tramp3d).

4.0 seems to have problems catching up on C code where 3.4(.3) was
previously yielding good results - e.g. openssl, libmad and gnupg seem to
be a challenge for GCC.

Of course, all hand-written assembly was disabled in those benchmarks, to
review what GCC is able to generate from the generic C code ...

Feedback welcome - yours,
--
René Rebe - Rubensstr. 64 - 12157 Berlin (Europe / Germany)
http://www.exactcode.de/ | http://www.t2-project.org/
+49 (0)30  255 897 45


Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Richard Earnshaw
On Wed, 2005-04-27 at 20:57, Steven Bosscher wrote:
> On Wednesday 27 April 2005 17:45, Matt Thomas wrote:
> > > The features under discussion are new, they didn't exist before.
> >
> > And because they never existed before, their cost for older platforms
> > may not have been correctly assessed.
> 
> If someone had cared about them, it would have been noticed earlier.
> But since _nobody_ has complained before you, I guess we can conclude
> that by far the majority if GCC users are quite happy with the cost
> assesments that were made.

This is false.  I've been complaining (at various levels of volume) ever
since we switched to using the garbage collector.[1]

R.

[1] I don't think the Garbage Collector is the only source of slowdown. 
It was just the change that tipped the balance from not-very good to
insane.  Unfortunately, things have continued to go down-hill from
there.


Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Richard Earnshaw
On Wed, 2005-04-27 at 21:55, Andrew Pinski wrote:
> > However, I can always tell when a GCC build has hit the libjava build
> > -- that's when the *whole system* suddenly slows to a crawl.  Maybe
> > it comes from doing some processing on 5000 foo.o files all at
> > once... :-(
> 
> But that is not GCC fault that binutils cannot handle that load.
> 
> -- Pinski

It's not as simple as just saying "binutils should be able to cope with
any number of objects thrown at it".  Part of the problem is that 5000
object files exceeds the system limits of the host machine (eg command
line length, etc).

R.


Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Daniel Jacobowitz
On Thu, Apr 28, 2005 at 07:31:06AM +, Thorsten Glaser wrote:
> Zack Weinberg dixit:
> 
> >This could be made substantially easier if libgcc moved to the top
> >level.  You wanna help out with that?
> 
> What about crtstuff?

Yes, they should be moved at the same time; I consider them closer to
part of libgcc than to part of gcc proper.

-- 
Daniel Jacobowitz
CodeSourcery, LLC


Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Richard Earnshaw
On Thu, 2005-04-28 at 02:40, Tom Tromey wrote:
> > "Paul" == Paul Koning <[EMAIL PROTECTED]> writes:
> 
> Paul> Maybe.  Then again, maybe there are real problems here.  The ranlib
> Paul> one was already mentioned.  And I wonder if libjava really needs to
> Paul> bring the host to its knees, as it does.
> 
> Killing machines is only a secondary goal, if that's what you mean ;-)
> 
> The bad news is that libjava is only going to grow.
> 
> On the other hand, while I haven't measured it myself, I've heard that
> a lot of the time in the libjava build is spent in libtool (versus
> plain old ld).  Perhaps that can be alleviated somehow.
> 
Splitting libjava into multiple libraries would be a good start.

It would probably be a good thing for application start-up time too, when
using shared libs.

R.


Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Andrew Haley
Richard Earnshaw writes:
 > On Wed, 2005-04-27 at 21:55, Andrew Pinski wrote:
 > > > However, I can always tell when a GCC build has hit the libjava build
 > > > -- that's when the *whole system* suddenly slows to a crawl.  Maybe
 > > > it comes from doing some processing on 5000 foo.o files all at
 > > > once... :-(
 > > 
 > > But that is not GCC fault that binutils cannot handle that load.
 > > 
 > > -- Pinski
 > 
 > It's not as simple as just saying "binutils should be able to cope with
 > any number of objects thrown at it".  Part of the problem is that 5000
 > object files exceeds the system limits of the host machine (eg command
 > line length, etc).

If ld can't accept a list of files from a stream but is instead
limited by command line length, then that *is* the fault of ld.

Andrew.


Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Richard Earnshaw
On Thu, 2005-04-28 at 14:35, Andrew Haley wrote:
> Richard Earnshaw writes:
>  > On Wed, 2005-04-27 at 21:55, Andrew Pinski wrote:
>  > > > However, I can always tell when a GCC build has hit the libjava build
>  > > > -- that's when the *whole system* suddenly slows to a crawl.  Maybe
>  > > > it comes from doing some processing on 5000 foo.o files all at
>  > > > once... :-(
>  > > 
>  > > But that is not GCC fault that binutils cannot handle that load.
>  > > 
>  > > -- Pinski
>  > 
>  > It's not as simple as just saying "binutils should be able to cope with
>  > any number of objects thrown at it".  Part of the problem is that 5000
>  > object files exceeds the system limits of the host machine (eg command
>  > line length, etc).
> 
> If ld can't accept a list of files from a stream but is instead
> limited by command line length, then that *is* the fault of ld.

A certain proverb relating to bad workmen and tools springs to mind
here.  

I think we really need to consider whether requiring ld to support
linking of 5000 object files might mean poor library design.

R.


Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Andreas Schwab
Andrew Haley <[EMAIL PROTECTED]> writes:

> If ld can't accept a list of files from a stream but is instead
> limited by command line length, then that *is* the fault of ld.

You can always use a linker script.

Andreas.

-- 
Andreas Schwab, SuSE Labs, [EMAIL PROTECTED]
SuSE Linux Products GmbH, Maxfeldstraße 5, 90409 Nürnberg, Germany
Key fingerprint = 58CA 54C7 6D53 942B 1756  01D3 44D5 214B 8276 4ED5
"And now for something completely different."


Re: FW: GCC Cross Compiler for cygwin

2005-04-28 Thread E. Weddington
James E Wilson wrote:
> Amir Fuhrmann wrote:
>> ../gcc-3.4.3/configure --exec-prefix=/usr/local --program-prefix=ppc-
>> --with-stabs -with-cpu=603 --target=powerpc-eabi --with-gnu-as=ppc-as
>> --with-gnu-ld=ppc-ld --enable-languages=c,c++
>
> The suggestion to look at Dan Kegel's crosstool is a good one, but
> crosstool only handles cross compilers to linux, and hence isn't
> relevant here.

There have been patches to it for building on Cygwin, plus the 
occasional success story on Cygwin, IIRC. (Perhaps Dan can comment). 
IIRC, there are patches to crosstool for newlib too.

I don't know if the specific combination will work, but one could always 
try. At least it's sometimes a better starting point for building a lot 
of cross-toolchains.

Eric


Re: RFC: ms bitfields of aligned basetypes

2005-04-28 Thread Joern RENNECKE
A testcase to trigger the assert was:
typedef _Bool Tal16bool __attribute__ ((aligned (16)));
struct S49
{
 Tal16bool a:1;
};
and it turns out that the underlying problem is actually in the 
general-purpose
field layout code.  Both known_align and actual_align are calculated as
BIGGEST_ALIGNMENT if the offset of the field is zero.  However, the
correct alignment in this case is the alignment of the record, which may be
smaller or larger than BIGGEST_ALIGNMENT, depending on the
alignment of the fields seen so far.
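
As a tiny hand-written aside (my own illustration, not from the testcase
above): the record's alignment can sit on either side of BIGGEST_ALIGNMENT
even though the first field of each record is at offset zero, so a zero
offset by itself says nothing about the field's actual alignment:

  struct narrow { char a, b; };                              /* typically alignment 1 */
  struct wide   { char a; } __attribute__ ((aligned (32)));  /* alignment 32          */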



Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Lars Segerlund

 I have to agree with Richard's assessment; gcc is currently on the verge of
 being unusable in many instances.
 If you have a lot of software to build and have to do complete rebuilds it's
 painful. The binutils guys have a 3x speedup patch coming up, but every time
 there is a speedup it gets eaten up.

 gcc seems to be moving more and more data around and doing more and more
 temporary allocations of huge chunks of memory. Sometimes a concern pops up
 and some 5 to 10 percent is gained, only to be lost again.

 My opinion on the point is that even a 100% speedup of gcc would be very
 good, but not enough, since it has slowed down so much, mainly from memory
 usage. I used to do kernel compiles in an hour or two on a Pentium laptop,
 and now I can forget it, since its memory is soon exhausted and it goes into
 swap.

 I have never done any 'memory profiling' but I think it might be time to
 give it a shot; does anybody have any hints on how to go about something
 like this?

 It should be enough to have a look at the places where we're searching for
 information that we already have, or to make any kind of search more
 efficient if possible. Perhaps it's possible to graft some kind of indexing
 scheme onto the worst offenders, so that some O(n^2) can get down to O(1)
 or something.

 The main problem seems to be that almost any effort to point out a single
 scapegoat has not been very successful.

 / regards, Lars Segerlund.


On Thu, 28 Apr 2005 14:24:11 +0100
Richard Earnshaw <[EMAIL PROTECTED]> wrote:

> On Wed, 2005-04-27 at 20:57, Steven Bosscher wrote:
> > On Wednesday 27 April 2005 17:45, Matt Thomas wrote:
> > > > The features under discussion are new, they didn't exist before.
> > >
> > > And because they never existed before, their cost for older platforms
> > > may not have been correctly assessed.
> > 
> > If someone had cared about them, it would have been noticed earlier.
> > But since _nobody_ has complained before you, I guess we can conclude
> > that by far the majority if GCC users are quite happy with the cost
> > assesments that were made.
> 
> This is false.  I've been complaining (at various levels of volume) ever
> since we switched to using the garbage collector.[1]
> 
> R.
> 
> [1] I don't think the Garbage Collector is the only source of slowdown. 
> It was just the change that tipped the balance from not-very good to
> insane.  Unfortunately, things have continued to go down-hill from
> there.


gcc@gcc.gnu.org

2005-04-28 Thread Richard Guenther

This is sort of the "final" state I ended up with in trying to teach
the C frontend not to emit array-to-pointer decay as
  ADDR_EXPR (element-type*, array)
but as
  ADDR_EXPR (element-type*, ARRAY_REF (element-type, array, 0))
for both type correctness and for possible simplifications of
fold and the tree optimizers that will no longer have to handle
&a as &a[0] specially if trying to optimize ARRAY_REF operations.
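
As a tiny hand-written sketch (my illustration, not code from the patch), the
two representations correspond to the following C spellings, which have the
same value and type:

  int a[4];
  int *p = a;       /* implicit array-to-pointer decay            */
  int *q = &a[0];   /* explicit form the frontend would now emit  */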

While the patch to teach the C frontend to emit &a[0] is not
complicated (see the c-typeck.c (default_function_array_conversion)
patch chunk), there is a lot of fall-out in the frontend and in
the optimizers.

In the patch you will find intermixed (the fold-const.c parts) code
to fold more address calculations with &a[i], especially in
the case of char arrays, which get exercised quite a lot in
the C testsuite.

I also ran into problems in fold_indirect_ref_1 (see also the separate
post) and fold_stmt.

The gimplify.c chunk was applied to check if C no longer creates
those "invalid" trees.  Until other frontends are fixed, this
is not safe.


The patch was bootstrapped and tested on i686-pc-linux-gnu for
the C language with the only remaining regression being c99-init-4.c
(I didn't manage to find the place to fix).


I'll stop right here until I get encouragement and help from other
frontend maintainers to maybe fix all of the frontends (I know of
at least gfortran that needs to be fixed) to really have a benefit
of the patch.

Other suggestions?  For example, how to fix c99-init-4.c?


Thanks,
Richard.
2005-04-28  Richard Guenther  <[EMAIL PROTECTED]>

* builtins.c (fold_builtin_constant_p): Handle constant
strings of the form &str[0].
* c-format.c (check_format_arg): Handle &a[offset] as valid
format string.
* c-typeck.c (default_function_array_conversion): Emit
array-to-pointer decay as &a[0].
(build_unary_op): Emit &a[i] in its original form rather
than splitting it to a + i.
* expr.c (string_constant): Handle &"Foo"[0] as string
constant.
* tree-ssa-ccp.c (fold_stmt): Do not use results from fold
that are not TREE_CONSTANT.
* fold-const.c (extract_array_ref): Remove.
(try_move_mult_to_index): Handle folding of &a[i] OP x
with x being constant or array element size one.  Fix type
correctness.
(fold_binary): Dispatch to try_move_mult_to_index in more
cases.  Simplify folding of comparisons of ARRAY_REFs.
(fold_indirect_ref_1): Avoid removing NOP_EXPRs with type
qualifiers like const.

* gimplify.c (check_pointer_types_r): ADDR_EXPR no longer
can be of type array element for array operands.
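
As a concrete illustration of the builtins.c and c-format.c entries above,
here is a small hand-written snippet (mine, not part of the patch) using the
&str[0] and &a[offset] forms those hunks are meant to keep recognizing:

  #include <stdio.h>

  static const char fmt[] = "value: %d\n";

  void demo (int x)
  {
    /* A string constant spelled as &"..."[0] should still be treated as
       constant by __builtin_constant_p.  */
    int c = __builtin_constant_p (&"hello"[0]);

    /* A format string passed as &array[offset] should still be checked
       by the -Wformat machinery.  */
    printf (&fmt[0], x);
    (void) c;
  }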

Index: gcc/builtins.c
===
RCS file: /cvs/gcc/gcc/gcc/builtins.c,v
retrieving revision 1.460
diff -c -3 -p -r1.460 builtins.c
*** gcc/builtins.c  23 Apr 2005 21:27:24 -  1.460
--- gcc/builtins.c  28 Apr 2005 13:24:21 -
*** fold_builtin_constant_p (tree arglist)
*** 6320,6326 
|| (TREE_CODE (arglist) == CONSTRUCTOR
  && TREE_CONSTANT (arglist))
|| (TREE_CODE (arglist) == ADDR_EXPR
! && TREE_CODE (TREE_OPERAND (arglist, 0)) == STRING_CST))
  return integer_one_node;
  
/* If this expression has side effects, show we don't know it to be a
--- 6320,6329 
|| (TREE_CODE (arglist) == CONSTRUCTOR
  && TREE_CONSTANT (arglist))
|| (TREE_CODE (arglist) == ADDR_EXPR
! && (TREE_CODE (TREE_OPERAND (arglist, 0)) == STRING_CST
! || (TREE_CODE (TREE_OPERAND (arglist, 0)) == ARRAY_REF
! && integer_zerop (TREE_OPERAND (TREE_OPERAND (arglist, 0), 1))
! && TREE_CODE (TREE_OPERAND (TREE_OPERAND (arglist, 0), 0)) == 
STRING_CST
  return integer_one_node;
  
/* If this expression has side effects, show we don't know it to be a
Index: gcc/c-format.c
===
RCS file: /cvs/gcc/gcc/gcc/c-format.c,v
retrieving revision 1.74
diff -c -3 -p -r1.74 c-format.c
*** gcc/c-format.c  26 Apr 2005 23:57:55 -  1.74
--- gcc/c-format.c  28 Apr 2005 13:24:22 -
*** check_format_arg (void *ctx, tree format
*** 1260,1265 
--- 1260,1269 
return;
  }
format_tree = TREE_OPERAND (format_tree, 0);
+   if (TREE_CODE (format_tree) == ARRAY_REF
+   && host_integerp (TREE_OPERAND (format_tree, 1), 0)
+   && (offset = tree_low_cst (TREE_OPERAND (format_tree, 1), 0)) >= 0)
+ format_tree = TREE_OPERAND (format_tree, 0);
if (TREE_CODE (format_tree) == VAR_DECL
&& TREE_CODE (TREE_TYPE (format_tree)) == ARRAY_TYPE
&& (array_init = decl_constant_value (format_tree)) != format_tree
Index: gcc/c-typeck.c
===
RCS file: /cvs/gcc/gcc/gcc/c-typeck.c,v
retrieving revision 1.438

Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Gunther Nikl
On Wed, Apr 27, 2005 at 08:05:39AM -0700, Matt Thomas wrote:
> Am I the only person who has attempted to do a native bootstrap on a
> system as slow as a M68k?

  I am using an Amiga with [EMAIL PROTECTED] myself. My last GCC bootstrap on
  that machine was done in 1999 for GCC 2.95.2 and it took several hours.
  I never tried to bootstrap newer GCC versions because I fear that it would
  take much too long. I only did a non-optimized non-bootstrap build of
  3.2.2 with 2.95.2 as build compiler on that machine. I have to cross-build
  GCC3 and newer to use them on this machine. Anything else wouldn't work.
  FWIW, I experienced a constant slowdown of the C compiler with every
  release since 2.7. While C is still usable, C++ with templates is not.

  Gunther


Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Joel Sherrill <[EMAIL PROTECTED]>
Peter Barada wrote:
>> Well, yes.  1 second/file is still slow!  I want "make" to complete
>> instantaneously!  Don't you?
>
> Actually I want it to complete before I even start, but I don't want
> to get too greedy. :)
>
> What's really sad is that for cross-compilation of the toolchain, we
> have to repeat a few steps (build gcc twice, build glibc twice)
> because glibc and gcc assume that a near-complete environment is
> available(such as gcc needing headers, and glibc needing -lgcc-eh), so
> even really fast machines(2.4Ghz P4) take an hour to do a cross-build
> from scratch.

That sounds comparable to the time required to build RTEMS toolsets.  I 
just looked at the timestamp on the build logs for a gcc 4.0.0 CVS build 
with newlib 1.13.0 and it is on the order of 60-90 minutes per target on 
a 2.4 Ghz P4 w/512 MB RAM.  This is just C and C++ and the variance is 
probably mostly due to the number of multilibs.

I checked build logs of gcc 3.3.5 with newlib 1.13 from December on the 
same machine and times were comparable so it isn't a recent slowdown.

When I built my native 4.0.0 on this machine, I did notice that 
compiling the Java libraries took long enough to notice and when I did a 
top, I noticed that gcj was running for 30+ seconds of CPU time with RSS 
on the order of 150 MB.  Another time I noticed that sh had taken 90 
seconds and that was an invocation of libtool with a very long command line.

Here is the configure command and output of time on this machine for a 
gcc 4.0.0 build:

../gcc-4.0.0/configure --prefix=/opt/gcc-4.0.0 
--enable-languages=c,c++,ada,java,objc

4271.94user 685.49system 1:31:18elapsed 90%CPU (0avgtext+0avgdata 
0maxresident)k0inputs+0outputs (33176183major+40970199minor)pagefaults 
0swaps

A 2.4 Ghz P4 isn't what I would consider an obsolete machine and it took 
90 minutes for "make" -- not a full bootstrap.

--joel


Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Gunther Nikl
On Wed, Apr 27, 2005 at 04:40:29PM -0700, Zack Weinberg wrote:
> Daniel Berlin <[EMAIL PROTECTED]> writes:
> 
> > On Wed, 2005-04-27 at 15:13 -0700, Stan Shebs wrote:
> >> Steven Bosscher wrote:
> >> >If someone had cared about them, it would have been noticed
> >> >earlier.  But since _nobody_ has complained before you, I guess we
> >> >can conclude that by far the majority if GCC users are quite happy
> >> >with the cost assesments that were made.
> >> >
> >> No, there have been plenty of complaints, but the GCC mailing lists
> >> have, shall we say, a "reputation", and a great many users will not
> >> post to them,
> >
> > I've never in my life heard this from another mailing list, and i
> > contribute to a *great* many open source projects.
> 
> I have seen such complaints.  Not about bootstrap times, no, that only
> affects people who compile the compiler; but the more general case of
> 'gcc takes forever to compile this program' does appear on a regular
> basis.

  Maybe there are fewer or no complaints about bootstrap times because people
  who are able to do a bootstrap know that complaining doesn't help. My
  primary target is m68k and I never attempted a bootstrap of GCC3 there
  because it would take much too much time. Now I am used to cross-building
  GCC (only C and C++) for m68k-amigaos. And since this target isn't in
  the official tree, it's even more painful to ask on the list.

> I do also think that the amount of ridicule heaped on people who come
> to the gcc lists is, in general, too high.  People should not be
> ridiculed for complaining that the compiler is slow, even if they are
> insisting on using vintage hardware.

  IMHO GCC3 is better than GCC2 and thus it's worthwhile to use it even
  on "vintage" hardware.

> It is slow, even on fast hardware; it's just easier to see that on
> slow hardware.

  I mainly use C and that's acceptable with GCC3, but e.g. C++ with templates
  is really slow on my m68k Amiga.

> Rather more importantly, people should not be ridiculed for submitting
> bug reports, even if they are wrong.  I suspect the bad public image
> that Stan refers to, has more to do with this than anything else.

  FWIW, I fully agree.

  Gunther


Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Peter Barada

>> What's really sad is that for cross-compilation of the toolchain, we
>> have to repeat a few steps (build gcc twice, build glibc twice)
>> because glibc and gcc assume that a near-complete environment is
>> available(such as gcc needing headers, and glibc needing -lgcc-eh), so
>> even really fast machines(2.4Ghz P4) take an hour to do a cross-build
>> from scratch. 
>
>That sounds comparable to the time required to build RTEMS toolsets.  I 
>just looked at the timestamp on the build logs for a gcc 4.0.0 CVS build 
>with newlib 1.13.0 and it is on the order of 60-90 minutes per target on 
>a 2.4 Ghz P4 w/512 MB RAM.  This is just C and C++ and the variance is 
>probably mostly due to the number of multilibs.

This is for an m68k-linux build (with the coldfire-linux config for glibc),
and it's only the C compiler, so adding C++ will obviously make it take
longer.

>A 2.4 Ghz P4 isn't what I would consider an obsolete machine and it took 
>90 minutes for "make" -- not a full bootstrap.

Even on a 3.0Ghz P4 with HT, 1Gb DDR and a hardware RAID with SATA
drives it takes about 30 minutes so there's a *lot* of work going on,
and I'd call that near cutting-edge.

-- 
Peter Barada
[EMAIL PROTECTED]


Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Andrew Haley
Andreas Schwab writes:
 > Andrew Haley <[EMAIL PROTECTED]> writes:
 > 
 > > If ld can't accept a list of files from a stream but is instead
 > > limited by command line length, then that *is* the fault of ld.
 > 
 > You can always use a linker script.

Yeah, good point.  libtool seems to go to extraordinary lengths to
avoid doing so, I presume because it isn't portable.

Andrew.


Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread David Edelsohn
> Andrew Haley writes:

Andrew> Yeah, good point.  libtool seems to go to extraordinary lengths to
Andrew> avoid doing so, I presume because it isn't portable.

Current libtool does allow a list of files, but the version used
by GCC is not recent.

David



Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Daniel Berlin
On Thu, 2005-04-28 at 11:58 -0400, Peter Barada wrote:

> This is for an m68k-linux build (with the coldfire-linux config for glibc),
> and it's only the C compiler, so adding C++ will obviously make it take
> longer.
> 
> >A 2.4 Ghz P4 isn't what I would consider an obsolete machine and it took 
> >90 minutes for "make" -- not a full bootstrap.
> 
> Even on a 3.0Ghz P4 with HT, 1Gb DDR and a hardware RAID with SATA
> drives it takes about 30 minutes so there's a *lot* of work going on,
> and I'd call that near cutting-edge.
> 


1. make bootstrap on a 2.4ghz p4 takes 90 minutes for me as of
yesterday.
2. Building XLC with (C,C++,Fortran) and a single backend takes roughly
the same time as building GCC.  And they aren't three staging, AFAIK.

--Dan



Re: New gcc 4.0.0 warnings seem spurious

2005-04-28 Thread Vincent Lefevre
On 2005-04-27 14:30:01 -0700, Mike Stump wrote:
> On Apr 27, 2005, at 5:15 AM, Neil Booth wrote:
> >Even better, you can turn of the warning with a cast, making your
> >intent explicit to the compiler, so there's every reason to have
> >it on by default.
> 
> And, if you don't like casts, you can (...)&255 or whatever.

This is quite dirty if the type comes from an internal structure
of some library, where the size of the integer may depend on the
architecture or may change in the future... Do you see a better
solution?
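
For what it's worth, here is a minimal sketch of the two idioms under
discussion; the type and function names are made up for illustration:

  /* Hypothetical library type whose width may vary by architecture.  */
  typedef unsigned long lib_value_t;

  void store_low_byte (lib_value_t v, unsigned char *out)
  {
    *out = (unsigned char) v;   /* silence the warning with an explicit cast  */
    *out = v & 255;             /* ... or with a mask, as suggested above     */
  }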

-- 
Vincent Lefèvre <[EMAIL PROTECTED]> - Web: 
100% accessible validated (X)HTML - Blog: 
Work: CR INRIA - computer arithmetic / SPACES project at LORIA


gcc 4.0.0 build status on AIX 5.2

2005-04-28 Thread Eli Ben-Shoshan
Output from running srcdir/config.guess:

powerpc-ibm-aix5.2.0.0

I have not installed this version of gcc yet so here is the output from xgcc in
objdir/gcc/xgcc -v:

Using built-in specs.
Target: powerpc-ibm-aix5.2.0.0
Configured with: ../gcc-4.0.0/configure --prefix=/usr/local --enable-threads=aix
--enable-languages=c,c++,java,objc --disable-nls
Thread model: aix
gcc version 4.0.0

This was built on an IBM p615 (power4) with AIX 5.2 ML05.


Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Daniel Berlin
On Wed, 2005-04-27 at 16:40 -0700, Zack Weinberg wrote:
> Daniel Berlin <[EMAIL PROTECTED]> writes:
> 
> > On Wed, 2005-04-27 at 15:13 -0700, Stan Shebs wrote:
> >> Steven Bosscher wrote:
> >> >If someone had cared about them, it would have been noticed
> >> >earlier.  But since _nobody_ has complained before you, I guess we
> >> >can conclude that by far the majority if GCC users are quite happy
> >> >with the cost assesments that were made.
> >> >
> >> No, there have been plenty of complaints, but the GCC mailing lists
> >> have, shall we say, a "reputation", and a great many users will not
> >> post to them,
> >
> > I've never in my life heard this from another mailing list, and i
> > contribute to a *great* many open source projects.
> 
> I have seen such complaints.  Not about bootstrap times, no, that only
> affects people who compile the compiler; but the more general case of
> 'gcc takes forever to compile this program' does appear on a regular
> basis.
> 
> I do also think that the amount of ridicule heaped on people who come
> to the gcc lists is, in general, too high.  People should not be
> ridiculed for complaining that the compiler is slow, even if they are
> insisting on using vintage hardware.  It is slow, even on fast
> hardware; it's just easier to see that on slow hardware.  

For people who use hardware most developers don't have access to, we
need preprocessed source and profiling data, or else there is no chance
of fixing it.  
It seems users of these platforms are not willing to provide this, even
when asked.  

Those people who have provided preprocessed source and profiling data
(8361, omniorb, etc) have had their compilation times sped up.
So whenever I hear that "we don't care about compilation time", I wonder
if the user has even put code in Bugzilla that demonstrates the problem.
Most often, the answer is no, AFAICT.

--Dan





gcc@gcc.gnu.org

2005-04-28 Thread Joseph S. Myers
On Thu, 28 Apr 2005, Richard Guenther wrote:

> The patch was bootstrapped and tested on i686-pc-linux-gnu for
> the C language with the only remaining regression being c99-init-4.c
> (I didn't manage to find the place to fix).

You don't say how it regresses.  What diagnostic is it generating, what 
code is generating it (for diagnostics GCC can generate in more than one 
place), what are the relevant trees or other variables that caused the 
diagnostic to be reached and what were they without the patch to cause it 
not to be reached?

-- 
Joseph S. Myers   http://www.srcf.ucam.org/~jsm28/gcc/
[EMAIL PROTECTED] (personal mail)
[EMAIL PROTECTED] (CodeSourcery mail)
[EMAIL PROTECTED] (Bugzilla assignments and CCs)


Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Ian Lance Taylor
Andrew Haley <[EMAIL PROTECTED]> writes:

> If ld can't accept a list of files from a stream but is instead
> limited by command line length, then that *is* the fault of ld.

GNU ld won't currently read a list of files from stdin, but it will
read a list of files from a file.

For example, look at /usr/lib/libc.so on a typical GNU/Linux system.

Ian


Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Andrew Haley
Ian Lance Taylor writes:
 > Andrew Haley <[EMAIL PROTECTED]> writes:
 > 
 > > If ld can't accept a list of files from a stream but is instead
 > > limited by command line length, then that *is* the fault of ld.
 > 
 > GNU ld won't currently read a list of files from stdin, but it will
 > read a list of files from a file.
 > 
 > For example, look at /usr/lib/libc.so on a typical GNU/Linux system.

Yes thanks, I've had that pointed out to me.  Apparently the real
issue here is that we have an older version of libtool in the gcc
tree.

Andrew.


Re: different address spaces

2005-04-28 Thread Paul Schlie
> Martin Koegler wrote:
> I have redone the implementation of the eeprom attribute in my prototype.
> It is now a cleaner solution, but requires larger changes in the core,
> but the changes in the core should not affect any backend/frontend, if
> it does not uses them (except a missing case in tree_copy_mem_area, which
> will cause an assertion to fail).
> ...
> +void
> +tree_copy_mem_area (tree to, tree from)
> 

Alternatively might it make sense to utilize the analogy defined in rtl.h?

  /* Copy the attributes that apply to memory locations from RHS to LHS.  */
  #define MEM_COPY_ATTRIBUTES(LHS, RHS)\
(MEM_VOLATILE_P (LHS) = MEM_VOLATILE_P (RHS),\
 MEM_IN_STRUCT_P (LHS) = MEM_IN_STRUCT_P (RHS),\
 MEM_SCALAR_P (LHS) = MEM_SCALAR_P (RHS),\
 MEM_NOTRAP_P (LHS) = MEM_NOTRAP_P (RHS),\
 MEM_READONLY_P (LHS) = MEM_READONLY_P (RHS),\
 MEM_KEEP_ALIAS_SET_P (LHS) = MEM_KEEP_ALIAS_SET_P (RHS),\
 MEM_ATTRS (LHS) = MEM_ATTRS (RHS))

As unfortunately GCC already inconsistently maintains and copies attributes
to memory references, it seems that introducing yet another function to do
so will only likely introduce more inconsistency.

Therefore wonder if it may be best to simply define MEM_ATTRS as you have
done, and then consistently utilize MEM_COPY_ATTRIBUTES to properly copy
attributes associated with memory references when new ones as may need to
be constructed (as all effective address optimizations should be doing, as
otherwise the attributes associated with the original reference will be
lost). I.e.:

Instead of: (as occasionally incorrectly done)
 rtx addr1 = copy_to_mode_reg (Pmode, XEXP (operands[1], 0));// some EA
 emit_move_insn (tmp_reg_rtx, gen_rtx_MEM (QImode, addr1)); // lose attribs
 emit_move_insn (addr1, gen_rtx_PLUS (Pmode, addr1, const1_rtx)); // new EA

Something like this is necessary:

 rtx addr1 = copy_to_mode_reg (Pmode, XEXP (operands[1], 0));// some EA
 rtx mem_1 = gen_rtx_MEM (QImode, addr1); // gen mem
 MEM_COPY_ATTRIBUTES (mem_1, operands[1]);// copy attributes
 emit_move_insn (tmp_reg_rtx, mem_1); // read value
 emit_move_insn (addr1, gen_rtx_PLUS (Pmode, addr1, const1_rtx)); // new EA






Should there be a GCC 4.0.1 release quickly?

2005-04-28 Thread Steven Bosscher
Hi,

PR21173 and its duplicates are a class of wrong-code and ICE bugs
in GCC 4.0.0.

In Bugzilla, PR21173 now has 3 duplicates, and there was another
example on this mailing list. That makes 5 users who have already
run into this rather serious bug. That is a lot, for a compiler
that has only just been released...

The bug causes a few serious problems on all targets (tree optimizer
bugs do that :-/):
- wrong code in the linux kernel 2.6.12-rc
- ICE compiling MySQL 4.1
- ICE compiling kdelibs 3.4

Apparently it is not very difficult to trigger this bug.

The bug has already been fixed on mainline and on the GCC 4.0 branch.
Should we release a new GCC 4.0 RSN and recommend that people do not
use GCC 4.0.0?  Or should we maybe add a notice somewhere about this
bug?

Thoughts?

Gr.
Steven





Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Joe Buck
On Wed, Apr 27, 2005 at 07:40:37PM -0600, Tom Tromey wrote:
> > "Paul" == Paul Koning <[EMAIL PROTECTED]> writes:
> 
> Paul> Maybe.  Then again, maybe there are real problems here.  The ranlib
> Paul> one was already mentioned.  And I wonder if libjava really needs to
> Paul> bring the host to its knees, as it does.
> 
> Killing machines is only a secondary goal, if that's what you mean ;-)
> 
> The bad news is that libjava is only going to grow.
> 
> On the other hand, while I haven't measured it myself, I've heard that
> a lot of the time in the libjava build is spent in libtool (versus
> plain old ld).  Perhaps that can be alleviated somehow.

Has anyone looked at oprofile data for the libjava build?



Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Joe Buck
On Thu, Apr 28, 2005 at 12:09:35PM -0400, David Edelsohn wrote:
> > Andrew Haley writes:
> 
> Andrew> Yeah, good point.  libtool seems to go to extraordinary lengths to
> Andrew> avoid doing so, I presume because it isn't portable.
> 
>   Current libtool does allow a list of files, but the version used
> by GCC is not recent.

Is there a reason why we aren't using a recent libtool?



RE: FW: GCC Cross Compiler for cygwin

2005-04-28 Thread Amir Fuhrmann

Definitely helps!!  But now I've run into the next problem; again, any help would be 
appreciated.

Amir

Configuring in powerpc-eabi/libiberty
configure: creating cache ./config.cache
checking whether to enable maintainer-specific portions of Makefiles... no
checking for makeinfo... makeinfo
checking for perl... perl
checking build system type... i686-pc-linux-gnu
checking host system type... powerpc-unknown-eabi
checking for powerpc-eabi-ar... ppc-ar
checking for powerpc-eabi-ranlib... ppc-ranlib
checking for powerpc-eabi-gcc...  /home/afuhrmann/gcc/BUILD/gcc/gcc/xgcc 
-B/home/afuhrmann/gcc/BUILD/gcc/gcc/ -B/usr/local/powerpc-eabi/bin/ 
-B/usr/local/powerpc-eabi/lib/ -isystem /usr/local/powerpc-eabi/include 
-isystem /usr/local/powerpc-eabi/sys-include
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether  /home/afuhrmann/gcc/BUILD/gcc/gcc/xgcc 
-B/home/afuhrmann/gcc/BUILD/gcc/gcc/ -B/usr/local/powerpc-eabi/bin/ 
-B/usr/local/powerpc-eabi/lib/ -isystem /usr/local/powerpc-eabi/include 
-isystem /usr/local/powerpc-eabi/sys-include accepts -g... yes
checking for  /home/afuhrmann/gcc/BUILD/gcc/gcc/xgcc 
-B/home/afuhrmann/gcc/BUILD/gcc/gcc/ -B/usr/local/powerpc-eabi/bin/ 
-B/usr/local/powerpc-eabi/lib/ -isystem /usr/local/powerpc-eabi/include 
-isystem /usr/local/powerpc-eabi/sys-include option to accept ANSI C... none 
needed
checking how to run the C preprocessor...  
/home/afuhrmann/gcc/BUILD/gcc/gcc/xgcc -B/home/afuhrmann/gcc/BUILD/gcc/gcc/ 
-B/usr/local/powerpc-eabi/bin/ -B/usr/local/powerpc-eabi/lib/ -isystem 
/usr/local/powerpc-eabi/include -isystem /usr/local/powerpc-eabi/sys-include -E
checking whether  /home/afuhrmann/gcc/BUILD/gcc/gcc/xgcc 
-B/home/afuhrmann/gcc/BUILD/gcc/gcc/ -B/usr/local/powerpc-eabi/bin/ 
-B/usr/local/powerpc-eabi/lib/ -isystem /usr/local/powerpc-eabi/include 
-isystem /usr/local/powerpc-eabi/sys-include and cc understand -c and -o 
together... yes
checking for an ANSI C-conforming const... yes
checking for inline... inline
checking whether byte ordering is bigendian... cross-compiling...
unknown
checking to probe for byte ordering... /usr/local/powerpc-eabi/bin/ld: warning: 
cannot find entry symbol _start; defaulting to 01800074
/home/afuhrmann/gcc/BUILD/gcc/gcc/libgcc.a(eabi.o)(.text+0xc4): In function 
`__eabi':
/home/afuhrmann/gcc/BUILD/gcc/gcc/eabi.S:232: undefined reference to `__init'
/home/afuhrmann/gcc/BUILD/gcc/gcc/libgcc.a(eabi.o)(.got2+0x8):/home/afuhrmann/gcc/BUILD/gcc/gcc/eabi.S:146:
 undefined reference to `__SDATA_START__'
/home/afuhrmann/gcc/BUILD/gcc/gcc/libgcc.a(eabi.o)(.got2+0xc):/home/afuhrmann/gcc/BUILD/gcc/gcc/eabi.S:148:
 undefined reference to `__SBSS_END__'
/home/afuhrmann/gcc/BUILD/gcc/gcc/libgcc.a(eabi.o)(.got2+0x14):/home/afuhrmann/gcc/BUILD/gcc/gcc/eabi.S:150:
 undefined reference to `__SDATA2_START__'
/home/afuhrmann/gcc/BUILD/gcc/gcc/libgcc.a(eabi.o)(.got2+0x18):/home/afuhrmann/gcc/BUILD/gcc/gcc/eabi.S:151:
 undefined reference to `__SBSS2_END__'
/home/afuhrmann/gcc/BUILD/gcc/gcc/libgcc.a(eabi.o)(.got2+0x1c):/home/afuhrmann/gcc/BUILD/gcc/gcc/eabi.S:152:
 undefined reference to `__GOT_START__'
/home/afuhrmann/gcc/BUILD/gcc/gcc/libgcc.a(eabi.o)(.got2+0x28):/home/afuhrmann/gcc/BUILD/gcc/gcc/eabi.S:155:
 undefined reference to `__GOT_END__'
/home/afuhrmann/gcc/BUILD/gcc/gcc/libgcc.a(eabi.o)(.got2+0x2c):/home/afuhrmann/gcc/BUILD/gcc/gcc/eabi.S:156:
 undefined reference to `__GOT2_START__'
/home/afuhrmann/gcc/BUILD/gcc/gcc/libgcc.a(eabi.o)(.got2+0x30):/home/afuhrmann/gcc/BUILD/gcc/gcc/eabi.S:157:
 undefined reference to `__GOT2_END__'
/home/afuhrmann/gcc/BUILD/gcc/gcc/libgcc.a(eabi.o)(.got2+0x34):/home/afuhrmann/gcc/BUILD/gcc/gcc/eabi.S:158:
 undefined reference to `__FIXUP_START__'
/home/afuhrmann/gcc/BUILD/gcc/gcc/libgcc.a(eabi.o)(.got2+0x38):/home/afuhrmann/gcc/BUILD/gcc/gcc/eabi.S:159:
 undefined reference to `__FIXUP_END__'
/home/afuhrmann/gcc/BUILD/gcc/gcc/libgcc.a(eabi.o)(.got2+0x3c):/home/afuhrmann/gcc/BUILD/gcc/gcc/eabi.S:163:
 undefined reference to `__CTOR_LIST__'
/home/afuhrmann/gcc/BUILD/gcc/gcc/libgcc.a(eabi.o)(.got2+0x40):/home/afuhrmann/gcc/BUILD/gcc/gcc/eabi.S:164:
 undefined reference to `__CTOR_END__'
/home/afuhrmann/gcc/BUILD/gcc/gcc/libgcc.a(eabi.o)(.got2+0x44):/home/afuhrmann/gcc/BUILD/gcc/gcc/eabi.S:165:
 undefined reference to `__DTOR_LIST__'
/home/afuhrmann/gcc/BUILD/gcc/gcc/libgcc.a(eabi.o)(.got2+0x48):/home/afuhrmann/gcc/BUILD/gcc/gcc/eabi.S:166:
 undefined reference to `__DTOR_END__'
/home/afuhrmann/gcc/BUILD/gcc/gcc/libgcc.a(eabi.o)(.got2+0x4c):/home/afuhrmann/gcc/BUILD/gcc/gcc/eabi.S:167:
 undefined reference to `__EXCEPT_START__'
/home/afuhrmann/gcc/BUILD/gcc/gcc/libgcc.a(eabi.o)(.got2+0x50):/home/afuhrmann/gcc/BUILD/gcc/gcc/eabi.S:171:
 undefined reference to `__EXCEPT_END__'
collect2: ld returned 1 exit status
unknown
configure: error: unknown endianess - sorry
/home/afuhrmann/gcc/gcc-3.4.3/libiberty/configure: line 3289: exit: please: 
numeric argument required
/home/afuhr

Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread David Edelsohn
> Andrew Haley writes:

Andrew> Yes thanks, I've had that pointed out to me.  Apparently the real
Andrew> issue here is that we have an older version of libtool in the gcc
Andrew> tree.

Any feature in libtool CVS is fair game to be backported to
libtool in GCC.  I am planning to backport a subset of the feature for
AIX.  I can backport it all, including the GNU ld support, if you want.

David



Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread David Edelsohn
> Joe Buck writes:

Joe> Is there a reason why we aren't using a recent libtool?

Porting and testing effort to upgrade. 

David



Re: Should there be a GCC 4.0.1 release quickly?

2005-04-28 Thread Joe Buck
On Thu, Apr 28, 2005 at 06:45:10PM +0200, Steven Bosscher wrote:
> Hi,
> 
> PR21173 and its duplicates are a class of wrong-code and ICE bugs
> in GCC 4.0.0.
> 
> In Bugzilla, PR21173 now has 3 duplicates, and there was another
> example on this mailing list. That makes 5 users who have already
> run into this rather serious bug. That is a lot, for a compiler
> that has only just been released...

I'm not surprised to see a problem in anything with two 0's.  It's
the first release ever with tree-ssa.  It was inevitable that users
would find problems, and I certainly wouldn't use 4.0.0 for production
(distros that use it will of course add patches, including the one
for PR 21173).

> The bug has already been fixed on mainline and on the GCC 4.0 branch.
> Should we release a new GCC 4.0 RSN and recommend that people do not
> use GCC 4.0.0?  Or should we maybe add a notice somewhere about this
> bug?

We always knew we needed 4.0.1.  The problem, though, with a very quick
release to fix this one is that we could discover a second nasty one.

If this bug is as bad as you say, it's probably worth putting out 4.0.1
faster than Mark originally planned.  But tomorrow is too soon.

Also, we can point people to a patch for this PR so that they can get
further in their testing.



Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Andrew Haley
Joe Buck writes:
 > On Wed, Apr 27, 2005 at 07:40:37PM -0600, Tom Tromey wrote:
 > > > "Paul" == Paul Koning <[EMAIL PROTECTED]> writes:
 > > 
 > > Paul> Maybe.  Then again, maybe there are real problems here.  The ranlib
 > > Paul> one was already mentioned.  And I wonder if libjava really needs to
 > > Paul> bring the host to its knees, as it does.
 > > 
 > > Killing machines is only a secondary goal, if that's what you mean ;-)
 > > 
 > > The bad news is that libjava is only going to grow.
 > > 
 > > On the other hand, while I haven't measured it myself, I've heard that
 > > a lot of the time in the libjava build is spent in libtool (versus
 > > plain old ld).  Perhaps that can be alleviated somehow.
 > 
 > Has anyone looked at oprofile data for the libjava build?

Again:

CPU: CPU with timer interrupt, speed 0 MHz (estimated)
Profiling through timer interrupt
          TIMER:0|
  samples|      %|
------------------
  1770547 63.0596 no-vmlinux
   415708 14.8058 libc-2.3.4.so
   259889  9.2562 ltsh
   257355  9.1659 jc1
    22111  0.7875 cc1plus
    20260  0.7216 as
    19289  0.6870 ld-2.3.4.so
    10502  0.3740 make
     5921  0.2109 sed
     5163  0.1839 libbfd-2.15.92.0.2.so
     2855  0.1017 gcj
     2724  0.0970 cc1
     2218  0.0790 libz.so.1.2.1.2
     2154  0.0767 grep
     2019  0.0719 xterm
     1864  0.0664 ld

Andrew.


Successful gcc4.0.0 build (MinGW i386 on WinXP)

2005-04-28 Thread Christian Ehrlicher

I only build with --enable-languages=c,c++ - I can try a full build if
you want. Also I couldn't run the testsuite because of missing testtools.

Christian

----------
make bootstrap successful build info:

$ gcc-4.0.0/config.guess
i686-pc-mingw32

$ gcc -v
Using built-in specs.
Target: mingw32
Configured with: ../gcc-4.0.0/configure --with-gcc --with-gnu-ld
--with-gnu-as --host=mingw32 --target=mingw32 --prefix=/mingw
--enable-threads --disable-nls --enable-languages=c,c++
--disable-win32-registry --disable-shared --enable-sjlj-exceptions
Thread model: win32
gcc version 4.0.0

$ uname
MINGW32_NT-5.1


Re: different address spaces

2005-04-28 Thread Martin Koegler
On Thu, Apr 28, 2005 at 12:37:48PM -0400, Paul Schlie wrote:
> > Martin Koegler wrote:
> > I have redone the implementation of the eeprom attribute in my prototype.
> > It is now a cleaner solution, but requires larger changes in the core,
> > but the changes in the core should not affect any backend/frontend, if
> > it does not uses them (except a missing case in tree_copy_mem_area, which
> > will cause an assertion to fail).
> > ...
> > +void
> > +tree_copy_mem_area (tree to, tree from)
> > 
> 
> Alternatively might it make sense to utilize the analogy defined in rtl.h?
> 
>   /* Copy the attributes that apply to memory locations from RHS to LHS.  */
>   #define MEM_COPY_ATTRIBUTES(LHS, RHS)\
> (MEM_VOLATILE_P (LHS) = MEM_VOLATILE_P (RHS),\
>  MEM_IN_STRUCT_P (LHS) = MEM_IN_STRUCT_P (RHS),\
>  MEM_SCALAR_P (LHS) = MEM_SCALAR_P (RHS),\
>  MEM_NOTRAP_P (LHS) = MEM_NOTRAP_P (RHS),\
>  MEM_READONLY_P (LHS) = MEM_READONLY_P (RHS),\
>  MEM_KEEP_ALIAS_SET_P (LHS) = MEM_KEEP_ALIAS_SET_P (RHS),\
>  MEM_ATTRS (LHS) = MEM_ATTRS (RHS))
> 
> As unfortunately GCC already inconsistently maintains and copies attributes
> to memory references, it seems that introducing yet another function to do
> so will only likely introduce more inconsistency.
> 
> Therefore wonder if it may be best to simply define MEM_ATTRS as you have
> done, and then consistently utilize MEM_COPY_ATTRIBUTES to properly copy
> attributes associated with memory references when new ones as may need to
> be constructed (as all effective address optimizations should be doing, as
> otherwise the attributes associated with the original reference will be
> lost). I.e.:
> 
> Instead of: (as occasionally incorrectly done)
>  rtx addr1 = copy_to_mode_reg (Pmode, XEXP (operands[1], 0));// some EA
>  emit_move_insn (tmp_reg_rtx, gen_rtx_MEM (QImode, addr1)); // lose attribs
>  emit_move_insn (addr1, gen_rtx_PLUS (Pmode, addr1, const1_rtx)); // new EA
> 
> Something like this is necessary:
> 
>  rtx addr1 = copy_to_mode_reg (Pmode, XEXP (operands[1], 0));// some EA
>  rtx mem_1 = gen_rtx_MEM (QImode, addr1); // gen mem
>  MEM_COPY_ATTRIBUTES (mem_1, operands[1]);// copy attributes
>  emit_move_insn (tmp_reg_rtx, mem_1); // read value
>  emit_move_insn (addr1, gen_rtx_PLUS (Pmode, addr1, const1_rtx)); // new EA
>
If you want to use the memory attributes after all reload and optimization
passes, GCC will need to be extended with the missing settings of the memory
attributes. This is not my goal (I try to provide the correct
MEM_REF_FLAGS for the RTL expand pass, with all necessary earlier steps
to get the address space information).

Correcting all this will be a lot of work. We must not forget the machine
description, which can also create MEMs.

I have updated my patch.

For the MEM_AREA on trees, I have eliminated many explicit set operations
of this attribute (for build3_COMPONENT_REF and build4_ARRAY_REF completely).

For certain tree codes, build{1,2,3,4} automatically generate the correct
value of MEM_AREA from their parameters. Only for INDIRECT_REF is this
not possible.

If we want to get the correct attributes every time, we need to add a source
for the memory attributes to gen_rtx_MEM. For an automatic generation of the
memory attributes, too little information is available in the RTL.

I have added compatibility checking for memory areas, as well as correct
handling of them for ?:.

The new version is at http://www.auto.tuwien.ac.at/~mkoegler/gcc/gcc1.patch

Regards, Martin Kögler


Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Marcin Dalecki
On 2005-04-28, at 16:26, Lars Segerlund wrote:

> I have never done any 'memory profiling' but I think it might be time to
> give it a shot; does anybody have any hints on how to go about something
> like this?

The main performance problem for GCC, as I see it, is structural. GCC is
emulating the concept of highly polymorphic data structures in plain C. Well,
actually it's not even plain C any longer... GTY().

Using C++ with simple inheritance could help *a lot* here. (Duck...)
There is too much pointer indirection in the data structures, too.
Look throughout the code and wonder how very few iterations run over plain,
simple, fast C arrays that are sparing on the register set and allow cache
prefetching. GCC is instead using linked lists to test the "efficiency" of
the TLB and other mechanisms of the CPU.
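
To illustrate the point (a hand-written sketch of mine, not code taken from
GCC), compare a pointer-chasing list walk with a flat array walk:

  /* Each list step depends on the previous load, defeating prefetch;
     the array walk is a linear scan the hardware handles well.  */
  struct node { struct node *next; int val; };

  int sum_list (const struct node *n)
  {
    int s = 0;
    for (; n; n = n->next)
      s += n->val;
    return s;
  }

  int sum_array (const int *a, int len)
  {
    int s = 0;
    int i;
    for (i = 0; i < len; i++)
      s += a[i];
    return s;
  }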



Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Devang Patel
On Apr 28, 2005, at 9:10 AM, Daniel Berlin wrote:
> 1. make bootstrap on a 2.4ghz p4 takes 90 minutes for me as of
> yesterday.
> 2. Building XLC with (C,C++,Fortran) and a single backend takes roughly
> the same time as building GCC.  And they aren't three staging, AFAIK.

"..ain't the same ballpark, it ain't the same league,
it ain't even the same..." :-)
-
Devang


Another ms-bitfield question...

2005-04-28 Thread Joern RENNECKE
t002.x has this code:
typedef unsigned short int Tal16ushort __attribute__ ((aligned (16)));
struct S460
{
 unsigned long int __attribute__ ((packed)) a;
 Tal16ushort __attribute__ ((aligned)) b:((((13) - 1) & 15) + 1);
 unsigned short int c;
};
BIGGEST_ALIGNMENT is 64 for sh64-elf.
Does the ((aligned)) attribute apply to b, to the base type of b, or both
to the base type of b and the base type of the current run of bits?
Currently, I see the record is 128-bit aligned, but the run of bits that b
is allocated from is only 64 bit aligned; this doesn't make any sense.


Re: Should there be a GCC 4.0.1 release quickly?

2005-04-28 Thread Steven Bosscher
On Apr 28, 2005 06:55 PM, Joe Buck <[EMAIL PROTECTED]> wrote:

> On Thu, Apr 28, 2005 at 06:45:10PM +0200, Steven Bosscher wrote:
> > Hi,
> > 
> > PR21173 and its duplicates are a class of wrong-code and ICE bugs
> > in GCC 4.0.0.
> > 
> > In Bugzilla, PR21173 now has 3 duplicates, and there was another
> > example on this mailing list. That makes 5 users who have already
> > run into this rather serious bug. That is a lot, for a compiler
> > that has only just been released...
> 
> I'm not surprised to see a problem in anything with two 0's.  It's
> the first release ever with tree-ssa.  It was inevitable that users
> would find problems, 

Yeah, but in this case the patch that introduced the bug was one of
the last to go in before the release (it was the fix for PRs 20490
and 20929, the patch for that went in on April 17).  So it was more
an unfortunate fix than a typical .0 bug.

I hope users don't find too many problems ;-)

Gr.
Steven




Re: Ada test suite

2005-04-28 Thread Janis Johnson
On Thu, Apr 28, 2005 at 01:05:29PM +0200, Laurent GUERBY wrote:
> On Thu, 2005-04-28 at 09:45 +0200, Florian Weimer wrote:
> > Some time ago, someone posted a patch which provided beginnings of a
> > general-purpose Ada test suite infrastructure (in addition to the
> > current ACATS tests, which cannot be used for regression tests).  The
> > patch was not integrated, and I can't find it at the moment. 8-(
> > 
> > Does anybody know which patch I'm talking about?
> 
> http://gcc.gnu.org/bugzilla/show_bug.cgi?id=18692
> http://gcc.gnu.org/ml/gcc-patches/2004-11/msg01862.html
> 
> Plus this:
> 
> [Ada] Run ACATS tests through an expect script
> http://gcc.gnu.org/ml/gcc-patches/2004-11/msg02484.html
> http://gcc.gnu.org/ml/gcc-patches/2004-12/msg00166.html
> 
> If Arnaud doesn't feel knowledgeable enough to review/approve
> dejagnu code, why don't we name Jim maintainer for this? 
> 
> That would at least avoid having an infrastructure patch stuck
> for five months without review :).
> 
> Laurent
> 
> PS: I know nothing about dejagnu either.

I'll look at the DejaGnu aspects of the patch and comment on them, but
someone involved with Ada should maintain it.

Janis


Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread David Carlton
On Wed, 27 Apr 2005 16:52:25 -0400, Paul Koning <[EMAIL PROTECTED]> said:

> However, I can always tell when a GCC build has hit the libjava build
> -- that's when the *whole system* suddenly slows to a crawl.  Maybe
> it comes from doing some processing on 5000 foo.o files all at
> once... :-(

It's also too bad the final steps of the libjava build aren't more
parallelizable, though I can't say I have any productive suggestions
to add there.  I just tried a C/C++ bootstrap and a C/C++/Java
bootstrap on a four-processor machine; the latter took 2.6 times as
long as the former, and for most of the non-compilation part of the
libjava build, three of the processors were sitting around being
bored.  (Not that I really know exactly what was causing the delays;
maybe the disk and memory were being stressed enough by the single
processor that was active.)

This also showed up a little in the C build: while make found other
stuff to do while genattrtab was running, shortly after it started
compiling insn-attrtab.c, make ran out of other files to compile.

Not that I'm really complaining: you can get quite a lot of mileage
out of multiple CPUs as it is, more than enough (in my opinion) to
justify the purchase of some nice build servers by software shops that do a
lot of GCC work.  (I won't post the actual bootstrap times out of fear
of being lynched.)  This might show up more as people start moving
towards dual-core and/or multiple CPU systems even on the low end.

David Carlton
[EMAIL PROTECTED]


Re: Should there be a GCC 4.0.1 release quickly?

2005-04-28 Thread Daniel Berlin
On Thu, 2005-04-28 at 19:31 +0200, Steven Bosscher wrote:
> On Apr 28, 2005 06:55 PM, Joe Buck <[EMAIL PROTECTED]> wrote:
> 
> > On Thu, Apr 28, 2005 at 06:45:10PM +0200, Steven Bosscher wrote:
> > > Hi,
> > > 
> > > PR21173 and its duplicates are a class of wrong-code and ICE bugs
> > > in GCC 4.0.0.
> > > 
> > > In Bugzilla, PR21173 now has 3 duplicates, and there was another
> > > example on this mailing list. That makes 5 users who have already
> > > run into this rather serious bug. That is a lot, for a compiler
> > > that has only just been released...
> > 
> > I'm not surprised to see a problem in anything with two 0's.  It's
> > the first release ever with tree-ssa.  It was inevitable that users
> > would find problems, 
> 
> Yeah, but in this case the patch that introduced the bug was one of
> the last to go in before the release (it was the fix for PRs 20490
> and 20929, the patch for that went in on April 17).  So it was more
> an unfortunate fix than a typical .0 bug.
Yes, it went in between RC2 and final.

Of course, nobody realized there was a deeper problem at the time
(force_gimple_operand modifying trees) including me.

> 
> I hope users don't find too many problems ;-)
> 
> Gr.
> Steven
> 
> 



Re: Should there be a GCC 4.0.1 release quickly?

2005-04-28 Thread Joe Buck
On Thu, Apr 28, 2005 at 07:31:32PM +0200, Steven Bosscher wrote:
> Yeah, but in this case the patch that introduced the bug was one of
> the last to go in before the release (it was the fix for PRs 20490
> and 20929, the patch for that went in on April 17).  So it was more
> an unfortunate fix than a typical .0 bug.

Again, that's not surprising.  There's always a risk that a patch can
create a new bug, and the regression test suite only proves we haven't
re-broken old bugs.  Late fixes are particularly risky, because of the
limited testing they get.


Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Daniel Berlin
On Thu, 2005-04-28 at 10:23 -0700, Devang Patel wrote:
> On Apr 28, 2005, at 9:10 AM, Daniel Berlin wrote:
> 
> > 1. make bootstrap on a 2.4ghz p4 takes 90 minutes for me as of
> > yesterday.
> > 2. Building XLC with (C,C++,Fortran) and a single backend takes  
> > roughly
> > the same time as building GCC.  And they aren't three staging, AFAIK.
> 
> "..ain't the same ballpark, it ain't the same league,
> it ain't even the same..." :-)

Bullshit.
They are both production quality compilers.

People complain gcc bootstrap is too long. XLC at O2 (which is what they
compile at, last i looked) isn't running the IPA middle end, etc.

--Dan



Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Peter Barada

>Not that I'm really complaining: you can get quite a lot of mileage
>out of multiple CPUs as it is, more than enough (in my opinion) to
>justify purchasing some nice build servers by software shops that do a
>lot of GCC work.  (I won't post the actual bootstrap times out of fear
>of being lynched.)  This might show up more as people start moving
>towards dual-core and/or multiple CPU systems even on the low end.



That's great if the software can be cross-built.  As it is,
cross-building a toolchain requires a lot of extra work, and if it
weren't for Dan Kegel's commitment, I'd dare say it would be near impossible.
I've watched the sometimes near-indifference to the problems we have
trying to put together toolchains for non-hosted environments.

Even when I have a cross-toolchain, it's still a *long* uphill battle,
since there are too many OSS packages out there that can't
cross-configure/compile (openssh and perl, as examples off the top of my
head) without a *lot* of work.

It's just that it takes a lot of time and work to cross-build a non-x86
Linux environment to verify any changes in the toolchain.

And comments like "get a faster machine" are a non-starter.



-- 
Peter Barada
[EMAIL PROTECTED]


Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Matt Thomas

Someone complained I was unfair in my gcc bootstrap times since
some builds included libjava/gfortran and some did not.

So in the past day, I've done bootstrap with just c,c++,objc on
both 3.4 and gcc4.1.  I've put the results in a web page at
http://3am-software.com/gcc-speed.html.  The initial bootstrap
compiler was gcc3.3 and they are all running off the same base
of NetBSD 3.99.3.

While taking out fortran and java reduced the disparity, there
is still a large increase in bootstrap times from 3.4 to 4.1.
-- 
Matt Thomas email: [EMAIL PROTECTED]
3am Software Foundry  www: http://3am-software.com/bio/matt/
Cupertino, CA  disclaimer: I avow all knowledge of this message.


Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Devang Patel
On Apr 28, 2005, at 10:54 AM, Daniel Berlin wrote:

> On Thu, 2005-04-28 at 10:23 -0700, Devang Patel wrote:
>> On Apr 28, 2005, at 9:10 AM, Daniel Berlin wrote:
>>> 1. make bootstrap on a 2.4ghz p4 takes 90 minutes for me as of
>>> yesterday.
>>> 2. Building XLC with (C,C++,Fortran) and a single backend takes roughly
>>> the same time as building GCC.  And they aren't three staging, AFAIK.
>>
>> "..ain't the same ballpark, it ain't the same league,
>> it ain't even the same..." :-)
>
> Bullshit.
> They are both production quality compilers.
>
> People complain gcc bootstrap is too long. XLC at O2 (which is what they
> compile at, last i looked) isn't running the IPA middle end, etc.

Again,

"..ain't the same ballpark, it ain't the same league,
it ain't even the same *** sport..."

Software A provides X1 functionality.
Software B provides X2 functionality.

BTW, you know more about XLC's SPEC numbers and GCC's SPEC numbers
than I do. So in that respect X1 is not the same as X2, but let's
ignore that for the moment and say X1 is almost the same as X2.

Software A and B are developed independently.
They do not share source code.

Software G compiles A in T1 time.
Software X compiles B in T2 time.

T1 is almost the same as T2, so G is as fast as X.

That's what you're trying to say?
-
Devang


Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Joe Buck
On Thu, Apr 28, 2005 at 11:03:51AM -0700, Matt Thomas wrote:
> 
> Someone complained I was unfair in my gcc bootstrap times since
> some builds included libjava/gfortran and some did not.
> 
> So in the past day, I've done bootstrap with just c,c++,objc on
> both 3.4 and gcc4.1.  I've put the results in a web page at
> http://3am-software.com/gcc-speed.html.  The initial bootstrap
> compiler was gcc3.3 and they are all running off the same base
> of NetBSD 3.99.3.
> 
> While taking out fortran and java reduced the disparity, there
> is still a large increase in bootstrap times from 3.4 to 4.1.

There's some new code in libstdc++ now (the TR1 stuff) that (last time I
looked) takes a long time to build, and wasn't in 3.4.  Could that be a
factor?

A comparison with just --enable-languages=c would eliminate any
issues with libraries.



Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Ian Lance Taylor
Matt Thomas <[EMAIL PROTECTED]> writes:

> When I see the native stage2 m68k compiler spend 30+ minutes compute bound
> with no paging activity compiling a single source file, I believe
> that is an accurate term.  Compiling stage3 on a 50MHz 68060 took 18 hours.
> (That 30 minutes was for fold-const.c if you care to know).

The fold-const.c compilation seems like a good specific candidate for
a bug report at
http://gcc.gnu.org/bugzilla/

If you can include the preprocessed file and profiler output from cc1
running on your system, there is a chance that this can be addressed.

In general the gcc developers do a reasonable job of keeping the
compiler from being too slow, but they do not use m68k systems, and
they can only work on problems which people bring to their attention.

And, yes, we clearly need to do something about the libjava build.

Thanks.

Ian


Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Daniel Berlin
On Thu, 2005-04-28 at 11:08 -0700, Devang Patel wrote:
> On Apr 28, 2005, at 10:54 AM, Daniel Berlin wrote:
> 
> > On Thu, 2005-04-28 at 10:23 -0700, Devang Patel wrote:
> >> On Apr 28, 2005, at 9:10 AM, Daniel Berlin wrote:
> >>
> >>> 1. make bootstrap on a 2.4ghz p4 takes 90 minutes for me as of
> >>> yesterday.
> >>> 2. Building XLC with (C,C++,Fortran) and a single backend takes
> >>> roughly
> >>> the same time as building GCC.  And they aren't three staging,  
> >>> AFAIK.
> >>
> >> "..ain't the same ballpark, it ain't the same league,
> >> it ain't even the same..." :-)
> >
> > Bullshit.
> > They are both production quality compilers.
> >
> > People complain gcc bootstrap is too long. XLC at O2 (which is what  
> > they
> > compile at, last i looked) isn't running the IPA middle end, etc.
> 
> Again,
> 
> "..ain't the same ballpark, it ain't the same league,
> it ain't even the same *** sport..."
> 
> Software A provides X1 functionality.
> Software B provides X2 functionality.
> 

This has nothing to do with it.
We are talking about bootstrap times of compilers with roughly
equivalent functionality.

They both compile C, C++, and fortran.

You can assume our backends and frontends are comparable in terms of
functionality (they actually are: not equivalent in the performance
generated by that functionality, but comparable in the time taken by each
functional part).

As I've said, we've already excluded the running time for the
functionality we don't have (the interprocedural middle end).

--Dan



Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread David Edelsohn
> Matt Thomas writes:

Matt> So in the past day, I've done bootstrap with just c,c++,objc on
Matt> both 3.4 and gcc4.1.
Matt> While taking out fortran and java reduced the disparity, there
Matt> is still a large increase in bootstrap times from 3.4 to 4.1.

libstdc++ contains a lot more features in GCC 4.1, especially
TR1.

We understand that a lot of people use modern GCC on older
hardware and the compilation speed can be frustrating.  GCC supports a
very diverse group of processors with greatly varying performance.  Users
also want new features and want to utilize newer hardware performance for
more optimization in the same wall clock time.  It is difficult, if not
impossible, to satisfy everyone.

The GCC developers are trying to improve compile time, but there
are no magic bullets.  If you provide testcases, developers do investigate
ways to improve compile time when they have examples to test.

It would be helpful if this discussion thread distinguished
between compile time and bootstrap time.  The two are related but not
identical.  While GCC compile time needs to improve, the amount of time
that it takes to build GCC itself, relative to other commercial compilers
with similar features and runtimes, is similar.

David



successful build of GCC 4.0.0 on Mac OS 10.3.9 (bootstrap, Fortran95)

2005-04-28 Thread Bojan Antonovic
Note:
- I built GMP 4.1.4 with MPFR 4.1 myself.
- I switched to GNU make and updated some other tools as available
(http://gcc.gnu.org/install/prerequisites.html).
- Building fails if the standard tools from Mac OS 10.3.9 are used! The
prerequisites changed!
- Other languages will come later.

/usr/local/bin/gcc-4.0.0f -v
Using built-in specs.
Target: powerpc-apple-darwin7.9.0
Configured with: ../gcc-4.0.0/configure --program-suffix=-4.0.0f 
--enable-languages=f95
Thread model: posix
gcc version 4.0.0

./config.guess
powerpc-apple-darwin7.9.0
uname -a
Darwin Bojan-Antonovics-Computer.local 7.9.0 Darwin Kernel Version 7.9.0: Wed 
Mar 30 20:11:17 PST 2005; root:xnu/xnu-517.12.7.obj~1/RELEASE_PPC  Power 
Macintosh powerpc
gcc -v
Reading specs from /usr/local/lib/gcc/powerpc-apple-darwin7.6.0/3.4.3/specs
Configured with: ../gcc-3.4.3/configure --program-suffix=-3.4.3 
--enable-languages=c,c++,objc,java,f77
Thread model: posix
gcc version 3.4.3
greetings
   Bojan


Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Hugh Sasse
On Thu, 28 Apr 2005, Ian Lance Taylor wrote:

> If you can include the preprocessed file and profiler output from cc1
> running on your system, there is a chance that this can be addressed.

GCC comes with a test suite and a means for submitting results.  May
I suggest that it might be useful to have a timing suite built
similarly, so people can do this kind of thing really easily?  Maybe
someone can tell us if it is too big a job to create such a tool
portably?  I think having it run during stage3 would yield
useful results for comparisons.  A tool that came with GCC would
ensure the data was in a useful format, and that tests were
performed consistently.

Hugh


Re: Ada test suite

2005-04-28 Thread Laurent GUERBY
On Thu, 2005-04-28 at 10:39 -0700, Janis Johnson wrote:
> I'll look at the DejaGnu aspects of the patch and comment on them, but
> someone involved with Ada should maintain it.

Sounds fair, but then don't hesitate to add comments in the patch
so dejagnu illiterates don't feel lost :).

Thanks for looking into this!

Laurent



gcc@gcc.gnu.org

2005-04-28 Thread Richard Guenther
Joseph S. Myers wrote:
> On Thu, 28 Apr 2005, Richard Guenther wrote:
> 
> 
>>The patch was bootstrapped and tested on i686-pc-linux-gnu for
>>the C language with the only remaining regression being c99-init-4.c
>>(I didn't manage to find the place to fix).
> 
> 
> You don't say how it regresses.  What diagnostic is it generating, what 
> code is generating it (for diagnostics GCC can generate in more than one 
> place), what are the relevant trees or other variables that caused the 
> diagnostic to be reached and what were they without the patch to cause it 
> not to be reached?

Yeah, sorry.  The failure is

FAIL: gcc.dg/c99-init-4.c (test for excess errors)
Excess errors:
/net/alwazn/home/rguenth/src/gcc/cvs/gcc-4.1/gcc/testsuite/gcc.dg/c99-init-4.c:8:
error: initializer element is not constant
/net/alwazn/home/rguenth/src/gcc/cvs/gcc-4.1/gcc/testsuite/gcc.dg/c99-init-4.c:8:
error: (near initialization for 'a[0]')

I guess the checking code may be confused by seeing &a[0] instead of
&a, as in the places I fixed elsewhere.  But I wasn't able to follow it
with the debugger, and we don't have a tree dump available at this
stage.  I didn't compare with an unpatched run; also, the point that
fails (c-typeck.c:5755) seems to be reached, or not reached, with the
same tree (though that can't be).  Well, I got bored staring at gdb
and didn't debug this further.  I'll try again if the change is
considered a good idea.  But note that I guess there may be fallout
that is not exercised by the testsuite.

Richard.


std::string support UTF8?

2005-04-28 Thread Laurielle Lea
Hello,
I would just like to know whether the string class of libstdc++ supports
UTF-8, and if not, whether it is possible to convert a string to UTF-8?

Thanks a lot.
Regards,
Laurielle LEA
--
Laurielle LEA
Savoir-faire Linux inc.
http://www.savoirfairelinux.com


Re: std::string support UTF8?

2005-04-28 Thread Andrew Pinski
On Apr 28, 2005, at 3:08 PM, Laurielle Lea wrote:

> Hello,
> I would like just to know if string class of libstdc++ support UTF8
> and if not, is it possible to convert string to utf8 ?

wstring supports wide strings via wchar_t.  string supports just 8bit
chars so you figure it out.

This question is not appropriate for this list, next time use gcc-help
or ask your question on a C++ news group like comp.lang.c++.
-- Pinski


Re: std::string support UTF8?

2005-04-28 Thread Zack Weinberg
Andrew Pinski <[EMAIL PROTECTED]> writes:

> On Apr 28, 2005, at 3:08 PM, Laurielle Lea wrote:
>
>> Hello,
>>
>> I would like just to know if string class of libstdc++ support UTF8
>> and if not, is it possible to convert string to utf8 ?
>
> wstring supports wide strings via wchar_t.  string supports just 8bit
> chars so you figure
> it out.
>
> This question is not appropriate for this list, next time use gcc-help
> or ask your question on a C++ news group like comp.lang.c++.

This is the sort of unnecessarily rude response to a question that I
was just saying we should avoid.  It is fine to redirect these
questions to a more appropriate forum, but please be polite about it.

zw


Re: different address spaces

2005-04-28 Thread Paul Schlie
> From: Martin Koegler <[EMAIL PROTECTED]>
>> On Thu, Apr 28, 2005 at 12:37:48PM -0400, Paul Schlie wrote:
>>> Martin Koegler wrote:
>>> I have redone the implementation of the eeprom attribute in my prototype.
>>> It is now a cleaner solution, but requires larger changes in the core,
>>> but the changes in the core should not affect any backend/frontend, if
>>> it does not uses them (except a missing case in tree_copy_mem_area, which
>>> will cause an assertion to fail).
>>> ...
>>> +void
>>> +tree_copy_mem_area (tree to, tree from)
>>> 
>> 
>> Alternatively might it make sense to utilize the analogy defined in rtl.h?
>> 
>>   /* Copy the attributes that apply to memory locations from RHS to LHS.  */
>>   #define MEM_COPY_ATTRIBUTES(LHS, RHS)\
>> (MEM_VOLATILE_P (LHS) = MEM_VOLATILE_P (RHS),\
>>  MEM_IN_STRUCT_P (LHS) = MEM_IN_STRUCT_P (RHS),\
>>  MEM_SCALAR_P (LHS) = MEM_SCALAR_P (RHS),\
>>  MEM_NOTRAP_P (LHS) = MEM_NOTRAP_P (RHS),\
>>  MEM_READONLY_P (LHS) = MEM_READONLY_P (RHS),\
>>  MEM_KEEP_ALIAS_SET_P (LHS) = MEM_KEEP_ALIAS_SET_P (RHS),\
>>  MEM_ATTRS (LHS) = MEM_ATTRS (RHS))
>> 
>> As unfortunately GCC already inconsistently maintains and copies attributes
>> to memory references, it seems that introducing yet another function to do
>> so will only likely introduce more inconsistency.
>> 
>> Therefore wonder if it may be best to simply define MEM_ATTRS as you have
>> done, and then consistently utilize MEM_COPY_ATTRIBUTES to properly copy
>> attributes associated with memory references when new ones as may need to
>> be constructed (as all effective address optimizations should be doing, as
>> otherwise the attributes associated with the original reference will be
>> lost). I.e.:
>> ...
> If you want to use the memory attributes after all reload and optimization
> passes, GCC will need to be extended with the missing set of the memory
> attributes. This is not my goal (I try to provide the correct
> MEM_REF_FLAGS for the RTL expand pass with all necessary earlier steps
> to get the address spaces information).
> 
> Correcting all this, will be a lot of work. We may not forget the machine
> description, which can also create MEMs.

- I presume that machine descriptions which already don't maintain mem
  attributes simply don't need them, as they most likely presume every
  pointer simply references the same memory space (hence don't care).

> I have updated my patch.
> 
> For the MEM_AREA for the tree, I have eliminated many explicit set operation
> of this attribute (build3_COMPONENT_REF and build4_ARRAY_REF completely).
> 
> For certain tree codes, the build{1,2,3,4} automatically generate the correct
> value of MEM_AREA out of their parameters. Only for INDIRECT_REF, this is
> not possible.

- if the original mem ref attributes derived from their originally enclosed
  symbol were maintained, any arbitrary type of memory reference would work.

> If we want get every time the correct attributes, we need to add a source for
> the memory attributes to gen_rtx_MEM. For an automatic generation of the
> memory attributes, to few information is in the RTL available.

- that would seem like a convenient way to do it, as opposed to folks having
  to remember to do it properly.

> I have added compatibility checking for memory areas as well as correct
> handling of them for ?:.
> 
> The new version is at http://www.auto.tuwien.ac.at/~mkoegler/gcc/gcc1.patch

- thanks.




Re: different address spaces

2005-04-28 Thread Martin Koegler
On Thu, Apr 28, 2005 at 03:43:22PM -0400, Paul Schlie wrote:
> > For the MEM_AREA for the tree, I have eliminated many explicit set operation
> > of this attribute (build3_COMPONENT_REF and build4_ARRAY_REF completely).
> > 
> > For certain tree codes, the build{1,2,3,4} automatically generate the 
> > correct
> > value of MEM_AREA out of their parameters. Only for INDIRECT_REF, this is
> > not possible.
> 
> - if the original mem ref attributes derived from their originally enclosed
>   symbol were maintained, any arbitrary type of memory reference would work.

Couldn't an optimizer theoretically change the structure of a memory
reference so fundamentally that the attributes would no longer be valid?

For address spaces, the biggest problem arises if access operations to
different address spaces are joined:

e.g.:

  if (???)
    {
      REG_1 = ...;
      REG_1 += 4;
      do ((MEM REG_1));
    }
  else
    {
      REG_2 = ...;
      REG_2 += 4;
      do ((MEM REG_2));
    }

where REG_1 and REG_2 are pointers to different address spaces,

transformed to:

  if (???)
    REG_1 = ...;
  else
    REG_1 = ...;
  REG_1 += 4;
  do ((MEM REG_1));

to, e.g., save space.

Even at the tree level, such a change could be done by an optimizer
if it does not take care of the address spaces.
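
A source-level sketch of that hazard (the EEPROM_SPACE qualifier below is
purely hypothetical, spelled as a no-op macro so the snippet still compiles):
if the two loads are merged into one pointer register without regard to the
qualifier, the address-space distinction is lost.

  #define EEPROM_SPACE   /* hypothetical address-space qualifier; a no-op here */

  extern unsigned char EEPROM_SPACE *ep;   /* pointer into the eeprom space */
  extern unsigned char *rp;                /* ordinary RAM pointer */

  unsigned char pick (int cond)
  {
    if (cond)
      return ep[4];   /* must be expanded as an eeprom access */
    else
      return rp[4];   /* must be expanded as a normal memory access */
  }

  /* Cross-jumping both arms into "REG = ...; REG += 4; ... (MEM REG)" is
     only valid if both MEMs carry, and keep, the same area attribute.  */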

At the RTL level, a problem could arise if some information about, e.g.,
how the data is packed were stored in the memory attributes.

If an optimizer decides that not the original pointer value but a pointer
to an address inside the data would be more useful, then simply copying
the attributes may give a wrong view of the new MEM.
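
A small sketch of that interior-pointer concern in plain C (not taken from
the patch): the rewritten access no longer looks like the original
COMPONENT_REF, so neither dropping the old attributes nor copying them
blindly describes the new MEM correctly.

  struct packed_pair
  {
    char tag;
    int value;
  } __attribute__ ((packed));

  /* Original form: the COMPONENT_REF carries the packed/misaligned
     information, so a strict-alignment target emits a safe (byte-wise) load.  */
  int read_direct (struct packed_pair *p)
  {
    return p->value;
  }

  /* Optimizer-style rewrite through a pointer into the middle of *p: the
     access is now a plain dereference of an int *, and if the packed
     information of the original reference is not carried over, a
     strict-alignment target may emit a faulting aligned load.  */
  int read_interior (struct packed_pair *p)
  {
    int *q = (int *) ((char *) p + 1);
    return *q;
  }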

From my experiments with GCC, I conclude that if we want any kind of
attributes (either in trees or in RTL), everything that deals with them
needs to know about all the required side effects, which can be a problem
for backend-specific attributes.

Introducing support for named address spaces in GCC would not be a big
problem; it should require no changes to non-aware backends or frontends.
It could be written in such a way that a bug in this code causes no
regressions for targets not using address spaces.

The big amount of work will be to verify that no optimizer introduces
wrong optimizations.

For my patch, I am still not sure whether I handle all the MEM_AREA
correctly in all situations, or whether I need to add MEM_AREA to other
expressions too.

mfg Martin Kögler


Re: Should there be a GCC 4.0.1 release quickly?

2005-04-28 Thread Mark Mitchell
Joe Buck wrote:

> On Thu, Apr 28, 2005 at 07:31:32PM +0200, Steven Bosscher wrote:
>> Yeah, but in this case the patch that introduced the bug was one of
>> the last to go in before the release (it was the fix for PRs 20490
>> and 20929, the patch for that went in on April 17).  So it was more
>> an unfortunate fix than a typical .0 bug.
>
> Again, that's not surprising.  There's always a risk that a patch can
> create a new bug, and the regression test suite only proves we haven't
> re-broken old bugs.  Late fixes are particularly risky, because of the
> limited testing they get.

Exactly.  The paradox here is that I get a ton of mail right before a 
release with people who want to include additional patches (which are 
always safe, obviously correct, and fix critical problems), and then I 
often get mail after the release from people who want to do another 
release quickly to fix critical problems (by applying safe, obviously 
correct patches) introduced by the previously applied patches. :-)

I'm not at all surprised that GCC 4.0.0 has bugs, even relatively 
serious ones.  It's a .0 release with an entirely new optimizer.  If we 
didn't get any bug reports about serious problems I'd just conclude that 
nobody was using it.  I certainly would have been amazed to find 
distributors shipping unpatched 4.0.0 to customers; I know CodeSourcery 
would not!

I'd rather not rush to a 4.0.1 release.  I'm happy to consider moving it 
up a bit, but I'd like to find out what the top five or ten critical 
problems are and try to fix all five or ten, rather than rush out 
another release which fixes one or two of those problems, but still 
leaves people with trouble on the remainder.

So, how about we hang in there another week or so, and see what things 
look like?  In the meantime, I'm trying to plan 3.4.4.

--
Mark Mitchell
CodeSourcery, LLC
[EMAIL PROTECTED]
(916) 791-8304


Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Daniel Berlin
On Thu, 2005-04-28 at 11:03 -0700, Matt Thomas wrote:
> Someone complained I was unfair in my gcc bootstrap times since
> some builds included libjava/gfortran and some did not.
> 
> So in the past day, I've done bootstrap with just c,c++,objc on
> both 3.4 and gcc4.1.  I've put the results in a web page at
> http://3am-software.com/gcc-speed.html.  The initial bootstrap
> compiler was gcc3.3 and they are all running off the same base
> of NetBSD 3.99.3.
> 
> While taking out fortran and java reduced the disparity, there
> is still a large increase in bootstrap times from 3.4 to 4.1.

I'm sure you used --disable-checking on the 4.1 test, right?
Since otherwise, it's a worthless comparison.



Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Daniel Berlin

On Thu, 28 Apr 2005, Daniel Berlin wrote:

> On Thu, 2005-04-28 at 11:03 -0700, Matt Thomas wrote:
>> Someone complained I was unfair in my gcc bootstrap times since
>> some builds included libjava/gfortran and some did not.
>> So in the past day, I've done bootstrap with just c,c++,objc on
>> both 3.4 and gcc4.1.  I've put the results in a web page at
>> http://3am-software.com/gcc-speed.html.  The initial bootstrap
>> compiler was gcc3.3 and they are all running off the same base
>> of NetBSD 3.99.3.
>> While taking out fortran and java reduced the disparity, there
>> is still a large increase in bootstrap times from 3.4 to 4.1.
> I'm sure you used --disable-checking on the 4.1 test, right?
> Since otherwise, it's a worthless comparison.

Forget it, I missed it in the light green on black text.
:)



Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Dan Kegel
Peter Barada wrote:
The alternative of course is to do only crossbuilds.  Is it reasonable
to say that, for platforms where a bootstrap is no longer feasible, a
successful crossbuild is an acceptable test procedure to use instead?

A successful crossbuild is certainly the minimum conceivable standard.
Perhaps one should also require bootstrapping the C compiler alone;
that would provide at least some sanity-checking.

Unfortunately for some of the embedded targets (like the ColdFire V4e
work I'm doing), a bootstrap is impossible due to limited memory and
no usable mass-storage device on the hardware I have available, so
hopefully a successful crossbuild will suffice.
How about a successful crossbuild plus
passing some regression test suite,
e.g. gcc's, glibc's, and/or ltp's?
Any one of them would provide a nice reality check.
- Dan
--
Trying to get a job as a c++ developer?  See 
http://kegel.com/academy/getting-hired.html


Re: different address spaces

2005-04-28 Thread Paul Schlie
> From: Martin Koegler <[EMAIL PROTECTED]>
>> On Thu, Apr 28, 2005 at 03:43:22PM -0400, Paul Schlie wrote:
>>> For the MEM_AREA for the tree, I have eliminated many explicit set operation
>>> of this attribute (build3_COMPONENT_REF and build4_ARRAY_REF completely).
>>> 
>>> For certain tree codes, the build{1,2,3,4} automatically generate the
>>> correct
>>> value of MEM_AREA out of their parameters. Only for INDIRECT_REF, this is
>>> not possible.
>> 
>> - if the original mem ref attributes derived from their originally enclosed
>>   symbol were maintained, any arbitrary type of memory reference would work.
> 
> Can an optimizer theoretically not change fundamentally the structure of a
> memory reference, so that the attributes will not be valid any more?

- I don't see how that could be? As I would expect for example:

  static const char s[] = "abc"; // to declare a READONLY array of char (s.x)

  (volatile)x = (volatile char*)(&s[0] + 1); //
  (volatile)x = (volatile char*)(&s[0] + 1); // to generate something like:

   (set (mem (symb x)) (mem (plus (plus (symb s.x) (const 0)) (const 1))
   (set (mem (symb x)) (mem (plus (plus (symb s.x) (const 0)) (const 1))

  Which may be "optimized" by folding the constants representing the
  memory reference's effective address, thereby potentially being able
  to pre-calculate the references effective address:

   (set temp-ptr (plus (symbol s.x) 1))
   (set (mem (symb x)) (mem temp-ptr)) ; with either original mem ref used,
   (set (mem (symb x)) (mem temp-ptr)) ; or new one with attributes copied.

  Thus, although the effective address calculation itself may be
  "optimized", the fundamental attributes associated with the declared
  object must be maintained. In effect, all effective address
  optimizations logically occur within the attributed scope of the original
  memory reference's context, i.e.:

  (mem (some-arbitrary-effective-address-expression))
  ->
  (mem (some-optimized-effective-address-expression))

> For address spaces, the biggest problem can be, if access operations to
> different address spaces are joined:

- there's no such thing:

  ptr-diff = (mem ptr) +/- (mem ptr)
  (mem ptr) = (mem ptr) +/- ptr-diff

  That's all that's possible (all other interpretations are erroneous)
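
  In plain C terms (a minimal sketch of the constraint stated above), the only
  meaningful combinations are a pointer difference within one object, which
  yields an integer offset, and a pointer plus or minus such an offset, which
  stays in the same object and hence the same address space:

   #include <stddef.h>

   /* ptr-diff = ptr - ptr: both pointers must refer into the same object
      (and therefore the same address space).  */
   ptrdiff_t span (const char *lo, const char *hi)
   {
     return hi - lo;
   }

   /* ptr = ptr +/- ptr-diff: the result stays in the object/space of 'p'.  */
   const char *advance (const char *p, ptrdiff_t d)
   {
     return p + d;
   }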

> eg:
> if(???)
> {
> REG_1=...;
> REG_1+=4;
> do((MEM REG_1));
> }
> else
> {
> REG_2=...;
> REG_2+=4;
> do((MEM REG_2));
> }
> where REG_1 and REG_2 are pointer to different address spaces.
> 
> to:
> if()
> REG_1=...;
> else
> REG_1=...;
> REG_1+=4;
> do((MEM REG_1));
> 
> to eg. save space.

- which is erroneous unless both reg_1 and reg_2 are pointers to objects
  with identical mem ref attributes (which is also why mem attributes should
  be maintained correctly and consistently at the tree level throughout all
  phases of optimization, as GCC presently has no ability to differentiate
  between types of objects referenced through the use of typed pointers).

> Even at tree level, such a change could be done by an optimizer,
> if it does not take care of the address spaces.
> 
> For RTL level, a problem could be, if some information about
> eg how the data is packed, would be stored in the memory attributes.
> 
> If an optimizer decides, that not the original pointer value is important,
> but pointer to an address inside the data would be more useful, then simply
> copying the attributes may give a wrong view about the new MEM.
> 
> Because of my experiments with GCC, I conclude, that if we want any kind of
> attributes (either in tree or RTL), everything, which deal with it, need
> to know about all needed side effects, which can be a problem for
> backend specific attributes.
> 
> Introducing support for named address spaces in GCC would not be a big
> problem, it should be no change for non aware backends as frontends. It
> could be written in such way, that an bug in this code cause no regression
> for not address space using targets.
> 
> The big amount of work will be, to verify that no optimizer will introduce
> wrong optimizations.
> 
> For my patch, I am still not sure, if I even handle all the MEM_AREA correct
> in all sitations or if I need to add the MEM_AREA to other expression too.

- unfortunately I don't believe there's any option, as GCC is already
  suffering from corners being cut by not properly differentiating between
  addresses and offsets, which only further confuses things when attempting
  to identify the type of object an arbitrary resulting pointer actually
  references; however it seems that restricting effective address
  optimizations to occur only within the context of their original mem ref
  representation is a great start, and will likely quickly shake out any
  mis-optimizations that may remain. (Otherwise, GCC simply won't be able
  to reliably identify the memory attributes associated with an arbitrary
  effective address, which may also need to be visible to the back-end.)




Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Dan Nicolaescu
Dan Nicolaescu <[EMAIL PROTECTED]> writes:

  > Matt Thomas <[EMAIL PROTECTED]> writes:
  > 
  >   > Richard Henderson wrote:
  >   > > On Tue, Apr 26, 2005 at 10:57:07PM -0400, Daniel Jacobowitz wrote:
  >   > > 
  >   > >>I would expect it to be drastically faster.  However this won't show 
up
  >   > >>clearly in the bootstrap.  The, bar none, longest bit of the bootstrap
  >   > >>is building stage2; and stage1 is always built with optimization off 
and
  >   > >>(IIRC) checking on.
  >   > > 
  >   > > 
  >   > > Which is why I essentially always supply STAGE1_CFLAGS='-O -g' when
  >   > > building on risc machines.
  >   > 
  >   > Alas, the --disable-checking and STAGE1_CFLAGS="-O2 -g" (which I was
  > 
  > I don't think that is enough,  also edit gcc/Makefile.in and change the 
line:
  > STAGE1_CHECKING = -DENABLE_CHECKING -DENABLE_ASSERT_CHECKING
  > to be
  > STAGE1_CHECKING =   
  > 
  > Is there a better way to do this? STAGE1_CHECKING is not passed from
  > the toplevel make, so one cannot put it on the make bootstrap command
  > line...

Following up on this to show some numbers. 
On a 2.8GHz P4, 1MB cache, 512MB RAM, Fedora Core 3
gcc HEAD configured with --enable-languages=c --disable-checking --disable-nls 

time make bootstrap > & out
1227.861u 53.623s 21:26.53 99.6%0+0k 0+0io 18pf+0w

If gcc/Makefile.in is first edited as shown above, then the bootstrap
time is: 
983.769u 53.241s 17:33.12 98.4% 0+0k 0+0io 3573pf+0w

A significant difference!

Given that with --enable-languages=c the amount of stuff built with
the slow stage1 compiler is minimal, the impact might be even higher
when more languages are enabled (I haven't tried). 

It would be nice to have some way to disable STAGE1_CHECKING other
than by editing gcc/Makefile.in.  (Sorry, I can't help with this; I
don't know much about how the gcc configuration process works.)

  


Re: FW: GCC Cross Compiler for cygwin

2005-04-28 Thread James E Wilson
Amir Fuhrmann wrote:
checking whether byte ordering is bigendian... cross-compiling...
unknown
checking to probe for byte ordering... /usr/local/powerpc-eabi/bin/ld: warning: 
cannot find entry symbol _start; defaulting to 01800074
Looking at libiberty configure, I see it first tries to get the 
byte-endian info from sys/params.h, then it tries a link test.  The link 
test won't work for a cross compiler here, so you have to have 
sys/params.h before building, which means you need a usable C library 
before starting the target library builds.  But you need a compiler 
before you can build newlib.

You could try doing the build in stages, e.g. build gcc only without the 
target libraries, then build newlib, then build the target libraries. 
Dan Kegel's crosstool scripts do something like this with glibc for 
linux targets.

Or you can do a combined tree build which will work for embedded targets 
and is simpler, i.e. put newlib inside the gcc source tree, as a sister 
directory to libiberty and libstdc++.  This is what most of us 
developers do.  In this case, newlib will be auto-detected and built 
after gcc and before libstdc++, and used when configuring the target 
libiberty.

You can also put other stuff in the combined tree, like binutils and 
gdb, but if you aren't using the head of the CVS trees, you may need to 
resolve conflicts.  See for instance
http://gcc.gnu.org/simtest-howto.html
--
Jim Wilson, GNU Tools Support, http://www.SpecifixInc.com


'make bootstrap' oprofile (13% on bash?)

2005-04-28 Thread Scott A Crosby
On Thu, 28 Apr 2005 10:29:32 +0100, Andrew Haley <[EMAIL PROTECTED]> writes:

>  > -- and it wouldn't surprise me if the libjava build procedure were a
>  > major contributor there.
>
> Yes.  This is a profile of the libgcj build.  The single biggest user
> of CPU is the libtool shell script, which uses more than a quarter of
> the total (non-kernel) CPU time.  However, it's important not to be
> misled -- I'm sure the linker causes a huge amount of disk activity,
> so it's not just CPU time that is important.
>
> Having said that, I suspect that the single biggest improvement to the
> libgcj build time would be either to remove the libtool shell script
> altogether or find some way to reduce its use or make it faster.
>

bash is the third-most prominent program in the result of a full 'make
bootstrap' oprofile on gcc-4.1-20050424 configured with
--disable-checking. A few other breakdowns, including for jc1 and cc1
exist on http://www.cs.rice.edu/~scrosby/tmp/GCC/

This was done on a machine with 1.5 GB of RAM. As the build+src
directories add up to 1.1 GB, there shouldn't have been any disk IO.

(Excerpted out of GCC-overall)

CPU: Athlon, speed 1830.27 MHz (estimated)
Counted CPU_CLK_UNHALTED events (Cycles outside of halt state) with a unit mask 
of 0x00 (No unit mask) count 180
CPU_CLK_UNHALT...|
  samples|      %|
------------------
  1571655 35.9269 /tmp/gccbuild/gcc/cc1
  1108036 25.3289 /tmp/gccbuild/gcc/jc1
   573302 13.1053 /bin/bash
   178865  4.0887 /tmp/gccbuild/gcc/cc1plus
   150641  3.4435 /bin/sed
   111300  2.5442 /usr/lib/gcc-lib/i486-linux/3.3.5/cc1
   110216  2.5424 /usr/bin/as
    60841  1.4034 /tmp/gccbuild/gcc/build/genattrtab
    44633  1.0296 /usr/local/src/linux-2.6.10/vmlinux
    38924  0.8979 /usr/bin/ld
    23191  0.5350 /usr/bin/expr
Scott


Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Peter Barada

>> Unfortunately for some of the embedded targets(like the ColdFire V4e
>> work I'm doing), a bootstrap is impossible due to limited memory and
>> no usable mass-storage device on the hardware I have available, so
>> hopefully a successful crossbuild will suffice.
>
>How about a successful crossbuild plus
>passing some regression test suite,
>e.g. gcc's, glibc's, and/or ltp's?
>Any one of them would provide a nice reality check.

I'm open to running them if there's a *really* clear how-to to do it
that takes into account remote hardware.

-- 
Peter Barada
[EMAIL PROTECTED]


Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Jason Thorpe
On Apr 27, 2005, at 12:57 PM, Steven Bosscher wrote:

> Maybe the older platform should stick to the older compiler then,
> if it is too slow to support the kind of compiler that modern
> systems need.

This is an unreasonable request.  Consider NetBSD, which runs on new
and old hardware.  The OS continues to evolve, and that often
requires adopting newer compilers (so e.g. other language features
can be used in the base OS).

The GCC performance issue is not new.  It seems to come up every so
often... the last time I recall a discussion on the topic, it was thought
that the new memory allocator (needed for PCH) was causing cache-thrashing
(what was the resolution of that discussion, anyway?)

-- thorpej


Re: GCC 4.1: Buildable on GHz machines only?

2005-04-28 Thread Jason Thorpe
On Apr 27, 2005, at 7:41 AM, David Edelsohn wrote:

> GCC now supports C++, Fortran 90 and Java.  Those languages have
> extensive, complicated runtimes.  The GCC Java environment is becoming
> much more complete and standards compliant, which means adding more and
> more features.

Except it's not just bootstrapping GCC.  It's everything.  When the
NetBSD Project switched from 2.95.3 to 3.3, we had a noticeable
increase in time to do the "daily" builds because the 3.3 compiler
was so much slower at compiling the same OS source code.  And we're
talking almost entirely C code, here.

-- thorpej