Re: Copies of the GCC repository

2005-11-23 Thread Arnaud Charlet
> Most svn-side operations create subpools for loops that may allocate
> per-iteration memory due to calls to other functions, etc., and clear them
> each iteration to avoid such per-iteration allocations becoming too large.
> Some don't.

FWIW, the first (and only) time I tried to do a svn diff on lots of
files (about 700), svn diff never returned: my machine started to swap
and be unresponsive like hell, and I had a very hard time hitting ^C
to stop it.

This was svn 1.2.3, dunno if svn 1.3.0 is better in this area, but it would
certainly be good to fix this behavior.

Currently, I have to do a 'for' loop in sh to work around this trivial
issue, which is certainly annoying, since I do use svn diff a lot.

Arno


Successful build & install of GCC 4.0.2

2005-11-23 Thread Laban, Marinko
Dear sirs,

As per your request in gcc-4.0.2/INSTALL/finalinstall.html, I herewith send
you the following information on my build & installation of GCC:

- config.guess states "i686-pc-linux-gnu"

- gcc -v output:
  Using built-in specs.
  Target: i686-pc-linux-gnu
  Configured with ../gcc-4.0.2/configure --prefix=/usr
--enable-languages=ada,c,c++
  Thread model: posix
  gcc version 4.0.2

- Linux version: Mandriva 2006.0

- glibc version: glibc-2.3.5-5mdk

I plan to upgrade to GNAT 4.1 and include Fortran and Java, but I started
with the most important languages for me (Ada95 and C).

Best regards,
Marinko Laban



Please visit this address

2005-11-23 Thread hoanglanhuong1980
I've found a data recovery place for you; call this number: 04-9875709.
They specialize in computer repair and data recovery. This is their
website: http://suachua.vnn.vn and you can check there first for the data
recovery prices and their exact address.


Re: Why doesn't combine like volatiles? (volatile_ok again, sorry!)

2005-11-23 Thread Richard Earnshaw
On Tue, 2005-11-22 at 19:10, Robert Dewar wrote:
> Dave Korn wrote:
> > Robert Dewar wrote:
> > 
> 
> >   Isn't it pretty much implied by point 1, "Not more than one volatile 
> > memory
> > ref appears in the instructions being considered"?  
> 
> No, that allows a volatile reference to be combined with something else.
> I think this is a mistake, because people often think of volatile as
> guaranteeing what Ada would call an atomic access, one instruction
> accessing just the variable and nothing else.

If there are N volatile accesses in the sequence under consideration,
there must be exactly N volatile accesses in the resulting combination
(the only thing I've missed in my list is to say that the volatile
reference can't be eliminated -- I've already restricted N to 1, and
rule 4 already said that the mode of the access couldn't change).

There is, however, a further restriction that I thought of last night. 
This is very hard to maintain generally, and may therefore be a near
show-stopper:

5) The instruction must never need to be restarted after the volatile
access has been accepted by the memory system.

This restriction rules out, for example, using a volatile value as an
input in many floating point operations (since the operations may trap
depending on the values read).
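
A minimal C-level sketch of the kind of pattern this rule is about; the
volatile location here is hypothetical, and the point is only that the read
must stay a separate load that never needs to be repeated:

volatile double sensor_value;   /* stand-in for a read-sensitive location */

double accumulate (double acc)
{
  /* The volatile read must remain its own instruction.  If combine folded
     it into the memory operand of a trapping FP add, restarting that
     instruction after the trap could read the location a second time.  */
  double v = sensor_value;
  return acc + v;
}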

R.


[C++] Should the complexity of std::list::size() be O(n) or O(1)?

2005-11-23 Thread 聂久焘

The C++ standard says Container::size() should have constant complexity
(ISO/IEC 14882:1998, p. 461, Table 65), while std::list::size() in the
current STL of GCC is defined as { return std::distance(begin(), end()); },
whose complexity is O(n).
 
Is it a bug?



Re: Why doesn't combine like volatiles? (volatile_ok again, sorry!)

2005-11-23 Thread Robert Dewar

Richard Earnshaw wrote:


This restriction rules out, for example, using a volatile value as an
input in many floating point operations (since the operations may trap
depending on the values read).


I don't see this at all. If you have a volatile variable that traps
in this situation, then that's just fine, you get the trap. The trap
cannot occur unless the program is wandering into undefined areas in
any case. Please give an exact scenario here, and explain why you
think the standard or useful pragmatic considerations demand the
treatment you suggest above.


R.





Re: Copies of the GCC repository

2005-11-23 Thread Daniel Berlin
On Wed, 2005-11-23 at 09:39 +0100, Arnaud Charlet wrote:
> > Most svn-side operations create subpools for loops that may allocate
> > per-iteration memory due to calls to other functions, etc., and clear them
> > each iteration to avoid such per-iteration allocations becoming too large.
> > Some don't.
> 
> FWIW, the first (and only) time I tried to do a svn diff on lots of
> files (about 700), svn diff never returned: my machine started to swap
> and be unresponsive like hell, and I had a very hard time hitting ^C
> to stop it.

Uh, i've never seen such behavior from *SVN*.

SVN diff requires stat'ing a lot of files currently (and in 1.2.3,
opening and reading a bunch of almost-empty files), but *memory usage*
should be close to nothing.

It'd be really nice if you could reproduce this behavior so i can fix
it.


--Dan



Re: [C++] Should the complexity of std::list::size() be O(n) or O(1)?

2005-11-23 Thread chris jefferson
聂久焘 wrote:
> The C++ standard says Container::size() should have constant complexity
> (ISO/IEC 14882:1998, p. 461, Table 65), while std::list::size() in the
> current STL of GCC is defined as { return std::distance(begin(), end()); },
> whose complexity is O(n).
>  
> Is it a bug?
>
>   
This question would be better asked on [EMAIL PROTECTED], the
mailing list of gcc's implementation of the C++ standard library.

This question comes up every so often.  In "official standard speak", the
word "should" has a specific meaning: an implementation is supposed to do
something unless there is a good reason not to.

The reason that size() is O(n) is to allow some of the splice functions
to be more efficient.  Basically, it's a tradeoff between fast splicing and
a fast size().

Note that empty() is O(1), as required by the standard, so if that's what
you want, you should use that.
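
For what it's worth, here is a small sketch of that tradeoff using the range
form of splice (illustrative only, not libstdc++ source):

#include <iostream>
#include <list>

int main ()
{
  std::list<int> a, b;
  for (int i = 1; i <= 5; ++i)
    a.push_back (i);

  /* Range splice: move all but the first element of 'a' into 'b' by
     relinking nodes.  With size() defined as distance(begin(), end())
     this is O(1); if size() were a cached count, the library would have
     to walk the spliced range to update both counts, making it O(n).  */
  b.splice (b.end (), a, ++a.begin (), a.end ());

  std::cout << "a.size() = " << a.size ()      /* O(n) walk here: 1 */
            << ", b.size() = " << b.size ()    /* O(n) walk here: 4 */
            << ", a.empty() = " << a.empty ()  /* O(1), as required */
            << '\n';
  return 0;
}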


Compilation of Ada for Arm failed

2005-11-23 Thread Frédéric PRACA
Hello,
I tried to build Ada for the target arm-rtems with the HEAD version (23112005)
and I get the following error :

gmake[4]: Entering directory
`/home/fred/Dev/Audiome/crossCompiler/buildGcc/gcc/ada/rts'
../../xgcc -B../../  -c -g -O2  -W -Wall -gnatpg  a-caldel.adb -o a-caldel.o
:0: warning: 'const' attribute directive ignored
:0: warning: 'nothrow' attribute directive ignored
../../xgcc -B../../  -c -g -O2  -W -Wall -gnatpg  a-calend.adb -o a-calend.o
:0: warning: 'const' attribute directive ignored
:0: warning: 'nothrow' attribute directive ignored
+===GNAT BUG DETECTED======+
| 4.2.0 20051123 (experimental) (arm-unknown-rtems) GCC error: |
| tree check: expected class 'expression', have 'exceptional'  |
|(constructor) in get_base_var, at ipa-utils.c:224 |
| Error detected at a-calend.adb:480:24|
| Please submit a bug report; see http://gcc.gnu.org/bugs.html.|
| Use a subject line meaningful to you and us to track the bug.|
| Include the entire contents of this bug box in the report.   |
| Include the exact gcc or gnatmake command that you entered.  |
| Also include sources listed below in gnatchop format |
| (concatenated together with no headers between files).   |
+==+

Please include these source files with error report
Note that list may not be accurate in some cases,
so please double check that the problem can still
be reproduced with the set of files listed.



raised TYPES.UNRECOVERABLE_ERROR : comperr.adb:380
gmake[4]: *** [a-calend.o] Erreur 1
gmake[4]: Leaving directory
`/home/fred/Dev/Audiome/crossCompiler/buildGcc/gcc/ada/rts'
---

So before submitting a bug report (if necessary), I wanted to know if someone
knows something about it. I successfully built

Fred



Re: Why doesn't combine like volatiles? (volatile_ok again, sorry!)

2005-11-23 Thread Richard Earnshaw
On Wed, 2005-11-23 at 11:41, Robert Dewar wrote:
> Richard Earnshaw wrote:
> 
> > This restriction rules out, for example, using a volatile value as an
> > input in many floating point operations (since the operations may trap
> > depending on the values read).
> 
> I don't see this at all. If you have a volatile variable that traps
> in this situation, then that's just fine, you get the trap. The trap
> cannot occur unless the program is wandering into undefined areas in
> any case. Please give an exact scenario here, and explain why you
> think the standard or useful pragmatic considerations demand the
> treatment you suggest above.
> > 
> > R.

Consider a non-load/store machine that has a floating-point operation that 
can add a value in memory to another register:

fadd Rd, Rs, (mem)  // Rd = Rs + (mem)

Now if (mem) is volatile and the value returned is a signalling NaN then
the trap handler has no way to recover that value except by
dereferencing (mem) again.  That means we access (mem) twice.

Or consider any type of 3-operand instruction where both source operands
come from memory.  If the first access is volatile and the second not
(but it page-faults) then, unless the machine has some way to cache the
first access, it will have to be repeated when the page fault handler
returns.
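
A hypothetical C-level sketch of that second scenario (the names are made
up); the problem only appears if both reads are folded into one instruction:

volatile double hw_sample;      /* assumed read-sensitive location */
double table_data[64];
double *table = table_data;     /* ordinary memory that may page-fault */

double mix (unsigned i)
{
  /* Keeping the volatile read as its own load means a page fault on
     table[i] only re-executes the non-volatile access when the handler
     returns; a combined 3-operand instruction could end up reading
     hw_sample twice.  */
  double v = hw_sample;
  return v + table[i];
}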

Finally, but probably less likely, consider a machine instruction that
can read multiple values from memory (ARM has one -- ldm).  If an
address in the list is volatile and the list crosses a page boundary,
the instruction may trap part way through execution.  In that case the
OS may have to unwind part of the instruction and retry it -- it can't
safely do that if there is a volatile access in the list.

R.


Re: Compilation of Ada for Arm failed

2005-11-23 Thread Eric Botcazou
> So before submitting a bug report (if necessary), I wanted to know if
> someone knows something about it. 

PR ada/22533.

-- 
Eric Botcazou


Re: Thoughts on LLVM and LTO

2005-11-23 Thread Diego Novillo
On Tuesday 22 November 2005 11:45, Daniel Berlin wrote:

> > Another minor nit is performance.  Judging by SPEC, LLVM has some
> > performance problems.  It's very good for floating point (a 9%
> > advantage over GCC), but GCC has a 24% advantage over LLVM 1.2 in
> > integer code.  I'm sure that is fixable and I only have data for an
> > old release of LLVM
>
> Uh, you are comparing 4 releases ago of LLVM, against the current
> release of gcc, and saying "It doesn't do as well".
>
Yes, that's why I said I needed more work.  I ran with the latest release I 
could find (LLVM 1.6).  I'm not quite sure how to hook up the gfortran FE 
to LLVM, so for SPECfp I could only run the C tests.  Also, at -O3 LLVM 
1.6 fails eon and perlbmk.

On x86 LLVM 1.6 still lags behind GCC in SPECint (7%) but the gap is 
narrower, an excellent sign.  For SPECfp, the difference is similar to 
what it was with 1.2 (LLVM's score is 10% better).

Chris mentioned that the PPC backend is better.  We wouldn't be using 
LLVM's back end, so I guess this is not really a problem.  

Processor:  Intel(R) Pentium(R) 4 CPU 2.26GHz (2260.065 Mhz)
Memory: 1034832 kB
Cache:  512 KB

Before Compiler
Compiler:   gcc version 3.4-llvm 20051104 (LLVM 1.6)
Peak flags: -O3 -Wl,-native-cbe

After Compiler
Compiler:   gcc version 4.1.0 20051117 (experimental)
Peak flags: -O3


SPECint results for peak

Benchmark       Before     After   % diff
164.gzip        550.30    659.84  + 19.90%
175.vpr         412.63    424.97  +  2.99%
176.gcc         726.04    759.82  +  4.65%
181.mcf         432.00    425.03  -  1.61%
186.crafty      507.09    680.06  + 34.11%
197.parser      557.70    610.13  +  9.40%
252.eon           0.00    575.67  INF
253.perlbmk       0.00    767.01  INF
254.gap         726.62    750.56  +  3.29%
255.vortex     1142.30    833.70  - 27.02%
256.bzip2       469.00    524.18  + 11.77%
300.twolf       488.30    532.43  +  9.04%
mean            573.20    614.47  +  7.20%


SPECfp result for peak

Benchmark       Before     After   % diff
168.wupwise       0.00    662.55  INF
171.swim          0.00    496.77  INF
172.mgrid         0.00    445.58  INF
173.applu         0.00    598.96  INF
177.mesa        521.26    427.31  - 18.02%
178.galgel        0.00    351.39  INF
179.art         366.68    189.75  - 48.25%
183.equake      838.25    858.94  +  2.47%
187.facerec       0.00    359.06  INF
188.ammp        352.94    360.97  +  2.27%
189.lucas         0.00    507.08  INF
191.fma3d         0.00    408.12  INF
200.sixtrack      0.00    404.31  INF
301.apsi          0.00    439.52  INF
mean            487.64    440.16  -  9.74%


Re: Why doesn't combine like volatiles? (volatile_ok again, sorry!)

2005-11-23 Thread Robert Dewar

Richard Earnshaw wrote:

Consider a non-load/store machine that has a floating-point operation that 
can add a value in memory to another register:


fadd Rd, Rs, (mem)  // Rd = Rs + (mem)

Now if (mem) is volatile and the value returned is a signalling NaN then
the trap handler has no way to recover that value except by
dereferencing (mem) again.  That means we access (mem) twice.

Or consider any type of 3-operand instruction where both source operands
come from memory.  If the first access is volatile and the second not
(but it page-faults) then, unless the machine has some way to cache the
first access, it will have to be repeated when the page fault handler
returns.


The above cases simply do not occur on any common machine, so perhaps
you would have to do strange things in those cases, but nothing like
the drastic generalization you first gave.  Generally, even on machines
with these kinds of instructions it is better to do separate loads
anyway.


Finally, but probably less likely, consider a machine instruction that
can read multiple values from memory (ARM has one -- ldm).  If an
address in the list is volatile and the list crosses a page boundary,
the instruction may trap part way through execution.  In that case the
OS may have to unwind part of the instruction and retry it -- it can't
safely do that if there is a volatile access in the list.


Well, don't use this instruction if one of the addresses is volatile.
That seems right in any case (it seems wrong to do other than generate
a single load/store for volatile anyway; this seems like improper
combining).


R.





Re: Thoughts on LLVM and LTO

2005-11-23 Thread Diego Novillo
On Tuesday 22 November 2005 13:17, Benjamin Kosnik wrote:

> What about compile-time performance?
>
Well, it's hard to say; I have not really used LLVM extensively.  The only 
real data I have is compile times for SPECint:

SPECint build times (secs)

                              -O2    -O3
GCC 4.1.0 (20051117)          354    398
LLVM 1.6 (-Wl,-native-cbe)    802    805

So there appears to be a factor of 2 slowdown in LLVM.  However, I know 
LLVM has a separate GCC invocation.  It seems that it emits C code that 
is then compiled with GCC (I'm using -Wl,-native-cbe).

I don't think this would be standard procedure in an integrated LLVM.  
Chris, how would one compare compile times?  Not using -Wl,-native-cbe 
implies emitting bytecode, right?


Re: dfp-branch merge plans

2005-11-23 Thread Joseph S. Myers
On Wed, 23 Nov 2005, Ben Elliston wrote:

> 3. Merge in libcpp and C (only) front-end changes.
> 
> 4. Merge in middle-end changes and internals documentation.

What have you done in the way of command-line options to enable the new 
features?

Specifically:

* Decimal fp constants are already preprocessing numbers in the standard 
syntax - but in standard C their conversion to a token requires a 
diagnostic (at least a pedwarn with -pedantic).  If decimal fp is to be 
usable with -pedantic an option is needed to disable that pedwarn/error.

* There is also an arguable case for diagnosing any use of the new 
keywords if -pedantic, unless such a special option is passed.

* Any choice of TTDT other than "double" (properly, the type specified by 
FLT_EVAL_METHOD, but we don't implement that) would also be incompatible 
with C99 and need specifically enabling.

(Previous versions of the DTR - such as the one linked from svn.html - 
also provided defined behavior for out-of-range conversions between binary 
float and integer types.  This appears to have been removed in the latest 
draft, so no code to implement this is needed.  However, such code would 
also have needed to be conditional unless benchmarking as per bug 21360 
showed no performance impact.)

-- 
Joseph S. Myers   http://www.srcf.ucam.org/~jsm28/gcc/
[EMAIL PROTECTED] (personal mail)
[EMAIL PROTECTED] (CodeSourcery mail)
[EMAIL PROTECTED] (Bugzilla assignments and CCs)


Re: Register Allocation

2005-11-23 Thread Michael Matz
Hi,

On Tue, 22 Nov 2005, Peter Bergner wrote:

> Spill Location Optimizer [page(s) 11]:
> * The IBM iSeries backend has this optimization.  During spilling,
>   it inserts copies to/from spill pseudos (essentially like another
>   register class) which represent the stores/loads from the stack.
>   These spill pseudos can then be dead code eliminated, coalesced
>   and colored (using an interference graph) just like any other
>   normal pseudo.  Normal Chaitin coloring (using k = infinity) does
>   a good job of minimizing the amount of stack space used for
>   spilled pseudos.

This is btw. also done by the new-ra branch.  Instead of spilling to stack 
directly it spills to special new pseudo regs.  The obvious problem with 
that is a phase ordering problem, namely that if you only have pseudo 
stack locations (the pseudo regs in this case) you don't know the final 
insn sequence (e.g. if the final stack offset happens to be 
unrepresentable so that insns are necessary to actually construct the 
stack address for the load/store).  That's why new-ra leaves the stack 
slots as pseudos only for one round, and then assigns real stack positions 
to them (and recolors the possibly new insns and affected pseudos).

> Spill Cost Engine [page(s) 26-29]:
> * The register allocator should not be estimating the execution
>   frequency of a basic block as 10^nesting level.  That information
>   should be coming from the cfg which comes from profile data or
>   from a good static profile.  The problem with 10^loop nesting
>   level is that we can overestimate the spill costs for some
>   pseudos.  For example:
>   while (...) {
>     ... use of "a" ...
>     if (...)
>       ... use of "b" ...
>     else
>       ... use of "b" ...
>   }
>   In the code above, "b"'s spill cost will be twice that of "a",
>   when they really should have the same spill cost.

Nearly.  "b" _is_ more costly to spill, code size wise.  All else being 
equal it's better to spill "a" in this case.  But the cost is of course 
not twice as large, as you say.  I.e. I agree with you that the metric 
should be based exclusively on the BB frequencies attached to the CFG, not 
any nesting level.  Also like in new-ra ;)


Ciao,
Michael.


SVN conversion glitch?

2005-11-23 Thread Jakub Jelinek
Hi!

While doing svn diff, I've noticed
gcc/config/i386/xm-dgux.h
gcc/config/i386/xm-sysv3.h
gcc/config/i386/xm-sun.h
gcc/config/i386/scodbx.h
files popped out of nowhere on the trunk (and through
4.1 branching also on gcc-4_1-branch).
The files according to ChangeLogs were clearly removed back in 2001
and don't appear on gcc-{3_1,3_2,3_3,3_4,4_0}-branch.
Could you please check what exactly happened with these files and
why they show on the trunk and 4.1 branch?

Thanks.

Jakub


Re: Register Allocation

2005-11-23 Thread Andrew MacLeod
On Fri, 2005-11-18 at 22:18 -0800, Ian Lance Taylor wrote:
> Andrew MacLeod <[EMAIL PROTECTED]> writes:

> Secondary/tertiary reloads and reload_in/out patterns are apparently
> subsumed by the Spill Engine.  Porting a target to the new framework
> is going to require providing the Spill Engine with the appropriate
> instructions for storing and loading each native register class,
> including a description of any temporary or scratch registers
> required.  That seems reasonable.  One minor thing that I don't quite
> understand is that bit about reaching a limit of SPILLIDs.  For
> targets with limited displacement, what matters is when you start
> needing larger displacements, which as far as I can see has more to do
> with the overall stack frame size than the number of SPILLIDs.
> 

There is a correlation between the number of SPILLIDs and the amount of
stack space you are going to need. If each spill takes 4 bytes, and
there are 1000 bytes available in the limited displacement pattern
(with any fixed requirements already taken out), then when SPILLID reaches 250,
you either have to try to compress SPILLIDs by some mechanism (such as
colouring them), or switch to the less efficient pattern which uses
other registers.  Since displacements aren't assigned yet, the number of
SPILLIDs is the only measure there is to decide when to do this. It is
possible that you could commit these spills to memory at this point, if
that served a purpose. 
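
A toy sketch of that heuristic (nothing target-specific; the numbers in the
comments are just the example figures above):

/* With displacements not yet assigned, the SPILLID count is the only
   available proxy for how much of the limited displacement range the
   spill area will consume.  E.g. bytes_per_spill = 4 and
   displacement_bytes = 1000 gives a threshold of 250 SPILLIDs.  */
bool
need_fallback_spill_pattern (int spill_ids, int bytes_per_spill,
                             int displacement_bytes)
{
  /* Past this point, either compress SPILLIDs (e.g. by colouring them)
     or switch to the less efficient pattern that builds the address in
     another register.  */
  return spill_ids * bytes_per_spill >= displacement_bytes;
}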

If the less efficient store doesn't actually have any different register
requirements, there is no need to go through this. The correct insn
would be chosen based on the eventually assigned displacement.  


> 
> Reload inheritance is a complex idea threaded through reload.  The
> goal is to reuse values which happen to already be in registers
> whenever possible.  In particular, to do this for reloads required for
> a single insn.  A typical simple example is 'a++' where 'a' winds up
> living on the stack at a large unrepresentable offset.  You don't want
> to wind up computing the address of 'a' twice, although that is what a
> naive spill code implementation.  That example is too simple, I
> suppose, but it has the right general flavor.
> 

As the spiller steps through the program, it tracks exactly what is in
each hardware register. It also uses lookahead to see how registers are
used in the near future. When a value is loaded, every attempt is made
to reuse that value.  This is one of the reasons spilled pseudos are
simply left alone until the spiller sees them. The two uses of the
address calculation of A should already be in the same pseudo when the
spiller sees them. It will simply reuse the value if they are close to
each other and it is possible with the various other register
constraints.  
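
A hypothetical C-level version of the 'a++' case; the array size is an
arbitrary assumption, there only to push 'a' past the displacement range of
a target like Thumb or MIPS16:

static void touch (int *p) { *p ^= 1; }   /* keeps 'a' live in memory */

int f (void)
{
  char big[70000];                /* forces a large stack frame */
  int a = 0;
  big[0] = 1;
  touch (&a);
  a++;          /* the load and the store of 'a' want the same
                   frame-pointer-plus-large-offset address; the spiller's
                   value tracking should compute that address once and
                   reuse it rather than rematerialize it.  */
  touch (&a);
  return a + big[0];
}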

> Reload inheritance is particularly important on machines with limited
> numbers of registers and limited addressing capability--e.g., SH,
> Thumb, MIPS16.  The current reload implementation needs to do reload
> inheritance as it goes, or it will run out of spill registers in some
> cases.  That is why a reload CSE pass after reload is not enough to
> solve the problem.

Which is one of the reasons the spiller understands how to spill
hardware registers. Sometimes there is no spill register available,
and/or there is a value which is used multiple times close together,
but no register available over the range. It may be better to
temporarily spill a value already assigned to a hardware register for a
period and free up the additional register.  That's where the lookahead
comes in real handy :-)


> The current reload pass includes general heuristics to handle
> reloading memory addresses.  This code knows things like "if stack
> pointer plus displacement is not a valid memory address, try loading
> the displacement into a register."  Many targets currently rely on
> those heuristics to generate valid code.  I haven't been able to quite
> pin down where this happens in your proposal.  For example, it's easy
> for an address to use the frame pointer and be valid before reload,
> and then for reload to eliminate the frame pointer (in fact, in your
> scheme, what does frame pointer elimination?) and produce an offset
> from the stack pointer which is invalid.  That is, spill code or frame
> pointer elimination can generate invalid address, and something needs
> to fix them up.  Where does that happen, and how?
> 


To be fair, I haven't given register elimination a lot of thought yet. I
was presuming it could be inserted as a pass or as a separate component
of the spiller.  Let me get back to you, I must investigate the
issues :-). A couple of prime examples would be helpful :-)  I'll write
up a new section for it.

> I don't want to argue that your scheme needs to solve every problem.
> But I think that any attempt to tackle the register allocator should
> think seriously about better integration with the scheduler.  I don't
> see anything about that in your proposal.

There isn't :

Re: Register Allocation

2005-11-23 Thread Andrew MacLeod
On Sun, 2005-11-20 at 01:37 +0100, Steven Bosscher wrote:
> On Thursday 17 November 2005 17:53, Andrew MacLeod wrote:
> > http://people.redhat.com/dnovillo/rable.pdf
> 
> How are the insn annotations and caches you propose different from
> what df.c already does?
> 

I don't know, I haven't really looked at them. They may very well be
similar. I indicated we need them, and if these are suitable for use, we
can either use them or twist them to my own nefarious purposes :-). I
will likely need to add additional information at a minimum.

Andrew



Re: Register Allocation

2005-11-23 Thread Andrew MacLeod
On Tue, 2005-11-22 at 13:26 -0600, Peter Bergner wrote:

> Register Coalescing [page(s) 8]:
> * We will probably need some method for telling the coalescer
>   that a particular copy (or type of copy/copies) should either
>   be completely ignored (ie, do not coalesce it even if it looks
>   coalescable) or to think very hard before it does.  Examples of
>   copies we might want to skip would be copies inserted to satisfy
>   instruction constraints or some type of spilling/splitting code.
>   Examples of copies we might want to think about before blindly
>   coalescing might be pseudo - hard register copies, or copies
>   that would lead to an unacceptable increase in register 
>   constraints.

sure, this wouldn't be hard to do.  that could be flagged right in the
insn annotation of the copy.

> 
> Global Allocation [page(s) 10]:
> * I'd like to keep the design flexible enough (if it's at all
>   possible) in case we want to attempt interprocedural register
>   allocation in the future.

I see no reason why that couldn't be done.

> * I'd also like to allow Classic Chaitin and Briggs algorithms
>   which do iterative spilling (ie, color -> spill -> color ...).
>   This would be beneficial in that it gives us a well known
>   baseline to which all other algorithms/spilling heuristics can
>   be compared to.  It has the added benefit of making publishing
>   easier if your baseline allocator is already implemented.
> 

The very very first cut may well do this. It avoids writing much of
spiller and gets you going. Just spew out stores to spill :-)
Regardless, it would be trivial to do this after the fact when you
already have the pass written.


> Spiller [page(s) 10-11]:
> * If we are doing a single pass Chaitin style allocation, then
>   I agree that the spiller would map all unallocated pseudos
>   into hardware registers.  However, if we are doing a multi
>   pass Chaitin style allocation, then the spiller should not
>   map the unallocated pseudos to hardware registers, but simply
>   insert the required spill code leaving the spilled pseudos as
>   pseudos.  The spiller should be flexible enough to do both.

That would be pretty straightforward. 

> * I can also envision someone wanting to spill only a subset
>   of the unallocated pseudos, so we should handle that scenario.

You could easily do this by flagging pseudos you don't want spill code
generated for.  The spiller would simply ignore those so flagged. Or
vice versa.


> Insn Annotations [page(s) 17-18]:

> * Encoding register pressure summary info as +1, -2 or 0 is fine,
>   but it is not a total solution.  In particular, it doesn't
>   handle register clobbers adequately.  An example would be the

True. we might need an additional value to represent that, or some other
mechanism. I figured we could work out those kinds of details when we
get closer to actually implementing it. The register pressure engine is
a bit further out than some of the rest of them, so I didn't put much
brainpower into it. It could evolve into something else easily. This
just seemed like a quick and dirty way to go.


>   for register classes with distinct subclasses (eg, GPRS and a
>   subset of GPRS capable of acting as index registers), would you
>   have separate counters?

I wasn't planning to track class subranges.  It might be an enhancement
someone may find interesting to try, and the intent is stated (perhaps
subtly) to make sure that adding additional register pressure values is
not difficult.  There is the expectation that at some point we may want
to track FPRs, GPRs and possibly all classes at the same time.  There is
no reason that couldn't be extended to include any particular subset you
found interesting.  This would have to be a separate value.

> 
> Spill Cost Engine [page(s) 26-29]:
> * The register allocator should not be estimating the execution
>   frequency of a basic block as 10^nesting level.  That information
>   should be coming from the cfg which comes from profile data or
>   from a good static profile.  The problem with 10^loop nesting

Absolutely correct. That was simply thrown in as a simple starting
point.  I had forgotten that we now have static/dynamic CFG info always
available, so certainly we use those values rather than an ancient
mechanism that assumed nothing was available.  I will update the
document.

>   level is that we can overestimate the spill costs for some
>   pseudos.  For example:
>   while (...) {
>     ... use of "a" ...
>     if (...)
>       ... use of "b" ...
>     else
>       ... use of "b" ...
>   }
>   In the code above, "b"'s spill cost will be twice that of "a",
>   when they really should have the same spill cost.

yes, using the CFG info they will be almost the same. B would have a
slightly higher instruction count, so if that is used to break ties, it
will be ev

Re: The actual LLVM integration patch

2005-11-23 Thread Rafael Espíndola
On 11/22/05, Chris Lattner <[EMAIL PROTECTED]> wrote:
> This is a patch vs the Apple branch as of a few weeks ago.  The diff is in
> gcc.patch.txt, the new files are included in the tarball.
apple-local-200502-branch rev 104970 I think.

Rafael


Re: Thoughts on LLVM and LTO

2005-11-23 Thread Chris Lattner

On Wed, 23 Nov 2005, Diego Novillo wrote:


On Tuesday 22 November 2005 13:17, Benjamin Kosnik wrote:


What about compile-time performance?


Well, it's hard to say; I have not really used LLVM extensively.  The only
real data I have is compile times for SPECint:

SPECint build times (secs)

                              -O2    -O3
GCC 4.1.0 (20051117)          354    398
LLVM 1.6 (-Wl,-native-cbe)    802    805

So there appears to be a factor of 2 slowdown in LLVM.  However, I know
LLVM has a separate GCC invocation.  It seems that it emits C code that
is then compiled with GCC (I'm using -Wl,-native-cbe).


Wow, only 2x slowdown?  That is pretty good, considering what this is 
doing.  I assume you're timing a release build here, not a debug build.


In any case, the LLVM time above includes the following:
1. An incredibly inefficient compile-time stage that is going away in the
   newly integrated compiler.  This involves producing a giant .ll file,
   writing it to disk (cache) then parsing the whole thing back in.  This
   was a pretty expensive process that existed only to avoid linking LLVM
   into GCC.
2. This time includes *full* linktime IPO (the old llvm-gcc wasn't
   integrated well enough to have -O options :( ).
3. This time includes the time to convert the LLVM code to C, write out a
   really large C file for the entire program, then fork/exec 'gcc -O2' on
   the .c file.

Considering that the slowdown is only a factor of two with all that going 
on, I think that's pretty impressive. :)



I don't think this would be standard procedure in an integrated LLVM.
Chris, how would one compare compile times?  Not using -Wl,-native-cbe
implies emitting bytecode, right?


Correct, if you're timing build times, eliminating the -Wl,-native-cbe 
will give you a sense for how expensive #3 is (which will just leave you 
with LLVM .bc files).  I suspect that it is about 30% or more of the 
llvm-gcc time you report above.  For another data point, you can compile 
with '-Wl,-native' instead of -Wl,-native-cbe which will give you the LLVM 
native X86 backend.  It should be significantly faster and should provide 
another interesting performance datapoint (though admittedly probably not 
very good, due to the X86 backend needing work).


In any case, one of the major motivating factors is reduced compile times.

-Chris

--
http://nondot.org/sabre/
http://llvm.org/


Re: Thoughts on LLVM and LTO

2005-11-23 Thread Diego Novillo
On Wednesday 23 November 2005 13:13, Chris Lattner wrote:

> I assume you're timing a release build here, not a debug build. 
>
Yes, a release build.

> In any case, the LLVM time above includes the following:
> [ ... ]
>
Well, it seems that it's too early to test LLVM, then.  It's both slow, and 
its integer code performance isn't up to par yet.  I also couldn't test it 
with gfortran.  I'll keep an eye on the apple branch.  Will gfortran work 
on the branch?

Let me know when you folks add the patch so I can build it on my ppc SPEC 
box.  Thanks.


Re: dfp-branch merge plans

2005-11-23 Thread Janis Johnson
On Wed, Nov 23, 2005 at 02:05:03PM +, Joseph S. Myers wrote:
> On Wed, 23 Nov 2005, Ben Elliston wrote:
> 
> > 3. Merge in libcpp and C (only) front-end changes.
> > 
> > 4. Merge in middle-end changes and internals documentation.
> 
> What have you done in the way of command-line options to enable the new 
> features?

Decimal floating point is supported in C for -std=gnu* if GCC is
configured with --enable-decimal-float.  This is the default for
powerpc*-*-linux* and is available for ?86*-linux*.
 
> Specifically:
> 
> * Decimal fp constants are already preprocessing numbers in the standard 
> syntax - but in standard C their conversion to a token requires a 
> diagnostic (at least a pedwarn with -pedantic).  If decimal fp is to be 
> usable with -pedantic an option is needed to disable that pedwarn/error.
> 
> * There is also an arguable case for diagnosing any use of the new 
> keywords if -pedantic, unless such a special option is passed.

The keywords _Decimal32, _Decimal64, and _Decimal128 and decimal float
constants are not recognized for -std=c* and get warnings with -pedantic.
This seems reasonable until the definition of the feature is more stable.

> * Any choice of TTDT other than "double" (properly, the type specified by 
> FLT_EVAL_METHOD, but we don't implement that) would also be incompatible 
> with C99 and need specifically enabling.

We have not implemented translation time data types, which change the
behavior of existing C programs.  We'll look at supporting that part if
it is still in the TR after it has been approved.  We also have not
implemented the precision pragma, which is currently not well-defined.

> (Previous versions of the DTR - such as the one linked from svn.html - 
> also provided defined behavior for out-of-range conversions between binary 
> float and integer types.  This appears to have been removed in the latest 
> draft, so no code to implement this is needed.  However, such code would 
> also have needed to be conditional unless benchmarking as per bug 21360 
> showed no performance impact.)

Agreed.

The DTR describing the feature has not yet been widely reviewed and the
support via decNumber is very slow, so we see the initial support for
decimal floating point as a technology preview.  The DTR has gone
through a number of changes since N1107 and will probably go through
more changes before it is approved.  Currently we support N1150 with a
few exceptions.

Janis


[RFC] fixproto and canadian cross builds

2005-11-23 Thread Paul Brook
I'm having problems building a canadian cross to a target that uses fixproto 
(m68k-elf).  There seems to be some inconsistency in how fix-header is 
built/run.

In gcc/Makefile.in we have the following comments:

# gen-protos and fix-header are compiled with CC_FOR_BUILD, but they are only
# used in native and host-x-target builds, so it's safe to link them with
# libiberty.a.
...
# This is nominally a 'build' program, but it's run only when host==build,
# so we can (indeed, must) use $(LIBDEPS) and $(LIBS).
build/fix-header$(build_exeext): build/fix-header.o build/scan-decls.o \
...
# We can't run fixproto (it's being built for a different host), but we still
# need to install it so that the user can run it when the compiler is
# installed.
stmp-install-fixproto: fixproto fixhdr.ready

This suggests three options for what's supposed to happen:
(a) fix-header should actually be compiled for the "host" environment. If 
host==build we run it, otherwise we just install it and let the user run it 
on the build system.
(b) fix-header should be compiled for the "build" environment. This currently 
isn't possible for canadian crosses because it uses bits of code only 
compiled for the host. We need to fix this, and there's no point trying to 
install it because it won't run on the host system (ie. we can remove all the 
stmp-install-fixproto stuff).
(c) fixproto is only for crufty old systems no-one really cares about. It 
should be disabled by default, and documented to not work for canadian 
crosses.

I don't really know what fixproto does, but I'm guessing the "right" answer is 
(a) or (c).

Thoughts/comments/suggestions?

Paul


Accidentally on the list....

2005-11-23 Thread Eric J. Goforth

Hi all;

I misunderstood what this list was for and ended up on it.  There are no 
instructions in the messages on how to get off, and I don't remember 
where I went to get on.  Can someone point me?  Thanks.


Eric



Re: Accidentally on the list....

2005-11-23 Thread Gerald Pfeifer
On Wed, 23 Nov 2005, Eric J. Goforth wrote:
> I misunderstood what this list was for and ended up on it.  There are no 
> instructions in the messages on how to get off, and I don't remember 
> where I went to get on.  Can someone point me?  Thanks.

Every message on this list carries the following headers:

  List-Unsubscribe: 
  List-Archive: 
  List-Post: 
  List-Help: 

Please follow the instructions at the List-Help URL or the 
List-Unsubscribe URL.

Gerald


Re: Register Allocation

2005-11-23 Thread Ian Lance Taylor
Andrew MacLeod <[EMAIL PROTECTED]> writes:

> > The current reload pass includes general heuristics to handle
> > reloading memory addresses.  This code knows things like "if stack
> > pointer plus displacement is not a valid memory address, try loading
> > the displacement into a register."  Many targets currently rely on
> > those heuristics to generate valid code.  I haven't been able to quite
> > pin down where this happens in your proposal.  For example, it's easy
> > for an address to use the frame pointer and be valid before reload,
> > and then for reload to eliminate the frame pointer (in fact, in your
> > scheme, what does frame pointer elimination?) and produce an offset
> > from the stack pointer which is invalid.  That is, spill code or frame
> > pointer elimination can generate invalid address, and something needs
> > to fix them up.  Where does that happen, and how?
> > 
> 
> 
> To be fair, I haven't given register elimination a lot of thought yet. I
> was presuming it could be inserted as a pass or as a separate component
> of the spiller.  Let me get back to you, I must investigate the
> issues :-). A couple of prime examples would be helpful :-)  I'll write
> up a new section for it.

The way gcc currently works, register elimination can not always be a
separate pass (naturally that might be different in your proposal).
The MIPS16, for example, requires a frame pointer if the frame size
does not fit in a signed 16-bit offset.  But the frame size can change
during the spilling process.  So when you start spilling you might
think that you can use the stack pointer for all references to the
stack frame, and you might use the frame pointer as a general
register.  But then spilling might cause the stack frame to be 32768
bytes or larger.  So then you need to do the spilling (or, really,
global register allocation) all over again, but this time you can't
use the frame pointer.

However, in fairness, for most targets register elimination is based
on factors that don't change during the register allocation process,
like whether the function is a leaf or not.  In those cases you can
decide at the start of the spilling process exactly which registers
you can eliminate, and you could eliminate them in a separate pass.

Either way, register elimination can cause addresses which were valid
to become invalid, typically because valid offsets from the frame
pointer become invalid offsets from the stack pointer.  So that needs
to be cleaned up somewhere.  In current gcc, it's handled inside
reload, because it is a special case of the general case of pushing
constants into memory due to register constraints, or rematerializing
values from the stack or from global variables.

Waving my hands wildly, it seems to me that it's not quite enough to
have descriptions of how to spill every register class.  It seems to
me that you also need descriptions of how to handle each addressing
mode, and in particular how to convert invalid addresses generated
during the register elimination or spilling process into valid
addresses.  This is straightforward for most CPUs, but can get
moderately ugly for something like 68020 (where you have a lot of
choices) or SH (where practically everything needs to use r0).
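
A toy sketch, not GCC's real hooks, of the kind of per-target "make this
address valid again" description being asked for here; the address form and
the displacement limit are assumptions:

#include <cstdio>

struct Address { int base_reg; int index_reg; long offset; };

const long MAX_DISPLACEMENT = 4095;   /* assumed target limit */
const int  NO_REG = -1;

static int  next_reg = 100;
static int  new_scratch_reg () { return next_reg++; }
static void emit_load_constant (int reg, long value)
{ std::printf ("  li r%d, %ld\n", reg, value); }

/* If frame pointer elimination left us with sp + an offset the addressing
   mode cannot encode, split it: load the offset into a scratch register
   and switch to a base + index form.  */
static Address legitimize (Address a)
{
  if (a.offset <= MAX_DISPLACEMENT || a.index_reg != NO_REG)
    return a;
  int tmp = new_scratch_reg ();
  emit_load_constant (tmp, a.offset);
  a.index_reg = tmp;
  a.offset = 0;
  return a;
}

int main ()
{
  Address a = { /* sp */ 1, NO_REG, 40000 };  /* invalid after elimination */
  a = legitimize (a);
  std::printf ("base r%d, index r%d, offset %ld\n",
               a.base_reg, a.index_reg, a.offset);
  return 0;
}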


> Scheduling should be able to use heuristics and perhaps some of the
> tools of the allocator (such as register pressure and number of register
> available) to prevent itself from being too stupid if so desired.

The way gcc currently works, it's really hard for the scheduler to
have decent, portable, register pressure heuristics, because we
haven't done instruction selection and we haven't generated spill
code.  I spent a lot of time optimizing for XScale, and having reload
insert an unexpected register spill can be a 10% performance hit.

I think that what I would like to see out of the scheduler would be
some way to say things like "if you find you have to spill at
instruction X, try moving instruction Y from before instruction X to
after instruction X and see if that gives you better register
allocation."

But this is pie in the sky stuff anyhow.

Thanks for your reply.

Ian


Re: Register Allocation

2005-11-23 Thread Peter Bergner
On Wed, 2005-11-23 at 15:05 +0100, Michael Matz wrote:
> > Spill Cost Engine [page(s) 26-29]:
> > * The register allocator should not be estimating the execution
> >   frequency of a basic block as 10^nesting level.  That information
> >   should be coming from the cfg which comes from profile data or
> >   from a good static profile.  The problem with 10^loop nesting
> >   level is that we can overestimate the spill costs for some
> >   pseudos.  For example:
> > while (...) {
> >   ... use of "a" ...
> >   if (...)
> >     ... use of "b" ...
> >   else
> >     ... use of "b" ...
> > }
> >   In the code above, "b"'s spill cost will be twice that of "a",
> >   when they really should have the same spill cost.
> 
> Nearly.  "b" _is_ more costly to spill, code size wise.  All else being 
> equal it's better to spill "a" in this case.  But the cost is of course 
> not twice as large, as you say.  I.e. I agree with you that the metric 
> should be based exclusively on the BB frequencies attached to the CFG, not 
> any nesting level.  Also like in new-ra ;)

The spill cost for a pseudo in a classic Chaitin/Briggs allocator does
not take the number of spill instructions inserted into account, so "b"'s
spill cost would be twice that of "a" if we were to use 10^nesting
level.  That said, I think we're all in agreement that using basic
block frequencies from the cfg is the correct thing to do and that
taking static spill instruction counts into account is a good idea
which Andrew's proposal does by using it as a tie breaker.

I assume it goes without saying that when using -Os, spill cost will
be used as the tie breaker when two pseudos have the same static spill
instruction counts.
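
A toy sketch of the metric being converged on here (nothing GCC-specific,
just the shape of the cost and the tie breaker):

#include <vector>

struct Ref { double bb_frequency; };   /* one reference the spill would touch */

struct SpillCost
{
  double weighted;      /* sum of CFG block frequencies over all refs */
  int    static_refs;   /* raw instruction count, the tie breaker */
};

SpillCost
spill_cost (const std::vector<Ref> &refs)
{
  SpillCost c = { 0.0, 0 };
  for (unsigned i = 0; i < refs.size (); ++i)
    {
      c.weighted    += refs[i].bb_frequency;  /* from the CFG, not 10^nesting */
      c.static_refs += 1;
    }
  return c;
}

/* Prefer spilling the pseudo with the lower frequency-weighted cost;
   fall back to the smaller static count.  Under -Os the two keys would
   swap, as discussed above.  */
bool
cheaper_to_spill (const SpillCost &x, const SpillCost &y)
{
  if (x.weighted != y.weighted)
    return x.weighted < y.weighted;
  return x.static_refs < y.static_refs;
}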

Peter






Creating a partial mirror of the repository with SVK

2005-11-23 Thread Ludovic Brenta
I've read the wiki page that explains how to mirror GCC's repository
using SVK, and I would like to pick up just the parts I need so I can
keep the size of the mirror below 4 Gb due to limited disk space.

Specifically, I need just a few branches: gcc_3_4_branch,
gcc_4_0_branch, gcc_4_1_branch, and trunk.  Also I only want to build
with --enable-languages=c,c++,ada.  In particular I'd like not to
mirror java, gfortran, objc, or treelang.

Is this possible?  How?

(Background: 2001-vintage laptop, 20 Gb hard drive, 6 Gb free space.
I want to do diffs between branches and revisions while on the train,
and I am unwilling to buy a new hard drive and reinstall everything
just now).

-- 
Ludovic Brenta.



Re: [RFC] fixproto and canadian cross builds

2005-11-23 Thread Mark Mitchell
Paul Brook wrote:

> This suggests three options for what's supposed to happen:
> (a) fix-header should actually be compiled for the "host" environment. If 
> host==build we run it, otherwise we just install it and let the user run it 
> on the build system.

I think this is the right option, though, probably, we should be using a
build->target fixproto, rather than doing nothing.  (Just like we run
the build->target compiler to build libgcc.)

In any case, I'd imagine that whatever we do for fixincludes also applies
to these programs; it seems like it should be built and run in the same way.

For Canadian crosses, we have:

if test "x$TARGET_SYSTEM_ROOT" = x; then
if test "x$STMP_FIXPROTO" != x; then
  STMP_FIXPROTO=stmp-install-fixproto
fi
fi

I'm not sure what the TARGET_SYSTEM_ROOT check is doing there; I'd think
that ought to be unconditional, given the current Makefile.

> (c) fixproto is only for crufty old systems no-one really cares about. It 
> should be disabled by default, and documented to not work for canadian 
> crosses.

Hmm.  A lot of the *-elf targets have use_fixproto=yes in config.gcc,
which somewhat surprises me; I'd have thought newlib didn't require that.

-- 
Mark Mitchell
CodeSourcery, LLC
[EMAIL PROTECTED]
(916) 791-8304


Re: Build using --with-gmp and shared libraries

2005-11-23 Thread François-Xavier Coudert
>>> Testing done on i686-linux (built with --languages=c,fortran and
>>> a shared libgmp in /foo/bar, and regtested).
>>> OK for mainline? OK for 4.0?

ping**3, build machinery maintainers in Cc.

This patch makes --with-gmp and --with-mpfr similar to --with-as and
others, where you don't need to have the as program in your PATH if
explicitly specified.

FX


Re: SVN conversion glitch?

2005-11-23 Thread Ian Lance Taylor
Jakub Jelinek <[EMAIL PROTECTED]> writes:

> While doing svn diff, I've noticed
> gcc/config/i386/xm-dgux.h
> gcc/config/i386/xm-sysv3.h
> gcc/config/i386/xm-sun.h
> gcc/config/i386/scodbx.h
> files popped out of nowhere on the trunk (and through
> 4.1 branching also on gcc-4_1-branch).
> The files according to ChangeLogs were clearly removed back in 2001
> and don't appear on gcc-{3_1,3_2,3_3,3_4,4_0}-branch.
> Could you please check what exactly happened with these files and
> why they show on the trunk and 4.1 branch?

Those files were broken in the old CVS repository.  Offhand I'm not
sure why.  They were in the Attic, which means that they did not
appear in a normal mainline checkout, but they were not marked as
dead.  That means that they would reappear in some cases (e.g., rdiff
between different dates), but they were not tagged when creating
branches so they do not appear on a branch.

I think what happened is that Kaveh deleted them on 2001-03-08, and
then Zack checked in some changes to them on 2001-03-09.  In doing
this Zack accidentally resurrected them, but I don't know why they
stayed in Attic.  Actually, scodbx.h is different: it was deleted by
Zack on 2001-05-08, and accidentally resurrected by Jason on
2001-11-15.

Evidently cvs2svn trusted the markings in the file rather than the
fact that they were in the Attic.  Shame!

I checked the whole old CVS repository on gcc.gnu.org, and those four
were the only files with this problem.

I have now deleted these files in the SVN repository, to restore the
expected status.

Ian


Re: [RFC] fixproto and canadian cross builds

2005-11-23 Thread Ian Lance Taylor
Mark Mitchell <[EMAIL PROTECTED]> writes:

> Hmm.  A lot of the *-elf targets have use_fixproto=yes in config.gcc,
> which somewhat surprises me; I'd have thought newlib didn't require that.

Nathanael changed the default here:

2003-09-30  Nathanael Nerode  <[EMAIL PROTECTED]>

* config.gcc: Default use_fixproto to 'no'.

He changed every target which didn't explicitly have fixproto=no to
say fixproto=yes.

So the question is why, before his change, those *-elf targets didn't
have an explicit fixproto=no, and I'm sure the answer is simply
laziness and/or lack of knowledge.  fixproto is more or less a no-op
when run on modern header files, so it's not like anybody would notice
anything even in cases where it did run.

Ian


Re: [RFC] fixproto and canadian cross builds

2005-11-23 Thread Mark Mitchell
Ian Lance Taylor wrote:

> So the question is why, before his change, those *-elf targets didn't
> have an explicit fixproto=no, and I'm sure the answer is simply
> laziness and/or lack of knowledge.  fixproto is more or less a no-op
> when run on modern header files, so it's not like anybody would notice
> anything even in cases where it did run.

That's comforting.  I'd rather hoped that newlib wouldn't require
fixing...  So, Paul, perhaps the short answer is "turn it off for 68k
and check that it doesn't seem too broken."

-- 
Mark Mitchell
CodeSourcery, LLC
[EMAIL PROTECTED]
(916) 791-8304


Default arguments and FUNCTION_TYPEs

2005-11-23 Thread Gabriel Dos Reis

Hi,

  In the C++ front end, default arguments are recorded in
FUNCTION_TYPEs instead of being part of the FUNCTION_DECLs.  What are
the reasons for that?

-- Gaby


Re: Default arguments and FUNCTION_TYPEs

2005-11-23 Thread Mark Mitchell
Gabriel Dos Reis wrote:
> Hi,
> 
>   In the C++ front end, default arguments are recorded in
> FUNCTION_TYPEs instead of being part of the FUNCTION_DECLs.  What are
> the reasons for that?

There used to be an extension that allowed default arguments on function
pointer types.  We agreed to kill it, although I don't know if it was
actually removed.  If that's been done, there's no longer any reason.

-- 
Mark Mitchell
CodeSourcery, LLC
[EMAIL PROTECTED]
(916) 791-8304


overcoming info build failures

2005-11-23 Thread Ben Elliston
I tracked this build problem of mine down.  I expect others will
experience it, too, hence this posting.  If you're building from
clean, you won't have this problem.

Mark Mitchell's @file documentation change adds a @set directive to
gcc-vers.texi in the build directory, but that file only depends on
DEV-PHASE and BASE-VER, so it will never be correctly rebuilt using
the new make rule.  Just deleting it will remedy the problem.

Cheers, Ben



Re: overcoming info build failures

2005-11-23 Thread Mark Mitchell
Ben Elliston wrote:
> I tracked this build problem of mine down.  I expect others will
> experience it, too, hence this posting.  If you're building from
> clean, you won't have this problem.

Sorry about that!

-- 
Mark Mitchell
CodeSourcery, LLC
[EMAIL PROTECTED]
(916) 791-8304


Re: Default arguments and FUNCTION_TYPEs

2005-11-23 Thread Gabriel Dos Reis
Mark Mitchell <[EMAIL PROTECTED]> writes:

| Gabriel Dos Reis wrote:
| > Hi,
| > 
| >   In the C++ front end, default arguments are recorded in
| > FUNCTION_TYPEs instead of being part of the FUNCTION_DECLs.  What are
| > the reasons for that?
| 
| There used to be an extension that allowed default arguments on function
| pointer types.  We agreed to kill it, although I don't know if it was
| actually removed.  If that's been done, there's no longer any reason.

Great!  I seem to remember it was killed (and I would say you killed
it but I may be wrong.)  I'll investigate.

Assuming the extension was gone, do you see a reason we not move the
default arguments to FUNCTION_DECLs and have FUNCTION_TYPEs use
TREE_VEC instead of TREE_LIST to hold the parameter-type list?

-- Gaby


Re: Default arguments and FUNCTION_TYPEs

2005-11-23 Thread Mark Mitchell
Gabriel Dos Reis wrote:

> Assuming the extension was gone, do you see a reason we not move the
> default arguments to FUNCTION_DECLs and have FUNCTION_TYPEs use
> TREE_VEC instead of TREE_LIST to hold the parameter-type list?

Both things sound OK to me.

-- 
Mark Mitchell
CodeSourcery, LLC
[EMAIL PROTECTED]
(916) 791-8304


Re: overcoming info build failures

2005-11-23 Thread Alan Modra
On Thu, Nov 24, 2005 at 11:56:32AM +1100, Ben Elliston wrote:
> I tracked this build problem of mine down.  I expect others will
> experience it, too, hence this posting.  If you're building from
> clean, you won't have this problem.
> 
> Mark Mitchell's @file documentation change adds a @set directive to
> gcc-vers.texi in the build directory, but that file only depends on
> DEV-PHASE and BASE-VER, so it will never be correctly rebuilt using
> the new make rule.  Just deleting it will remedy the problem.

Oh, yes, there is a similar problem when building binutils.  I found I
needed to delete ${srcdir}/gas/doc/asconfig.texi and
${srcdir}/ld/configdoc.texi.  "make clean" doesn't delete these files
for you.

-- 
Alan Modra
IBM OzLabs - Linux Technology Centre


Successful build & install of gcc-4.0.2 on MacOS-X 10.3.9-520.19

2005-11-23 Thread william . franck

Hi all,

on PowerPC G4 with MacOS-X 10.3.9 (powerpc-apple-darwin7.9.0 )

build from :
Reading specs from /usr/libexec/gcc/darwin/ppc/3.3/specs
Thread model: posix
gcc version 3.3 20040913 (GNAT for Mac OS X build 1650)

with Apple's cctools 590.12


New 4.0.2 gcc -v  output :
Using built-in specs.
Target: powerpc-apple-darwin7.9.0
Configured with: /users/william/dev/gcc/gcc-4.0.2/configure
--enable-languages=c,c++,ada,java,objc
--disable-nls
--enable-threads=posix
--disable-multilib
--enable-libada
--program-suffix=-4.0.2_590
--prefix=/usr/local/gcc/4.0.2_590
--bindir=/usr/bin/gcc-4.0.2_590
--libexecdir=/usr/libexec/gcc/darwin/ppc/4.0.2_590
--libdir=/usr/lib/gcc/darwin/ppc/4.0.2_590
--includedir=/usr/include/gcc/darwin/4.0.2_590
--exec-prefix=/usr/local/gcc/darwin/ppc/4.0.2_590
Thread model: posix
gcc version 4.0.2


New 4.0.2 gnat -v  output :
GNAT 4.0.2
Copyright 1996-2005 Free Software Foundation, Inc.


GNAT ACATS
=== acats Summary ===
# of expected passes            2317
# of unexpected failures           3
*** FAILURES: c250002 c954025 cxaca01



__thread and builtin memcpy() bug

2005-11-23 Thread Frank Cusack

filed as 

$ cat bug.c
#include <string.h>  /* memcpy() */

static
#ifdef SHOWBUG
__thread
#endif
int foo[2];

void
bug(void)
{
 int bar;

 (void) memcpy(&foo[0], &bar, sizeof(bar));
#ifdef SHOWBUG
 (void) memcpy(&foo[1], &bar, sizeof(bar));
#endif
}
$ gcc -g -O2 -c -DSHOWBUG  bug.c
bug.c: In function `bug':
bug.c:18: error: unrecognizable insn:
(insn:HI 15 14 17 0 bug.c:16 (set (reg/f:SI 64)
   (const:SI (plus:SI (symbol_ref:SI ("foo") [flags 0x22] )
   (const_int 4 [0x4] -1 (nil)
   (nil))
bug.c:18: internal compiler error: in extract_insn, at recog.c:2083
Please submit a full bug report,
with preprocessed source if appropriate.
See <http://bugzilla.redhat.com/bugzilla> for instructions.
The bug is not reproducible, so it is likely a hardware or OS problem.
$

- must use -O2 to see the bug
- -fno-builtin avoids the bug
- only shows up with 2 memcpy()'s in a row
- only shows up with __thread
- gcc-3.4.4-2.fc3
- the bug is quite reproducible, why does gcc say otherwise?

-frank


Re: Default arguments and FUNCTION_TYPEs

2005-11-23 Thread Nathan Sidwell

Mark Mitchell wrote:

Gabriel Dos Reis wrote:


Hi,

 In the C++ front end, default arguments are recorded in
FUNCTION_TYPEs instead of being part of the FUNCTION_DECLs.  What are
the reasons for that?



There used to be an extension that allowed default arguments on function
pointer types.  We agreed to kill it, although I don't know if it was
actually removed.  If that's been done, there's no longer any reason.


I took it out the back and shot it.

The obvious place is on the DECL_INITIAL of the PARM_DECLs, but I don't think 
they exist until the function is defined.


nathan

--
Nathan Sidwell::   http://www.codesourcery.com   :: CodeSourcery LLC
[EMAIL PROTECTED]:: http://www.planetfall.pwp.blueyonder.co.uk



Re: Default arguments and FUNCTION_TYPEs

2005-11-23 Thread Nathan Sidwell

Gabriel Dos Reis wrote:


Assuming the extension was gone, do you see a reason we not move the
default arguments to FUNCTION_DECLs and have FUNCTION_TYPEs use
TREE_VEC instead of TREE_LIST to hold the parameter-type list?


you could probably use a VEC(tree), which I think would be even better :)

nathan

--
Nathan Sidwell::   http://www.codesourcery.com   :: CodeSourcery LLC
[EMAIL PROTECTED]:: http://www.planetfall.pwp.blueyonder.co.uk