--target=v850-unknown-elf for c, c++ and java?
Hi,

I am trying to set up an extended toolchain for --target=v850-unknown-elf. I configured and installed binutils-2.16, and that went fine. Configuring and building gcc-3.4.4 and gcc-4.0.1 both fail at libstdc++-v3. It seems that there are no "atomic operations provided for this system":

configure: WARNING: No native atomic operations are provided for this platform.
configure: WARNING: They cannot be faked when thread support is disabled.
configure: WARNING: Thread-safety of certain classes is not guaranteed.
configure: error: No support for this host/target combination.
make: *** [configure-target-libstdc++-v3] Fehler 1

Do I need to provide other options or add other packages? Is there a way to compile C, C++ and Java for the target v850-unknown-elf?

Best regards,
Torsten.
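For context, a common build order for an ELF cross-toolchain looks roughly like the sketch below. This is a hedged illustration, not a verified recipe: the paths, version numbers, and the use of newlib as the target C library are assumptions, and in practice separate build directories are usually preferred. The key point is that libstdc++ cannot be configured until a target C library exists.

```shell
# Sketch of a typical cross-toolchain build order (paths and
# versions are illustrative, not prescriptive).
PREFIX=$HOME/cross
TARGET=v850-unknown-elf

# 1. binutils: target assembler and linker
(cd binutils-2.16 && ./configure --target=$TARGET --prefix=$PREFIX && make && make install)

# 2. a minimal C-only gcc, enough to build the C library
(cd gcc-3.4.4 && ./configure --target=$TARGET --prefix=$PREFIX \
    --enable-languages=c --without-headers --with-newlib && make && make install)

# 3. newlib: the target C library that libstdc++ will need
(cd newlib && ./configure --target=$TARGET --prefix=$PREFIX && make && make install)

# 4. full gcc with C++/Java, now that the C library is installed
(cd gcc-3.4.4 && ./configure --target=$TARGET --prefix=$PREFIX \
    --enable-languages=c,c++,java --with-newlib && make && make install)
```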
Serious performance regression on Jul 29
Hi,

A few days ago I noticed that current mainline produces much worse code for one of my time-critical codes than it did a few weeks ago. After some testing I found that the regression was introduced into CVS between the timestamps "-D 20050729 22:00:00 UT" and "-D 20050729 23:00:00 UT", so it appears to have been caused by Jan Hubicka's patch from http://gcc.gnu.org/ml/gcc-patches/2005-07/msg02021.html

I haven't had enough time so far to reduce the testcase much, but maybe it is already helpful to someone in its current state. The hot spot of the code is the strange loop in lines 134-139 of alm_map_tools_orig.cc. Yes, I know it looks really ugly, and I would welcome any hint on how to write this more elegantly without losing efficiency :)

Here are the results on a 3GHz Pentium 4.

Old compiler:
~/tmp/tmp2>g++ -O3 -march=pentium4 -mfpmath=sse testcase.cc
~/tmp/tmp2>time ./a.out
14.250u 0.020s 0:14.27 100.0% 0+0k 0+0io 205pf+0w

New compiler:
~/tmp/tmp2>g++ -O3 -march=pentium4 -mfpmath=sse testcase.cc
~/tmp/tmp2>time ./a.out
22.430u 0.030s 0:22.46 100.0% 0+0k 0+0io 205pf+0w

Both compilers have the same "g++ -v" output:
~/tmp/tmp2>g++ -v
Using built-in specs.
Target: i686-pc-linux-gnu
Configured with: /scratch/gcc/configure --quiet --prefix=/afs/mpa/data/martin/ugcc --enable-languages=c++ --enable-mapped-location --disable-checking
Thread model: posix
gcc version 4.1.0 20050729 (experimental)

Should I open a PR?

Cheers,
Martin

testcase.tar.gz Description: application/tar-gz
Re: PR 23046. Folding predicates involving TYPE_MAX_VALUE/TYPE_MIN_VA
Mike Stump wrote:

On Aug 12, 2005, at 3:45 PM, Laurent GUERBY wrote:
> Isn't it possible to attach some information to a comparison statement
> that tells code generation never to optimize away this particular
> comparison, even if it seems to be able to prove it is always true or
> false?

Cough, hack, ick. I really think the better hack is one that says "do not propagate type range info through a conversion", or perhaps one that says "drop any information you think you have deduced about the range of this expression, if either a) that info comes from type range analysis or b) you do not know where it came from".
Re: Serious performance regression on Jul 29
> > Should I open a PR?

Yes

> > Cheers,
> > Martin
Problems with bootstrapping 4.0.1
I have been having comparison errors while building a native 4.0.1 compiler for my Fedora Core 4 system. I checked the flags for a file I randomly chose, c-pragma.c, and the flags don't differ from the initial build of xgcc to stage2. I have included a tarball of the object files for c-pragma.c and a c-pragma-make.err file containing the make outputs when c-pragma.c was compiled. Has anyone else experienced this problem on Fedora Core 3 and/or have a solution to the problem?

Surprisingly, I have been able to bootstrap the latest gcc from CVS without any comparison problems, though (since 4.0.2 is a prerelease) make check reported a whole bunch of problems. I had to rebuild my system due to an "init" malfunction, so I will have to look in my backups to see if I can find the testcases for submission.

c-pragma-4.0.1.tar.bz2 Description: application/bzip-compressed-tar

--- WITHOUT OPTIMIZATION ---

pre-stage build:
gcc -c -g -DIN_GCC -W -Wall -Wwrite-strings -Wstrict-prototypes -Wmissing-prototypes -pedantic -Wno-long-long -Wno-variadic-macros -Wold-style-definition -DHAVE_CONFIG_H -I. -I. -I../../gcc -I../../gcc/. -I../../gcc/../include -I../../gcc/../libcpp/include ../../gcc/c-pragma.c -o c-pragma.o

stage1/xgcc -Bstage1/ -B/usr/i686-pc-linux-gnu/bin/ -c -g -DIN_GCC -W -Wall -Wwrite-strings -Wstrict-prototypes -Wmissing-prototypes -pedantic -Wno-long-long -Wno-variadic-macros -Wold-style-definition -Wno-error -DHAVE_CONFIG_H -I. -I. -I../../gcc -I../../gcc/. -I../../gcc/../include -I../../gcc/../libcpp/include ../../gcc/c-parse.c -o c-parse.o

stage2/xgcc -Bstage2/ -B/usr/i686-pc-linux-gnu/bin/ -c -g -DIN_GCC -W -Wall -Wwrite-strings -Wstrict-prototypes -Wmissing-prototypes -pedantic -Wno-long-long -Wno-variadic-macros -Wold-style-definition -Wno-error -DHAVE_CONFIG_H -I. -I. -I../../gcc -I../../gcc/. -I../../gcc/../include -I../../gcc/../libcpp/include ../../gcc/c-parse.c -o c-parse.o
Re: Ada character types : tree code and DW_AT_encoding
> A possible way to solve this problem is to add a single-bit flag to
> INTEGER_TYPE nodes that indicates whether this is actually a character
> type. Then dwarf2out.c could just check the flag to determine what
> debug info to emit. It looks like we have a number of flag bits that
> aren't being used in type nodes. This is much better than trying to do
> string matches against type names to determine what is a character type.

We already have TYPE_STRING_FLAG used on array types. Maybe it would make sense to use that?

Paul
Re: Question on updating ssa for virtual operands (PR tree-optimization/22543)
Hello,

> > The other thing we could try to do is put virtual variables in
> > loop-closed form, at least just before the vectorizer, and at least
> > just for some loops. Does this sound reasonable? (By the way, why
> > don't we keep virtual variables in loop-closed form?)
>
> We used to, nobody could come up with a good reason to keep doing it, so
> we stopped.

There were a couple of good reasons (consistency, faster updating after loop unrolling and loop header copying). However, LCSSA for virtual operands is memory expensive, and this outweighed the advantages (in particular, slower updating during loop manipulations is compensated for by everything else being faster).

Zdenek
Re: Serious performance regression on Jul 29
On Sat, Aug 13, 2005 at 09:40:11AM -0400, Daniel Berlin wrote:
> > Should I open a PR?
> Yes

OK, this is now bug #23378. Please let me know if you need more information.

Cheers,
Martin
gcc-4.1-20050813 is now available
Snapshot gcc-4.1-20050813 is now available on
ftp://gcc.gnu.org/pub/gcc/snapshots/4.1-20050813/
and on various mirrors; see http://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 4.1 CVS branch with the following options: -D2005-08-13 17:43 UTC

You'll find:
gcc-4.1-20050813.tar.bz2             Complete GCC (includes all of below)
gcc-core-4.1-20050813.tar.bz2        C front end and core compiler
gcc-ada-4.1-20050813.tar.bz2         Ada front end and runtime
gcc-fortran-4.1-20050813.tar.bz2     Fortran front end and runtime
gcc-g++-4.1-20050813.tar.bz2         C++ front end and runtime
gcc-java-4.1-20050813.tar.bz2        Java front end and runtime
gcc-objc-4.1-20050813.tar.bz2        Objective-C front end and runtime
gcc-testsuite-4.1-20050813.tar.bz2   The GCC testsuite

Diffs from 4.1-20050806 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-4.1 link is updated and a message is sent to the gcc list. Please do not use a snapshot before it has been announced that way.
Re: Ada character types : tree code and DW_AT_encoding
Paul Brook wrote:
> We already have TYPE_STRING_FLAG used on array types. Maybe it would
> make sense to use that?

That sounds like an excellent choice. dbxout.c and dwarf2out.c already check TYPE_STRING_FLAG to distinguish strings from arrays.

-- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: bubblestrap on the 3.4 branch?
Christian Joensson wrote:
> configure: loading cache ./config.cache
> configure: error: `LDFLAGS' was not set in the previous run
> configure: error: changes in the environment can compromise the build
> configure: error: run `make distclean' and/or `rm ./config.cache' and start over

This happens sometimes because recursive configure invocations can have different variables set than recursive make invocations, causing slightly different config.cache files to be created depending on how exactly configure was invoked in a subdirectory. So if you configure, cvs update, and then type make, forcing a recursive make to invoke configure in a subdir previously configured via a recursive configure, you may get an error. Just delete the subdir's config.cache file and type make again.

Some patches have been added to gcc-4 to try to fix this, though new errors in this area sometimes creep back in. See for instance the following ChangeLog entries in the toplevel ChangeLog file:
2004-04-15 James E Wilson <[EMAIL PROTECTED]>
2004-05-25 Daniel Jacobowitz <[EMAIL PROTECTED]>

-- Jim Wilson, GNU Tools Support, http://www.specifix.com
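In practice the workaround Jim describes comes down to something like the following (the subdirectory path is illustrative; use whichever subdir configure actually complained about):

```shell
# Remove the stale per-subdirectory cache, then rebuild.
# (The subdirectory name below is a made-up example.)
rm i686-pc-linux-gnu/libstdc++-v3/config.cache
make
```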
DFA recognizer
Hi Everyone,

I am adding a DFA scheduler for the OpenRISC processor in GCC (I have not changed anything else). I don't see a difference in the assembly at all. I would like to know how to make GCC recognize that there is a DFA scheduler. I have already added the following line:

(include "OpenRISC_DFA")

I know it is being included and compiled with the rest of the GCC source code. Any help is highly appreciated.

Thanking You,
Yours Sincerely,
Balaji V. Iyer.

PS. CCs appreciated.
Re: DFA recognizer
"Balaji V. Iyer" <[EMAIL PROTECTED]> writes:
> I am adding a DFA scheduler for the OpenRISC processor in GCC. (I have
> not changed anything else). I don't see a difference in assembly at
> all. I would like to know how to make it recognize that there is a DFA
> scheduler.

Which sources are you working with? In 3.4, you need to define the target hook TARGET_SCHED_USE_DFA_PIPELINE_INTERFACE and have it return a non-zero value. With newer sources, just make sure that INSN_SCHEDULING is defined in the generated file insn-attr.h. And, of course, compile with -O2 or -fschedule-insns.

You can compile with -dS -dR and look at the debugging dump files. You can use -fsched-verbose=N to get more detailed information.

Ian
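For the 3.4 case Ian mentions, the hook definition in the backend would look roughly like this. This is a hypothetical sketch, not code from an actual OpenRISC port: the function name and file placement are made up, only the hook macro name is from the 3.4 target-hook interface.

```c
/* Hypothetical sketch for a 3.4-era backend file (e.g. openrisc.c).
   Returning non-zero tells the scheduler to use the DFA pipeline
   description included from the machine description.  */

static int
openrisc_use_dfa_pipeline_interface (void)
{
  return 1;
}

#undef TARGET_SCHED_USE_DFA_PIPELINE_INTERFACE
#define TARGET_SCHED_USE_DFA_PIPELINE_INTERFACE \
  openrisc_use_dfa_pipeline_interface
```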
Re: [GCC 4.2 Project] Omega data dependence test
Joe Buck wrote:
> The problem with using time as a cutoff is that you then get results
> that can't be reproduced reliably. Better to count something that is a
> feature of the algorithm, e.g. number of executions of some inner loop,
> number of nodes visited, or the like,

On the other hand, it is not based on such features that you'll be able to provide a bound on time and space... Having guarantees on compile time and space is probably what some users will want, instead of yet another bunch of --param max-foo-nodes. I'd like to ask GCC users in general: how many are using these params? Why not have instead a set of flags that limit the resources allowed for each "unnecessary" (to be defined...) part of the compiler? For example, I'd like a guarantee that any tree-level optimizer will stop after at most 5 seconds and at most 300M of garbage: you'd say -fbudget-time=5 and -fbudget-space=300M instead of having to deal with some obscure params.

> so that all users get the same results.

I see your point: we'll have bug reports that will be difficult to reproduce. I have not yet thought of a solution for this one, but there should be some practical way to make bugs deterministic again; otherwise we'll just step into a Schrödinger box, and that's a Bad Thing.

seb
current 4.0 branch doesn't compile
/home/gj/Projects/gcc/build/gcc/xgcc -B/home/gj/Projects/gcc/build/gcc/ -B/usr/local/gcc4.0/i686-pc-linux-gnu/bin/ -B/usr/local/gcc4.0/i686-pc-linux-gnu/lib/ -isystem /usr/local/gcc4.0/i686-pc-linux-gnu/include -isystem /usr/local/gcc4.0/i686-pc-linux-gnu/sys-include -O2 -DIN_GCC -W -Wall -Wwrite-strings -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -isystem ./include -I. -I. -I../../gcc/gcc -I../../gcc/gcc/. -I../../gcc/gcc/../include -I../../gcc/gcc/../libcpp/include -g0 -finhibit-size-directive -fno-inline-functions -fno-exceptions -fno-zero-initialized-in-bss -fno-unit-at-a-time -fno-omit-frame-pointer \
 -c ../../gcc/gcc/crtstuff.c -DCRT_BEGIN \
 -o crtbegin.o

In file included from ../../gcc/gcc/tsystem.h:104,
                 from ../../gcc/gcc/crtstuff.c:64:
/usr/include/stdlib.h:395: internal compiler error: in cgraph_mark_inline_edge, at cgraphunit.c:1129
Please submit a full bug report, with preprocessed source if appropriate.

Anyone interested in more details, or is that enough information?

-- Vercetti
Re: Problems with bootstrapping 4.0.1
Kevin McBride wrote:
> I have been having comparison errors while building a native 4.0.1
> compiler for my Fedora Core 4 system.

Running cmp c-pragma.o stage2/c-pragma.o on your provided files says that they are identical. If you are getting comparison failures on these files, then perhaps your "cmp" program is broken. Or perhaps you included the wrong files to look at.

Your makefile output is a little odd, as you give the compile line for c-pragma.o first, and then the compile lines for c-parse.o. Probably a simple cut-and-paste error. This is probably not relevant anyway. It is highly unlikely that there is a problem with the gcc command-line args during stage2 or stage3.

-- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Question of 2nd instruction scheduling pass
Ling-hua Tseng wrote:
> Are there any ways to tell GCC not to group a jump_insn with other
> insns when a structural hazard occurs?

Probably multiple ways, depending on what exactly the problem is. I'd suggest using -da -fsched-verbose=2 and looking at the scheduling info printed in the sched dumps to see what is going on. You should get a clue as to what is wrong there.

You can also try debugging the scheduler code. Grepping for TImode shows that it is set in schedule_insn in haifa-sched.c, and then you can work backwards from there to figure out why it doesn't get set in your case.

-- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: --target=v850-unknown-elf for c, c++ and java?
Torsten Mohr wrote:
> configure: WARNING: No native atomic operations are provided for this platform.
> configure: WARNING: They cannot be faked when thread support is disabled.
> configure: WARNING: Thread-safety of certain classes is not guaranteed.

These are just warnings, and won't stop the build. They indicate some porting work that hasn't been done for the v850 target yet.

> configure: error: No support for this host/target combination.

Most likely this means that you failed to provide a target C library. libstdc++ can't be built without one. Since you didn't give any info about how you are configuring and building the cross compiler, I can't say what you did wrong. I can only point you at an example that gets it right. See http://gcc.gnu.org/simtest-howto.html

-- Jim Wilson, GNU Tools Support, http://www.specifix.com
Question of the suitable time to call `free_bb_for_insn()'
I'm porting GCC 4.0.2 (the 2005-08-11 snapshot) to a new VLIW architecture. I found that `free_bb_for_insn()' is called before the reorg pass, and I would like to use the CFG in the reorg pass for a reason. The reason is: I would like to change flag_schedule_insns_after_reload to 0 in the macro OVERRIDE_OPTIONS if it was set, and then call the sched2 pass at some point in the hook TARGET_MACHINE_DEPENDENT_REORG. Perhaps I will manually do some instruction scheduling in the reorg pass in the future.

So I have two questions:
1. Is it safe to move the line `free_bb_for_insn ();' to just after `rest_of_handle_machine_reorg ();'?
2. If it is safe, would the GCC team like to move it there, so that others can also use CFG info in the reorg pass?

By the way, I noticed that the ia64 port does something similar to what I want, but it makes some effort of its own to record things before the reorg pass, and moreover it is forced to call `schedule_ebbs()' (I'd like to call `schedule_insns()').

Thanks a lot.
Re: [GCC 4.2 Project] Omega data dependence test
On Sun, 2005-08-14 at 01:12 +0200, Sebastian Pop wrote:
> Joe Buck wrote:
> > The problem with using time as a cutoff is that you then get results
> > that can't be reproduced reliably. Better to count something that is
> > a feature of the algorithm, e.g. number of executions of some inner
> > loop, number of nodes visited, or the like,
>
> On the other hand, it is not based on such features that you'll be
> able to provide a watermark on time and space... Having guarantees on
> compile time and space is probably what some users will want instead
> of yet another bunch of --param max-foo-nodes.

Sebastian, I really think you are worrying too much. It's pretty rare that it will take going all the way to Omega to be able to disambiguate two dependences.
Re: PR 23046. Folding predicates involving TYPE_MAX_VALUE/TYPE_MIN_VA
Kai Henningsen wrote:
> The point is that there are two different kinds of value range
> calculations. You have value range information from program flow, and
> you have value range information from types. You want 'Valid
> optimization to ignore range information from types, because that's
> what you're checking for in the first place. On the other hand, you
> *want* to use range information from program flow. For example, if you
> just assigned a constant, you *know* the exact range of the thing. Or
> maybe you already passed a program point where any out-of-range value
> would have been caught by an implicit 'Valid. It could optimize away a
> lot of implicit 'Valid checks done, say, on the same variable used as
> an array index multiple times. Now that is certainly nontrivial to
> implement, and may not be worth it for gcc. But I believe it would be
> better than the other way.

Right, I agree completely with this analysis; a very nice, clear statement. Now how difficult is this to implement?
Re: [GCC 4.2 Project] Omega data dependence test
Sebastian Pop <[EMAIL PROTECTED]> writes:
> I'd like to ask GCC users in general: how many are using these params?

We use them at my current employer, mainly to remove limits which were imposed to keep compile time under control. We have code which needs to run as fast as possible, for which compile time is, by comparison, irrelevant. So we drastically bump up the values for parameters like large-function-growth, inline-unit-growth, and max-pending-list-length. And it makes a huge difference in the generated code. But if I weren't there to give advice, I have low confidence that they would know about --param at all.

> Why not having instead a set of flags that limit the resources allowed
> for each "unnecessary" (to be defined...) part of the compiler? For
> example, I'd like a guarantee that any tree level optimizer will stop
> after at most 5 seconds and at most 300M of garbage: you'd say,
> -fbudget-time=5 and -fbudget-space=300M instead of having to deal with
> some obscure params.

I have to agree that having 69 different parameters is a lot more useful for compiler developers than it is for compiler users. Some of the parameters, like large-function-growth, are fairly easy to understand and to use, particularly in conjunction with -Winline. Some of the other parameters, like lim-expensive or max-cse-path-length, are basically meaningless to anybody who hasn't studied the compiler in depth.

I have to agree that setting time and space budgets would be much more useful for users than --param (not that we should remove --param, as it is useful for compiler developers). Or there is the idea which has been suggested several times, of permitting optimization to be specified along the orthogonal axes of compilation time, speed of generated code, size of generated code, and debuggability.
(While a time budget in seconds would lead to irreproducible results, a time budget expressed in a scale from fast to slow compilation time would not, and would be nearly as useful for users.) Ian
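As a concrete picture of the --param usage Ian describes, a build line might look like the following. The parameter names are the ones mentioned above; the values and the source file name are made up for illustration, and sensible values depend entirely on the codebase.

```shell
# Hypothetical example of raising optimization limits via --param
# (values and the input file are illustrative, not recommendations).
g++ -O3 -c hot_kernel.cc -o hot_kernel.o \
    --param large-function-growth=500 \
    --param inline-unit-growth=200 \
    --param max-pending-list-length=64
```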
Re: --target=v850-unknown-elf for c, c++ and java?
Hi,

Thanks for your hints! Sorry for being unclear in my first mail. You were right, it was the C library that was missing. Now that I've provided newlib, I have a working compiler for C, C++ and Java.

Best regards,
Torsten.