gcc.dg/cpp/_Pragma3.c seems broken...
With svn r117549 bootstrapped on mipsel-none-linux-gnu: the test gcc.dg/cpp/_Pragma3.c basically checks that _Pragma3.c is not younger than the file mi1c.h in the same directory. The test fails with excess errors if this is not the case:

. . .
Executing on host: /home/build/gcc-build/gcc/xgcc -B/home/build/gcc-build/gcc/ /home/build/gcc/gcc/testsuite/gcc.dg/cpp/_Pragma3.c -ansi -pedantic-errors -fno-show-column -E -o _Pragma3.i (timeout = 300)
/home/build/gcc/gcc/testsuite/gcc.dg/cpp/_Pragma3.c:11: warning: current file is older than mi1c.h
output is:
/home/build/gcc/gcc/testsuite/gcc.dg/cpp/_Pragma3.c:11: warning: current file is older than mi1c.h
FAIL: gcc.dg/cpp/_Pragma3.c (test for excess errors)
Excess errors:
/home/build/gcc/gcc/testsuite/gcc.dg/cpp/_Pragma3.c:11: warning: current file is older than mi1c.h
. . .

After doing an svn checkout I have:

$ ls --full-time _Pragma3.c mi1c.h
-rw-r--r-- 1 root root 309 2006-10-06 23:51:13.0 -0700 _Pragma3.c
-rw-r--r-- 1 root root 214 2006-10-06 23:51:14.0 -0700 mi1c.h

Am I missing something here, or is the success of this test really based on the ability of svn to get both of these files within the same second? If so, that seems bad. David Daney
Re: FW: How does GCC implement dynamic binding?
Michael Eager wrote: Lacefield, Greg (CNS COE) wrote: Given all this, I posed this question to the gcc mailing list and received a reply that directed me to the C++ ABI (http://codesourcery.com/cxx-abi/), which is more detailed and has the information I'm looking for. However, I need to confirm, in the case of an FAA audit, that GCC 3.3.1 implements dynamic binding in this fashion. Can anyone on the steering committee "officially" confirm that GCC uses static v-tables as described in the ABI? You should read the GPL license (http://www.gnu.org/copyleft/gpl.html) under which GCC is distributed. In particular see Paragraph 11, under the large heading "NO WARRANTY". I question whether you will find anyone who would be willing to affirm that GCC has any specific behavior if such a certification were intended to provide a guarantee or warranty, or if there was any expectation that there was any assumption of liability for GCC's failure to perform as indicated. Perhaps you are right, but it would not surprise me if there were commercial entities based around FOSS that would provide that type of support. David Daney
Re: building gcc
Bob Rossi wrote:

> Also, I noticed that converting it to html failed. Maybe this is a
> documentation error? Thanks again, Bob Rossi
>
> $ makeinfo --html ../../../gcc/gcc/doc/c-tree.texi
> ../../../gcc/gcc/doc/c-tree.texi:10: `Trees' has no Up field (perhaps incorrect sectioning?).
> makeinfo: Removing output file `/home/bob/rcs/svn/gcc/gcc/builddir/gcc/doc/c-tree/index.html' due to errors; use --force to preserve.

'make -k html' from the top-level Makefile should do what you want. It starts at the root of the texi hierarchy and does not suffer from the failure you report. David Daney.
Re: r117741
> I noticed that the automake maintainers accepted your patch for fixing
> the multilib issues in automake. However, they also seemed to indicate
> that there would be no more 1.9.x automake releases.
>
> Is the r117741 svn checkin related to this issue? I ask because it was
> unclear to me how the multilib issues with gcc could be resolved until
> a new automake was released with the required patch (so that it could
> be used to regenerate the necessary configure files in gcc trunk).
> Thanks in advance for any clarifications.

Hi, Not sure if you noticed, but automake-1.10 was released two days ago, on the 15th. The announcement made no mention of multilib issues in the summary, so you might have to dig through revision logs to hunt for the patch.

>8 snip 8<-

You can find the new release here:

ftp://ftp.gnu.org/gnu/automake/automake-1.10.tar.gz
ftp://ftp.gnu.org/gnu/automake/automake-1.10.tar.gz.sig
ftp://ftp.gnu.org/gnu/automake/automake-1.10.tar.bz2
ftp://ftp.gnu.org/gnu/automake/automake-1.10.tar.bz2.sig
ftp://sources.redhat.com/pub/automake/automake-1.10.tar.gz
ftp://sources.redhat.com/pub/automake/automake-1.10.tar.gz.sig
ftp://sources.redhat.com/pub/automake/automake-1.10.tar.bz2
ftp://sources.redhat.com/pub/automake/automake-1.10.tar.bz2.sig

Finally, here are the MD5 checksums:

0e2e0f757f9e1e89b66033905860fded automake-1.10.tar.bz2
452163c32d061c53a7acc0e8c1b689ba automake-1.10.tar.gz

>8 snip 8<-

Fang
Re: Request for acceptance of new port (Cell SPU)
>>>>> trevor smigiel writes: trevor> We, Sony Computer Entertainment, would like to contribute a port for a trevor> new target, the Cell SPU, and seek acceptance from the Steering trevor> Committee to do so. The GCC Steering Committee welcomes the contribution of the Cell SPU port from Sony. The patch itself still needs to be reviewed for technical issues. You are free to commit the new port when the patch has been approved. Happy Hacking! David
Re: [PATCH] Fix PR29519 Bad code on MIPS with -fnon-call-exceptions
Andrew Haley wrote: Roger Sayle writes:

> Hi David,
>
> On Sun, 22 Oct 2006, David Daney wrote:
> > 2006-10-22 Richard Sandiford <[EMAIL PROTECTED]>
> > David Daney <[EMAIL PROTECTED]>
> >
> > PR middle-end/29519
> > * rtlanal.c (nonzero_address_p): Remove check for values wrapping.
> > :REVIEWMAIL:
>
> This is ugly. I agree with you and Richard that this optimization isn't
> safe unless we can somehow prevent the RTL optimizers from creating the
> problematic RTL that they currently do. But I also worry how much
> benefit some platforms get from the current unsafe transformation, and
> whether there'd be an observable performance degradation with this fix.
>
> I think it's best to apply this patch to mainline to allow the
> benchmarking folks to test whether there's any change. Likewise, if
> someone could check whether there are any/many code generation
> differences in cc1 files (or similar), that'd go some way to silencing
> my potential concerns.
>
> Only after this has been on mainline for a while without problems or
> performance issues should we consider backporting it to the 4.2 branch.
>
> Does this sound reasonable?

I must admit to being a little perplexed by this. We have an unsafe optimization that causes bad code to be generated on at least one platform. However, we want to continue to perform this unsafe optimization on our release branch until we are sure that removing it doesn't cause performance regressions. And, perhaps, if removing the optimization does cause performance regressions, we won't remove it, preferring bad code to reduced performance. Is that a fair summary? Perhaps I'm misunderstanding what you wrote.

That would be one interpretation. I prefer: We have an unsafe optimization that causes bad code to be generated on at least one platform. One potential fix may cause performance regressions, so we will test it on the mainline for a while to see if there are any unexpectedly bad side effects. If there are none, we will commit it to 4.2 also; if there are, we may try a different fix. David Daney
Re: [PATCH] Fix PR29519 Bad code on MIPS with -fnon-call-exceptions
Eric Botcazou wrote: Lots of people seem to test release branches -- probably more than mainline -- and I would hope that using the fix from this PR is by far the strongest contender. Definitely. People report bugs against released versions and expect fixes for these versions, not for versions that will be released one year from now. I think we'd be doing ourselves a favour by going with what we expect to be the final fix and getting as much testing of it as possible. After all, it's not difficult to test & apply a patch to a branch at the same time as mainline, or to revert it in the same way. Exactly my position. :-) Also, having patches on mainline and not a release branch can cause quite a bit of confusion. Witness what happened with PR 28243, where I fixed something on mainline, but it was not directly approved for a release branch. Then Eric B. worked around the same problem on the release branch and forward-ported the work-around to mainline, where it wasn't really needed. I'd have said: "fixed a subcase" but the picture is globally correct. Btw, what about backporting your fix? Or is it too late now? The patch is fully tested and ready to go for the 4.2 branch. Roger is the maintainer of the relevant parts of the compiler. When and if he approves it, I will gladly commit the patch to the branch. David Daney
Re: [PATCH] Fix PR29519 Bad code on MIPS with -fnon-call-exceptions
David Daney wrote: Eric Botcazou wrote: Lots of people seem to test release branches -- probably more than mainline -- and I would hope that using the fix from this PR is by far the strongest contender. Definitely. People report bugs against released versions and expect fixes for these versions, not for versions that will be released one year from now. I think we'd be doing ourselves a favour by going with what we expect to be the final fix and getting as much testing of it as possible. After all, it's not difficult to test & apply a patch to a branch at the same time as mainline, or to revert it in the same way. Exactly my position. :-) Also, having patches on mainline and not a release branch can cause quite a bit of confusion. Witness what happened with PR 28243, where I fixed something on mainline, but it was not directly approved for a release branch. Then Eric B. worked around the same problem on the release branch and forward-ported the work-around to mainline, where it wasn't really needed. I'd have said: "fixed a subcase" but the picture is globally correct. Btw, what about backporting your fix? Or is it too late now? The patch is fully tested and ready to go for the 4.2 branch. Most likely you were referring to your patch, and not mine. I should have realized that before sending. But my statements do hold for the PR29519 patch. Roger is the maintainer of the relevant parts of the compiler. When and if he approves it, I will gladly commit the patch to the branch. David Daney
Re: [PATCH] Fix PR29519 Bad code on MIPS with -fnon-call-exceptions
Roger Sayle wrote: On Wed, 25 Oct 2006, David Daney wrote: The patch is fully tested and ready to go for the 4.2 branch. The last thing I want is for this fix to get delayed whilst we argue over patch testing/approval policy. This fix addresses the known wrong-code issue, and at worst may replace it with missed optimization opportunity. Hence although we don't know for sure whether this is better than reverting the patch that caused the regression, it's certainly better than where 4.2 is now, and keeps us sync'd with mainline. Ok for the 4.2 branch. Sorry for the confusion. Hopefully, there'll be no more surprises. For better or worse I just committed this patch to the gcc-4_2-branch. David Daney
Where is the splitting of MIPS %hi and %lo relocations handled?
I am going to try to fix http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29721, which is a problem where a %lo relocation gets separated from its corresponding %hi. What is the mechanism that tries to prevent this from happening? And where is it implemented? Thanks, David Daney
Re: Where is the splitting of MIPS %hi and %lo relocations handled?
Ian Lance Taylor wrote: David Daney <[EMAIL PROTECTED]> writes: I am going to try to fix: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29721 Which is a problem where a %lo relocation gets separated from its corresponding %hi. What is the mechanism that tries to prevent this from happening? And where is it implemented? This is implemented by having the assembler sort the relocations so that each %lo relocation follows the appropriate set of %hi relocations. It is implemented in gas/config/tc-mips.c in append_insn. Look for reloc_needs_lo_p and mips_frob_file. At first glance the assembler does appear to handle %got correctly, so I'm not sure why it is failing for you. Did you look at the assembly fragment in the PR? Is it correct in that there is a pair of %got/%lo in the middle of another %got/%lo pair? David Daney.
Re: Where is the splitting of MIPS %hi and %lo relocations handled?
Ian Lance Taylor wrote: David Daney <[EMAIL PROTECTED]> writes: Ian Lance Taylor wrote: David Daney <[EMAIL PROTECTED]> writes: I am going to try to fix: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29721 Which is a problem where a %lo relocation gets separated from its corresponding %hi. What is the mechanism that tries to prevent this from happening? And where is it implemented? This is implemented by having the assembler sort the relocations so that each %lo relocation follows the appropriate set of %hi relocations. It is implemented in gas/config/tc-mips.c in append_insn. Look for reloc_needs_lo_p and mips_frob_file. At first glance the assembler does appear to handle %got correctly, so I'm not sure why it is failing for you. Did you look at the assembly fragment in the PR? Is it correct in that there is a pair of %got/%lo in the middle of another %got/%lo pair? Sure, why not? They can be disambiguated by looking at which symbol the %got/%lo applies to. That is just what the assembler reloc sorting implements in mips_frob_file. (Or, at least, is supposed to implement, though apparently something is going wrong in your case.) The assembler sorts the relocations so that the linker always sees the %lo reloc immediately after the corresponding %got reloc(s). OK, thanks. I will look at the assembler. It will not be the first binutils bug that has affected MIPS GCC. David Daney
Re: Problem with listing i686-apple-darwin as a Primary Platform
>>>>> Eric Christopher writes: Eric> We're in stage1, breakages happen - see the current fun with gmp/mpfr as Eric> well as c99 inlining. File a bug or bring a problem up for discussion. Yes, breakage happens in Stage 1, but the goal should be no breakage. Breakage is by no means inevitable. As a consideration to other developers, breakage should be fixed or reverted as soon as possible to allow other work to proceed. Other developers and other breakage are not a valid excuse, IMHO -- problems caused by others are not a free pass. David
Re: Problem with listing i686-apple-darwin as a Primary Platform
>>>>> Eric Christopher writes:

Eric> On Nov 7, 2006, at 5:24 AM, David Edelsohn wrote:
>> >>>>> Eric Christopher writes:
>> Eric> We're in stage1, breakages happen - see the current fun with
>> Eric> gmp/mpfr as well as c99 inlining. File a bug or bring a problem
>> Eric> up for discussion.
>>
>> Yes, breakage happens in Stage 1, but the goal should be no breakage.
>> Breakage is by no means inevitable. As a consideration to other
>> developers, breakage should be fixed or reverted as soon as possible
>> to allow other work to proceed. Other developers and other breakage
>> is not a valid excuse, IMHO -- problems caused by others are not a
>> free pass.

Eric> Well, yes, did you see anything in what I wrote that argued differently?

Yes, what I quoted, the comparison with gmp/mpfr and c99 inlining. Those other problems are irrelevant. David
Re: bootstrap on powerpc fails
>>>>> Kaveh R GHAZI writes: Kaveh> I tried many years ago and Mark objected: Kaveh> http://gcc.gnu.org/ml/gcc-patches/2000-10/msg00756.html Kaveh> Perhaps we could take a second look at this decision? The average system Kaveh> has increased in speed many times since then. (Although sometimes I feel Kaveh> like bootstrapping time has increased at an even greater pace than chip Kaveh> improvements over the years. :-) I object. David
Re: Has anyone seen mainline Fortran regression with SPEC CPU 2000/2006?
>>>>> Steve Kargl writes: Steve> I have not seen this failure, but that may be expected Steve> since SPEC CPU 2000 isn't freely available. No failure should be expected. It is a bug and a regression and should be fixed, with the help of users who have access to SPEC CPU2000. David
New Type of GCC Maintainer
GCC has increased in size, scope, and complexity, but the number of maintainers has not scaled commensurately. While there is a need for more reviewers, there also is a concern of too many maintainers stepping on one another and GCC development becoming more chaotic.

After a lot of brainstorming and discussion with current maintainers, and after considering many options, the GCC Steering Committee has voted to create a new type of maintainer: non-algorithmic maintainer. A non-algorithmic change is one which maintains an overall algorithm and does not introduce new functionality, but may change implementation details. Non-algorithmic maintainers can commit and review bug-fix patches written by themselves or by other developers, such as patches appropriate for GCC Development Stage 3, and should help general maintainers with recommendations about other patches. In other words, general maintainers effectively are technical architects for their area of the compiler.

The term "algorithmic change" is somewhat ambiguous, and the SC has decided not to provide a formal definition that could overly constrain maintainers. This is not a language standard inviting developers to parse words. Instead, any other maintainer with authority over the area unilaterally may revert a patch approved or committed by a non-algorithmic maintainer, if necessary. Hopefully the maintainers would resolve the concern through discussion without having to invoke this provision.

Let the experiment begin ... Happy Hacking, David
Zdenek Dvorak and Daniel Berlin appointed loop optimizer maintainers
I am pleased to announce that the GCC Steering Committee has appointed Zdenek Dvorak and Daniel Berlin as non-algorithmic maintainers of the RTL and Tree loop optimizer infrastructure in GCC. Please join me in congratulating Zdenek and Daniel on their new role. Zdenek and Daniel, please update your listings in the MAINTAINERS file. Happy hacking! David
Re: regenerating reliably GCC configure files
> I am not sure I understand what the *reliable* way is to regenerate
> GCC configure files from the real (human-typed) master source files
> (like Makefile.in, gcc/configure.ac, Makefile.tpl, etc.).
>
> I made some suggestions on the Wiki:
>
> http://gcc.gnu.org/wiki/AboutGCCConfiguration
>
> Again, feel free to edit the above page (and/or incorporate parts of
> it into the documentation).

Hi, My initial reaction is: why title the page "About GCC Configuration" when the first line says it's about something else? I recommend a title like "... Configure auto-generation" or "... regeneration". You can still explain the distinction in the introduction, since people often confuse the two procedures. Some links to download the required versions of the autotools might save some time for people getting started quickly. This looks like a good start so far. Fang
Re: Zdenek Dvorak and Daniel Berlin appointed loop optimizer maintainers
>>>>> Zdenek Dvorak writes: Zdenek> thank you. What exactly does "non-algorithmic" mean in this context? Please see the immediately previous announcement to the GCC mailing list of non-algorithmic maintainers. David
Re: Zdenek Dvorak and Daniel Berlin appointed loop optimizer maintainers
>>>>> Richard Guenther writes: Richard> I would rather open a new section if this idiom is supposed to spread more ;-) The plan is to appoint more developers as Non-Algorithmic maintainers. David
Re: regenerating reliably GCC configure files
> > > I made some suggestions on the Wiki
> > >
> > > http://gcc.gnu.org/wiki/AboutGCCConfiguration
> > >
> > > Again, feel free to edit the above page (and/or incorporate parts of
> > > it into the documentation).

Looks like it was relocated to: http://gcc.gnu.org/wiki/Regenerating_GCC_Configuration Fang
Re: PowerPC code gen question
>>>>> Michael Eager writes: Michael> Can someone explain to me why FP regs should contain int values? Michael> Is this to support the fcfid conversion instruction? Yes, for FP conversion instructions. Michael> What keeps FP regs from being used to contain integer values? Nothing. However, the instructions that operate on integer values in FPRs use register alternative modifiers that discourage GCC's register allocation passes from placing integer values there unless they already are there. That is what the "*" means in "*f". David
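To make the mechanism concrete, here is a hypothetical machine-description fragment (not copied from the real rs6000.md, names invented for illustration) showing the idiom. The "*" tells the register-preferencing code to ignore the "f" alternative when choosing a register class for the value, while still leaving that alternative usable when the value already sits in an FPR:

```
;; Hypothetical move pattern fragment, for illustration only.
;; The "r" alternatives are preferred; the starred "*f" alternative is
;; ignored when computing register class preferences, but remains
;; available to reload if the value is already in an FPR.
(define_insn "*movdi_sketch"
  [(set (match_operand:DI 0 "register_operand" "=r,*f")
        (match_operand:DI 1 "register_operand" "r,*f"))]
  ""
  "...")
```

This is how a port can permit integer values in FPRs without encouraging the allocator to put them there.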
Re: build gcc with distcc
Hi,

> > My question is: how to build gcc bootstrap with distcc correctly.
>
> I believe it is impossible in the general case. Bootstrap means to
> compile GCC source code with a GCC compiler just built from the same
> source code. Hence, to distribute this compilation with distcc, you'd
> need to send the (stage1-compiled) GCC binaries over the network, and
> distcc is not able to send binary programs. (If you extend distcc to
> send binary programs -- a non-trivial task -- you open a new can of
> worms and will need to tackle potential security issues.)

Not impossible; you'd just need to have the different machines see/mount the same file system for starters. That would result in coherent binaries, no shipping required. I've done this on an ancient x86 FreeBSD cluster (8 to 16 machines) mounting a single file server with no hiccups. (That was gcc-3.4.0, ccache 2.2, distcc 2.13, both old.)

> So a distributed build also has to distribute files, for example by
> sharing a common distributed file system. Of course the details are
> complex.

On a somewhat related note, I'd be interested to hear whether ccache could be snuck into bootstrapping to speed up recompiles in the intermediate stages, especially with incremental changes. (Anyone tried this?) I've noted that ccache-ing only speeds up the first stage, as one would expect. There might be a subtle issue with ccache assuming that the compiler that created a cache-hit object did not change. I'm only aware of ccache verifying compiler versions (string compare) in the hit-check, which alone doesn't suffice to guarantee that the cache is (or should be) hit. *sigh* Bootstrapping on my 5+ yr. old dual-G4 takes quite a while, even with make -j2 (which helps a lot). Wish-list: gcj-ccache for classpath rebuild acceleration. Fang
Re: bootstrapping r118945 failed
Reported (and confirmed) here: http://gcc.gnu.org/bugzilla//show_bug.cgi?id=29879

> SVN revision: 118945
> Host: i686-pc-linux-gnu
>
> /home/daniel/svn-build/gcc-head/./gcc/xgcc -B/home/daniel/svn-build/gcc-head/./gcc/ -B/home/daniel/i686-pc-linux-gnu/gcc-svn//i686-pc-linux-gnu/bin/ -B/home/daniel/i686-pc-linux-gnu/gcc-svn//i686-pc-linux-gnu/lib/ -isystem /home/daniel/i686-pc-linux-gnu/gcc-svn//i686-pc-linux-gnu/include -isystem /home/daniel/i686-pc-linux-gnu/gcc-svn//i686-pc-linux-gnu/sys-include -DHAVE_CONFIG_H -I. -I/home/daniel/svn/gcc/libgfortran -I. -iquote/home/daniel/svn/gcc/libgfortran/io -I/home/daniel/svn/gcc/libgfortran/../gcc -I/home/daniel/svn/gcc/libgfortran/../gcc/config -I../.././gcc -D_GNU_SOURCE -std=gnu99 -Wall -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -Wextra -Wwrite-strings -ftree-vectorize -funroll-loops -O2 -g -O2 -c /home/daniel/svn/gcc/libgfortran/generated/matmul_i4.c -fPIC -DPIC -o .libs/matmul_i4.o
>
> /home/daniel/svn/gcc/libgfortran/generated/matmul_i4.c: In function 'matmul_i4':
> /home/daniel/svn/gcc/libgfortran/generated/matmul_i4.c:337: error: verify_flow_info: Block 136 has loop_father, but there are no loops
> /home/daniel/svn/gcc/libgfortran/generated/matmul_i4.c:337: error: verify_flow_info: Block 135 has loop_father, but there are no loops
>
> [snipped 133 identical messages]
>
> /home/daniel/svn/gcc/libgfortran/generated/matmul_i4.c:337: error: verify_flow_info: Block 2 has loop_father, but there are no loops
> /home/daniel/svn/gcc/libgfortran/generated/matmul_i4.c:337: internal compiler error: verify_flow_info failed
> Please submit a full bug report, with preprocessed source if appropriate.
> See http://gcc.gnu.org/bugs.html for instructions.
>
> make[3]: *** [matmul_i4.lo] Error 1
> make[3]: Leaving directory `/home/daniel/svn-build/gcc-head/i686-pc-linux-gnu/libgfortran'
> make[2]: *** [all] Error 2
> make[2]: Leaving directory `/home/daniel/svn-build/gcc-head/i686-pc-linux-gnu/libgfortran'
> make[1]: *** [all-target-libgfortran] Error 2
> make[1]: Leaving directory `/home/daniel/svn-build/gcc-head'
Re: build gcc with distcc
> > There might be a subtle issue with ccache assuming that the compiler
> > that created a cache-hit object did not change. I'm only aware of
> > ccache verifying compiler versions (string compare) in the
> > hit-check, which alone doesn't suffice to guarantee that the cache
> > is (or should be) hit.
>
> No, it records the timestamp of the compiler driver. These would
> always be different in a bootstrap run, so you would never get any
> cache hits for second and third stage object files.

Ah, wasn't sure that it did that, but thanks for pointing that out. Bummer. "No ccache for you!!!" Fang
Re: 32bit Calling conventions on linux/ppc.
>>>>> Joslwah writes: Joslwah> Looking at the Linux 32bit PowerPC ABI spec, it appears to me that Joslwah> floats in excess of those that are passed in registers are supposed to Joslwah> be promoted to doubles and passed on the stack. Examining the resulting Joslwah> stack from a gcc-generated C call, it appears they are passed as Joslwah> floats. Joslwah> Can someone confirm/refute this, or else point me to an ABI that says Joslwah> that they should be passed as floats. The SVR4 PowerPC ABI Supplement does seem to imply that floats should be passed in the stack as doubles. The PowerPC Linux ABI is not identical to the SVR4 PPC ABI. I am not sure what benefit might be gained by promoting floats passed on the stack to double. David
Re: powerpc64-gnu libgcc?
>>>>> Daniel Jacobowitz writes: Dan> In updating toplevel libgcc, I noticed this in t-ppccomm: Dan> ifneq (,$findstring gnu,$(target)) Dan> ... -mlong-double-128 ... Dan> This suggests it was supposed to apply to both GNU/Linux and GNU/Hurd, but Dan> no other PowerPC targets. Is that right? If so, I assume it's safe to fix Dan> it to do so. It doesn't quite at present; powerpc64-gnu does not include Dan> t-ppccomm. powerpc-gnu does. Yes, I believe the intent was to apply the flag to all GNU-based PowerPC ABIs, but not SVR4 classic or eABI. David
Re: Cannot call pure virtual function from base class constructor.
Hi, I've found that this article explains very well why one *cannot* call a virtual function during construction (or destruction): http://www.artima.com/cppsource/nevercall.html HTH, Fang

> I have created a base class whose constructor calls a pure virtual
> function. I derive this class, implement the function, compile, and
> receive this error message: "error: abstract virtual `IDXTYPE
> DataSet::indexFxn(uint) [with DATA = float, IDXTYPE =
> float]' called from constructor"
>
> Without context, my code basically reads: "error: abstract virtual
> function called from constructor"
>
> I know that it would not make sense for the base class to call a pure
> virtual function. However, this constructor is being called in
> response to a derived class of the base being initialized, which
> defines the pure virtual function of the base.
>
> I can see how this error may happen if the constructor for the base
> class is called before the derived class, at which time the pointer to
> the derived class' implementation of the base class' virtual function
> is not initialized (I don't know for sure if this is the order of what
> happens, but it is my best educated guess). I don't see that this
> error is necessary, unless the derived class does not implement the
> pure virtual function of the base. Even if this were the case,
> couldn't the derived class initialize the pointer table with its
> implementation of the base class' pure virtual function before the
> base class constructor calls the pure virtual function?
>
> Is this behaviour a consequence of the C++ standard or is it specific
> to GCC?

David Fang Computer Systems Laboratory Electrical & Computer Engineering Cornell University http://www.csl.cornell.edu/~fang/ -- (2400 baud? Netscape 3.0?? lynx??? No problem!)
Bootstrap broken on x86_64 on the trunk in libgfortran?
Platform is x86_64 (FC6) with trunk r119257. Configured thusly:

../trunk/configure --with-gmp=/usr/local --with-mpfr=/usr/local --disable-multilib

I am getting this while in stage3 when I bootstrap:

/bin/sh ../../../trunk/libgfortran/mk-kinds-h.sh '/home/daney/gccsvn/native-trunk/./gcc/gfortran -B/home/daney/gccsvn/native-trunk/./gcc/ -B/usr/local/x86_64-unknown-linux-gnu/bin/ -B/usr/local/x86_64-unknown-linux-gnu/lib/ -isystem /usr/local/x86_64-unknown-linux-gnu/include -isystem /usr/local/x86_64-unknown-linux-gnu/sys-include -I . -Wall -fno-repack-arrays -fno-underscoring ' > kinds.h || rm kinds.h
../../../trunk/libgfortran/mk-kinds-h.sh: Unknown type
grep '^#' < kinds.h > kinds.inc
/bin/sh: kinds.h: No such file or directory
make[2]: *** [kinds.inc] Error 1
make[2]: Leaving directory `/home/daney/gccsvn/native-trunk/x86_64-unknown-linux-gnu/libgfortran'
make[1]: *** [all-target-libgfortran] Error 2
make[1]: Leaving directory `/home/daney/gccsvn/native-trunk'
make: *** [all] Error 2

Is anyone else seeing this, or do you have any pointer as to how it might be fixed (other than disabling fortran)? Thanks, David Daney
Re: Bootstrap broken on x86_64 on the trunk in libgfortran?
Andrew Pinski wrote:

> Platform is x86_64 (FC6) with trunk r119257. Configured thusly:
> ../trunk/configure --with-gmp=/usr/local --with-mpfr=/usr/local --disable-multilib
> I am getting this while in stage3 when I bootstrap:
> /bin/sh ../../../trunk/libgfortran/mk-kinds-h.sh '/home/daney/gccsvn/native-trunk/./gcc/gfortran -B/home/daney/gccsvn/native-trunk/./gcc/ -B/usr/local/x86_64-unknown-linux-gnu/bin/ -B/usr/local/x86_64-unknown-linux-gnu/lib/ -isystem /usr/local/x86_64-unknown-linux-gnu/include -isystem /usr/local/x86_64-unknown-linux-gnu/sys-include -I . -Wall -fno-repack-arrays -fno-underscoring ' > kinds.h || rm kinds.h
> ../../../trunk/libgfortran/mk-kinds-h.sh: Unknown type
> grep '^#' < kinds.h > kinds.inc
> /bin/sh: kinds.h: No such file or directory
> make[2]: *** [kinds.inc] Error 1
> make[2]: Leaving directory `/home/daney/gccsvn/native-trunk/x86_64-unknown-linux-gnu/libgfortran'
> make[1]: *** [all-target-libgfortran] Error 2
> make[1]: Leaving directory `/home/daney/gccsvn/native-trunk'
> make: *** [all] Error 2

Usually (like 99% of the time), this means your GMP/MPFR are broken and are causing gfortran to crash out. You might want to try running mk-kinds-h.sh to see what the error is?

Thanks Andrew. That was the problem. I had inadvertently left my LD_LIBRARY_PATH unset, so it was probably using the system libraries instead of the special GCC versions. This being my first ever x86_64 build, I am still working some kinks out of the process. David Daney
Re: [RFC] timers, pointers to functions and type safety
Andrew Pinski wrote: On Fri, 2006-12-01 at 17:21 +, Al Viro wrote: There's a bunch of related issues, some kernel, some gcc, thus the Cc from hell on that one. I don't really see how this is a GCC question, rather I see this as a C question which means this should have gone to either [EMAIL PROTECTED] or the C news group. . . . PS don't cross post and I still don't see a GCC development question in here, only a C one. Andrew, There are times when we should cut people a little slack. If not for the sake of general harmony, then at least to facilitate the improvement of two important programs (the Linux kernel and GCC). There is a lot more that could be said, but I will leave it at that, David Daney
Richard Guenther appointed middle-end maintainer
I am pleased to announce that the GCC Steering Committee has appointed Richard Guenther as non-algorithmic middle-end maintainer. Please join me in congratulating Richi on his new role. Richi, please update your listings in the MAINTAINERS file. Happy hacking! David
Re: Optimizing a 16-bit * 8-bit -> 24-bit multiplication
Here's an ignorant, naive, and very likely wrong attempt: what happens if you mask off the high and low bytes of the larger number, do two 8,8->16 multiplies, left-shift the result of the higher one, and add, as a macro? #define _mul8x16(c,s) ( \ (long int) ((c) * (unsigned char) ( (s) & 0x00FF) )\ + (long int) ( \ (long int) ( (c) * (unsigned char) ( ( (s) & 0xFF00 ) >> 8)) \ << 8 \ ) \ ) what would that do? I don't know which if any of the casting is needed, or what exactly you have to do to suppress upgrading the internal representations to 32-bit until the left-shift and the add; one would expect that multiplying a char by a short on this platform would produce that code, and that the avr-gcc-list would be the right place to find someone who could make that happen. Since you know the endianness of your machine, you could reasonably pull chars out of the long directly instead of shifting and masking; also, you could store the to-be-shifted result directly into an address one byte off from the address of the integer that you will add the low result to -- that's what you're proposing unions for, right? On 12/1/06, Shaun Jackman <[EMAIL PROTECTED]> wrote: I would like to multiply a 16-bit number by an 8-bit number and produce a 24-bit result on the AVR. The AVR has a hardware 8-bit * 8-bit -> 16-bit multiplier. If I multiply a 16-bit number by a 16-bit number, it produces a 16-bit result, which isn't wide enough to hold the result. If I cast one of the operands to 32-bit and multiply a 32-bit number by a 16-bit number, GCC generates a call to __mulsi3, which is the routine to multiply a 32-bit number by a 32-bit number and produce a 32-bit result and requires ten 8-bit * 8-bit multiplications. A 16-bit * 8-bit -> 24-bit multiplication only requires two 8-bit * 8-bit multiplications. A 16-bit * 16-bit -> 32-bit multiplication requires four 8-bit * 8-bit multiplications. 
I could write a mul24_16_8 (16-bit * 8-bit -> 24-bit) function using unions and 8-bit * 8-bit -> 16-bit multiplications, but before I go down that path, is there any way to coerce GCC into generating the code I desire? Cheers, Shaun -- perl -le'1while(1x++$_)=~/^(11+)\1+$/||print'
Re: 32 bit jump instruction.
Rohit Arul Raj wrote: Hi all, I am working on a private target where jump instruction patterns are similar to this jmp <24 bit offset> jmp for 32 bit offsets If my offset is greater than 24 bits, then I have to move the offset to an address register. But inside the branch instruction (in md file), I am not able to generate a pseudo register because the condition check for "no_new_pseudos " fails. Can anyone suggest a way to overcome this? This is similar to how the MIPS works. Perhaps looking at its implementation would be useful. David Daney
Bootstrap broken on mipsel-linux...
From svn r119726 (Sun, 10 Dec 2006) I am getting an ICE during bootstrap on mipsel-linux. This is a new failure since Wed Dec 6 06:34:07 UTC 2006 (revision 119575) which bootstrapped and tested just fine. I don't really want to do a regression hunt as bootstraps take 3 or 4 days for me. I will update and try it again. Configured as: ../gcc/configure --with-arch=mips32 --with-float=soft --disable-java-awt --without-x --disable-tls --enable-__cxa_atexit --disable-jvmpi --disable-static --disable-libmudflap --enable-languages=c,c++,java The bootstrap compiler is GCC-4.0.2 In stage 2 (i.e. with the stage 1 compiler) I get an ICE: /home/build/gcc-build/./prev-gcc/xgcc -B/home/build/gcc-build/./prev-gcc/ -B/usr/local/mipsel-unknown-linux-gnu/bin/ -c -g -O2 -DIN_GCC -W -Wall -Wwrite-strings -Wstrict-prototypes -Wmissing-prototypes -pedantic -Wno-long-long -Wno-variadic-macros -Wno-overlength-strings -Wold-style-definition -Wmissing-format-attribute -Werror -fno-common -DHAVE_CONFIG_H -I. -I. -I../../gcc/gcc -I../../gcc/gcc/. -I../../gcc/gcc/../include -I../../gcc/gcc/../libcpp/include -I../../gcc/gcc/../libdecnumber -I../libdecnumber ../../gcc/gcc/c-decl.c -o c-decl.o ../../gcc/gcc/c-decl.c: In function 'set_type_context': ../../gcc/gcc/c-decl.c:691: internal compiler error: in cse_find_path, at cse.c:5930 Please submit a full bug report, with preprocessed source if appropriate. 
#0  fancy_abort (file=0x101f2a0 "../../gcc/gcc/cse.c", line=5930, function=0x101f8bc "cse_find_path") at ../../gcc/gcc/diagnostic.c:642
#1  0x007dfe00 in cse_find_path (first_bb=0x2bc5e7e0, data=0x7ff0775c, follow_jumps=1) at ../../gcc/gcc/cse.c:5930
#2  0x007e10f8 in cse_main (f=0x2b9126e0, nregs=238) at ../../gcc/gcc/cse.c:6209
#3  0x007e37f8 in rest_of_handle_cse () at ../../gcc/gcc/cse.c:6967
#4  0x00c7aab0 in execute_one_pass (pass=0x10e6210) at ../../gcc/gcc/passes.c:858
#5  0x00c7acc4 in execute_pass_list (pass=0x10e6210) at ../../gcc/gcc/passes.c:902
#6  0x00c7acf8 in execute_pass_list (pass=0x10e7730) at ../../gcc/gcc/passes.c:903
#7  0x005b0e74 in tree_rest_of_compilation (fndecl=0x2b716c98) at ../../gcc/gcc/tree-optimize.c:463
#8  0x0046e430 in c_expand_body (fndecl=0x2b716c98) at ../../gcc/gcc/c-decl.c:6855
#9  0x00d343e4 in cgraph_expand_function (node=0x2b71e000) at ../../gcc/gcc/cgraphunit.c:1238
#10 0x00d347ac in cgraph_expand_all_functions () at ../../gcc/gcc/cgraphunit.c:1303
#11 0x00d35980 in cgraph_optimize () at ../../gcc/gcc/cgraphunit.c:1582
#12 0x00472f14 in c_write_global_declarations () at ../../gcc/gcc/c-decl.c:7968
#13 0x00be1f18 in compile_file () at ../../gcc/gcc/toplev.c:1040
#14 0x00be4ca0 in do_compile () at ../../gcc/gcc/toplev.c:2089
#15 0x00be4da0 in toplev_main (argc=44, argv=0x7ff07b14) at ../../gcc/gcc/toplev.c:2121
#16 0x0055a554 in main (argc=44, argv=0x7ff07b14) at ../../gcc/gcc/main.c:35
(gdb) up
#1  0x007dfe00 in cse_find_path (first_bb=0x2bc5e7e0, data=0x7ff0775c, follow_jumps=1) at ../../gcc/gcc/cse.c:5930
5930      gcc_assert (!TEST_BIT (cse_visited_basic_blocks, bb2->index));
(gdb) l
5925      basic_block bb2 = e->dest;
5926
5927    #if ENABLE_CHECKING
5928      /* We should only see blocks here that we have not
5929         visited yet.  */
5930      gcc_assert (!TEST_BIT (cse_visited_basic_blocks, bb2->index));
5931    #endif
5932      SET_BIT (cse_visited_basic_blocks, bb2->index);
5933      data->path[path_size++].bb = bb2;
5934      bb = bb2;
Re: Bootstrap broken on mipsel-linux...
Steven Bosscher wrote: On 12/11/06, David Daney <[EMAIL PROTECTED]> wrote: From svn r119726 (Sun, 10 Dec 2006) I am getting an ICE during bootstrap on mipsel-linux. This is a new failure since Wed Dec 6 06:34:07 UTC 2006 (revision 119575) which bootstrapped and tested just fine. I don't really want to do a regression hunt as bootstraps take 3 or 4 days for me. I will update and try it again. No need. It's my CSE patch, no doubt: http://gcc.gnu.org/ml/gcc-patches/2006-12/msg00698.html I'll try to figure out what's wrong. /home/build/gcc-build/./prev-gcc/xgcc -B/home/build/gcc-build/./prev-gcc/ -B/usr/local/mipsel-unknown-linux-gnu/bin/ -c -g -O2 -DIN_GCC -W -Wall -Wwrite-strings -Wstrict-prototypes -Wmissing-prototypes -pedantic -Wno-long-long -Wno-variadic-macros -Wno-overlength-strings -Wold-style-definition -Wmissing-format-attribute -Werror -fno-common -DHAVE_CONFIG_H -I. -I. -I../../gcc/gcc -I../../gcc/gcc/. -I../../gcc/gcc/../include -I../../gcc/gcc/../libcpp/include -I../../gcc/gcc/../libdecnumber -I../libdecnumber ../../gcc/gcc/c-decl.c -o c-decl.o ../../gcc/gcc/c-decl.c: In function 'set_type_context': ../../gcc/gcc/c-decl.c:691: internal compiler error: in cse_find_path, at cse.c:5930 Please submit a full bug report, with preprocessed source if appropriate. Sic :-) A test case would be helpful. Let's assume that it doesn't affect i686 or x86_64. Because if it did, someone else would have been hit by it by now. So you would need a mips[el]-linux system in order to reproduce it. But if you had that, you could compile c-decl.c yourself to reproduce it. But if you really want it, I can get you a preprocessed version of c-decl.c. I suppose one could try it on a cross-compiler, but I have no idea if that would fail in the same manner. David Daney
Re: 32bit Calling conventions on linux/ppc.
>>>>> Joslwah writes: Joslwah> Looking at the Linux 32bit PowerPC ABI spec, it appears to me that Joslwah> floats in excess of those that are passed in registers are supposed to Joslwah> be promoted to doubles and passed on the stack. Examining the resulting Joslwah> stack from a gcc generated C call it appears they are passed as Joslwah> floats. Joslwah> Can someone confirm/refute this, or else point me to an ABI that says Joslwah> that they should be passed as floats. I have not been able to find any motivation for promoting floats passed on the stack. Does this provide some form of compatibility with SPARC? David
Re: 32bit Calling conventions on linux/ppc.
>>>>> Dale Johannesen writes: Dale> It may have been intended to allow the callee to be a K&R-style or Dale> varargs function, where all float args get promoted to double. Dale> In particular, printf was often called without being declared in K&R- Dale> era code. This is one way to make that code work in a C90 environment. Except that arguments in registers are not promoted and arguments in registers spilled to the stack for varargs are not promoted. In fact it makes varargs more complicated. And it does not really match K&R promotion rules. David
Re: why no boehm-gc tests?
Jack Howarth wrote: I noticed that boehm-gc check doesn't work from within the dejagnu framework. According to the notes in PR11412, this was going to be fixable once the multi-lib stuff was moved to the top level. I assume this has happened by now so can we fix this for gcc 4.2? Jack You should probably target the trunk first. Then, after the patch is proven there, a backport could be considered under the branch commit criteria. David Daney
Eric Christopher appointed Darwin maintainer
I am pleased to announce that the GCC Steering Committee has appointed Eric Christopher as Darwin co-maintainer. Please join me in congratulating Eric on his new role. Eric, please update your listings in the MAINTAINERS file. Happy hacking! David
Re: Built and installed gcc on powerpc-ibm-aix5.3.0.0
>>>>> [EMAIL PROTECTED] net writes: jonathan> Configured with: ./configure --with-as=/usr/bin/as jonathan> Had to export DESTDIR for make install in gcc objdir jonathan> to work Thanks for the notification. You will have fewer problems building and installing GCC if you do not build it in the source directory. David
Re: GCC optimizes integer overflow: bug or feature?
On 12/20/06, Marcin Dalecki <[EMAIL PROTECTED]> wrote: You are apparently using a different definition of an algebra or ring than the common one. Fascinating discussion. Pointers to canonical on-line definitions of the terms "algebra" and "ring" as used in compiler design please?
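For what it's worth, the textbook framing usually invoked in these threads (not a definition from this discussion itself) is that n-bit wrapping unsigned arithmetic is exactly the quotient ring Z/2^n Z, while signed arithmetic with undefined overflow obeys the ring axioms only on inputs where no overflow occurs:

```latex
% n-bit wrapping arithmetic as the quotient ring \mathbb{Z}/2^n\mathbb{Z}:
% both operations are performed in \mathbb{Z} and reduced modulo 2^n.
a \oplus b \equiv a + b \pmod{2^n},
\qquad
a \otimes b \equiv a \cdot b \pmod{2^n}.
```

Associativity, commutativity, and distributivity all hold in this structure, which is the sense in which "wrapping" advocates call the machine integers a ring.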
Re: GCC optimizes integer overflow: bug or feature?
On 12/20/06, Marcin Dalecki <[EMAIL PROTECTED]> wrote: You better don't. Really! Please just realize for example the impact of the (in)famous 80 bit internal (over)precision of a very common IEEE 754 implementation... volatile float b = 1.; if (1. / 3. == b / 3.) { printf("HALLO!\n"); } else { printf("SURPRISE SURPRISE!\n"); } It has always seemed to me that floating point comparison could be standardized to regularize the exponent and ignore the least significant few bits and doing so would save a lot of headaches. Would it really save the headaches or would it just make the cases where absolute comparisons of fp results break less often, making the error more intermittent and thereby worse? Could a compiler switch be added that would alter fp equality? I have argued for "precision" to be included in numeric types in other fora and have been stunned that all except people with a background in Chemistry find the suggestion bizarre and unnecessary; I realize that GCC is not really a good place to try to shift norms; but on the other hand if a patch were to be prepared that would add a command-line switch (perhaps -sloppy-fpe and -no-sloppy-fpe) that would govern wrapping ((fptype) == (fptype)) with something that threw away the least sig. GCC_SLOPPY_FPE_SLOP_SIZE bits in the mantissa, would it get accepted or considered silly?
Re: changing "configure" to default to "gcc -g -O2 -fwrapv ..."
Paul Eggert wrote: If memory serves K&Rv1 didn't talk about overflow, yes. My K&R V1 says in Appendix A (C Reference Manual) Section 7: . . . The handling of overflow and divide check in expression evaluation is machine-dependent. All existing implementations of C ignore integer overflows; treatment of division by 0, and all floating-point exceptions, varies between machines, and is usually adjustable by a library function. In chapter 2, section 2.5 it basically says the same thing. Those are the only two places the index indicates that 'overflow' is described. David Daney
Re: Build snapshots according to a more regular schedule
Are 4.0 snapshots still necessary? I suspect they should be discontinued. David
Re: gcc 3.4 > mainline performance regression
>>>>> Steven Bosscher writes: Steven> What does the code look like if you compile with -O2 -fgcse-sm? Yep. Mark and I recently discussed whether gcse-sm should be enabled by default at some optimization level. We're hiding performance from GCC users. David
Re: Build snapshots according to a more regular schedule
> > > > Are 4.0 snapshots still necessary? I suspect they should be > > > > discontinued. > > > > > > 4.0 still seems to be regarded as an active branch. > > > > > > I don't mind closing it, myself. Does anybody think we should have a > > > 4.0.4 release? > > > > I'd like to see it closed. We have some bugs that are only open because > > they are targeted for 4.0.4 (fixed on all branches but 4_0). > > I'd like to see it closed, too, all Linux/BSD vendors I know of are either > still using 3.x or have switched to 4.1 already. Hi, User chiming in: before retiring 4.0, one would be more easily convinced to make a transition to 4.1+ if the regressions from 4.0 to 4.1 numbered fewer. In the database, I see only 79 (P3+) regressions in 4.1 that are not in 4.0 (using only summary matching). Will these get a bit more attention for the upcoming 4.1.2 release? http://gcc.gnu.org/bugzilla/query.cgi?query_format=advanced&short_desc_type=allwordssubstr&short_desc=4.1&known_to_fail_type=allwordssubstr&known_to_work_type=allwordssubstr&long_desc_type=allwordssubstr&long_desc=&bug_file_loc_type=allwordssubstr&bug_file_loc=&gcchost_type=allwordssubstr&gcchost=&gcctarget_type=allwordssubstr&gcctarget=&gccbuild_type=allwordssubstr&gccbuild=&keywords_type=allwords&keywords=&bug_status=UNCONFIRMED&bug_status=NEW&bug_status=ASSIGNED&bug_status=SUSPENDED&bug_status=WAITING&bug_status=REOPENED&priority=P1&priority=P2&priority=P3&emailtype1=substring&email1=&emailtype2=substring&email2=&bugidtype=include&bug_id=&votes=&chfieldfrom=&chfieldto=Now&chfieldvalue=&query_based_on=4.1%20%5C%204.0%20regressions&negate0=1&field0-0-0=short_desc&type0-0-0=substring&value0-0-0=4.0&field0-1-0=noop&type0-1-0=noop&value0-1-0= Fang
Jan Hubicka and Uros Bizjak appointed i386 maintainers
I am pleased to announce that the GCC Steering Committee has appointed Jan Hubicka and Uros Bizjak as co-maintainers of the i386 port. Please join me in congratulating Jan and Uros on their new role. Jan and Uros, please update your listings in the MAINTAINERS file. Happy hacking! David
Re: debugging capabilities on AIX ?
>>>>> Olivier Hainque writes: Olivier> Working on GCC 4 based GNAT port for AIX 5.[23], our testsuite to Olivier> evaluate GDB (6.4) debugging capabilities currently yields very Olivier> unpleasant results compared to what we obtain with a GCC 3.4 based Olivier> compiler (80+ extra failures out of 1800+ tests). Olivier> We so far presumed that this is caused by limitations in the Olivier> XCOFF/STABS debug info format more heavily exposed by the many great Olivier> compiler improvements between 3.4 and 4.x. Olivier> I'd appreciate feedback on general questions from these observations: Olivier> Is it generally known/expected that xcoff/stabs debugging capabilities Olivier> degrade when moving from 3.4 to 4.x ? Yes. I recompile files without optimization for debugging to disable the transformations that confuse debugging. Olivier> If yes, how is that considered by AIX GCC developers ? (how serious the Olivier> issue, is it fixable, are there plans/attempts to move to DWARF2, ...) The reaction varies with developer. AIX continues to use xcoff/stabs. The feedback of AIX users to IBM sales representatives and executives will determine the response. David
Re: Miscompilation of remainder expressions
Robert Dewar wrote: Roberto Bagnara wrote: Reading the thread "Autoconf manual's coverage of signed integer overflow & portability" I was horrified to discover about GCC's miscompilation of the remainder expression that causes INT_MIN % -1 to cause a SIGFPE on CPUs of the i386 family. Are there plans to fix this bug (which, to me, looks quite serious)? Seems ultra-non-serious to me, hard to believe this case appears in real code, despite surprising claim by Roberto. It seems to me that you would know that it is happening. If it hits you, you don't get hard-to-find weird program results; your program is killed by SIGFPE. David Daney
Re: Miscompilation of remainder expressions
Roberto Bagnara wrote: Robert Dewar wrote: Roberto Bagnara wrote: Reading the thread "Autoconf manual's coverage of signed integer overflow & portability" I was horrified to discover about GCC's miscompilation of the remainder expression that causes INT_MIN % -1 to cause a SIGFPE on CPUs of the i386 family. Are there plans to fix this bug (which, to me, looks quite serious)? All the best, Roberto P.S. I checked whether this bug affects my code and it does. Before yesterday I was completely unsuspecting of such a fundamental flaw... I wonder how many know about it. It's truly amazing for real code to be computing remainders in this domain ... seems a bad idea to me, since very few people are comfortably aware of what remainder means for such cases. Everyone knows that dividing a number by -1 or 1 gives a 0 remainder. To the contrary, no one expects a%b to raise SIGFPE when b != 0. On the contrary, since the beginning of time SIGFPE has been generated on GCC/x86/linux under these conditions. This is wildly known. Just because you just found out about it does not mean that 'no one' expects it. David Daney
Re: Miscompilation of remainder expressions
Roberto Bagnara wrote: Hmmm, it says nothing about the remainder. Can some Google guru suggest how to prove or disprove the claim that what we are talking about is wildly known? The point really is not how widely/wildly known the issue is. Really the thing we consider on gcc@ is: What is the 'best' thing for GCC and the GCC developers to do. I don't claim to speak for others, but until now this issue has not seemed all that pressing. And it still doesn't. David Daney
Re: Miscompilation of remainder expressions
Vincent Lefevre wrote: On 2007-01-16 12:31:00 -0500, Robert Dewar wrote: Roberto Bagnara wrote: Reading the thread "Autoconf manual's coverage of signed integer overflow & portability" I was horrified to discover about GCC's miscompilation of the remainder expression that causes INT_MIN % -1 to cause a SIGFPE on CPUs of the i386 family. Are there plans to fix this bug (which, to me, looks quite serious)? Seems ultra-non-serious to me, hard to believe this case appears in real code, despite surprising claim by Roberto. What makes you think so? One never knows. We (the MPFR developers) found several compiler bugs concerning particular cases like that, which occurred in MPFR. One of them (in some gcc version) was 0 + LONG_MIN, which was different from LONG_MIN. Is 0 + LONG_MIN so different from INT_MIN % -1, for instance? The difference is that your program didn't get killed by SIGFPE, it just gave incorrect results. David Daney
Re: Miscompilation of remainder expressions
Andrew Haley wrote: Ian Lance Taylor writes: > Joe Buck <[EMAIL PROTECTED]> writes: > > > I suggest that those who think this is a severe problem are the > > ones who are highly motivated to work on a solution. An > > efficient solution could be tricky: you don't want to disrupt > > pipelines, or interfere with optimizations that rely on > > recognizing that there is a modulo. > > I suspect that the best fix, in the sense of generating the best > code, would be to do this at the tree level. That will give loop > and VRP optimizations the best chance to eliminate the test for -1. > Doing it during gimplification would be easy, if perhaps rather > ugly. If there are indeed several processors with this oddity, > then it would even make a certain degree of sense as a > target-independent option. x86, x86-64, S/390, as far as I'm aware. MIPS does *not* seem to suffer from this 'defect', so a target independent solution that caused MIPS to generate worse code would be bad. David Daney
Re: Miscompilation of remainder expressions
Andrew Haley wrote: Ian Lance Taylor writes: > Gabriel Dos Reis <[EMAIL PROTECTED]> writes: > > > Ian, do you believe something along the line of > > > > # > I mean, could not we generate the following for "%": > > # > > > # > rem a b := > > # > if abs(b) == 1 > > # > return 0 > > # > return a b > > # > > # On x86 processors that have conditional moves, why not do the equivalent > > # of > > # > > # neg_b = -b; > > # cmov(last result is negative,neg_b,b) > > # __machine_rem(a,b) > > # > > # Then there's no disruption of the pipeline. > > > > is workable for the affected targets? > > Sure, I think the only real issue is where the code should be > inserted. From a performance/convenience angle, the best place to handle this is either libc or the kernel. Either of these can quite easily fix up the operands when a trap happens, with zero performance degradation of existing code. I don't think there's any need for gcc to be altered to handle this. That only works if the operation causes a trap. On x86 this is the case, but Andrew Pinski told me on IM that this was not the case for PPC. David Daney
Re: Miscompilation of remainder expressions
Ian Lance Taylor wrote: Robert Dewar <[EMAIL PROTECTED]> writes: Ian Lance Taylor wrote: We do want to generate a trap for x / 0, of course. Really? Is this really defined to generate a trap in C? I would be surprised if so ... As far as I know, but I think it would be a surprising change for x / 0 to silently continue executing. But perhaps not a very important one. It depends on the front-end language. For C, perhaps it would not matter. For Java, the language specification requires an ArithmeticException to be thrown. In libgcj this is done by having the operation trap and having the trap handler generate the exception. Because libgcj already handles all of this, it was brought up that a similar runtime trap handler could easily be used for C. However as others have noted, the logistics of universally using a trap handler in C might be difficult. David Daney
Re: gcc doesn't build on ppc
>>>>> Mike Stump writes: Mike> gcc doesn't build on powerpc-apple-darwin9: Mike> ../../gcc/gcc/config/rs6000/rs6000.c: In function 'rs6000_emit_vector_compare': Mike> ../../gcc/gcc/config/rs6000/rs6000.c:11904: warning: ISO C90 forbids mixed declarations and code Is this due to Josh's patch? David
Re: raising minimum version of Flex
> > > I think it's worth raising the minimum required version from 2.5.4 to > > > 2.5.31. > > > > I want to point out that Fedora Core 5 appears to still ship flex > > 2.5.4. At least, that is what flex --version reports. (I didn't > > bother to check this before.) I think we need a very strong reason to > > upgrade our requirements ahead of common distributions. We've already > > run into that problem with MPFR. > > For MPFR, everyone needs to have the latest installed to be able to > build gcc. That is not the case with flex. No-one needs flex at all to > build gcc, except gcc hackers who modify one of the (two or three?) > remaining flex files and regenerate the lexers. So you can't really > compare flex and MPFR this way. > > If flex 2.5.31 is already four years old, it doesn't seem unreasonable > to me to expect people to upgrade if their distribution ships with an > even older flex. To add another data point concerning flex, the C skeleton used by 2.5.33 is no longer warning-free, due to some signed/unsigned comparison, IIRC. (2.5.31 and earlier are OK.) I sent in an obvious patch to fix it and was turned down. If their "not-my-problem" policy persists, it will inconvenience projects that use a -Werror policy, which may adversely impact gcc bootstrapping, for example. From some perspectives, 2.5.31 could be a *maximum* version until that particular problem is fixed. (I've got other beef with flex 2.5.33 as well.) Maybe if enough wheels squeak at them, they can be convinced to fix such problems that they consider insignificant? Fang
Re: [c++] switch ( enum ) vs. default statment.
On 1/23/07, Paweł Sikora <[EMAIL PROTECTED]> wrote: typedef enum { X, Y } E; int f( E e ) { switch ( e ) { case X: return -1; case Y: return +1; } + throw runtime_error("invalid value got shoehorned into E enum"); } In this example g++ produces a warning: e.cpp: In function 'int f(E)': e.cpp:9: warning: control reaches end of non-void function Adding `default' statement to `switch' removes the warning but in C++ out-of-range values in enums are undefined. Nevertheless, that integer type might get its bits twiddled somehow.
Re: [RFC] Our release cycles are getting longer
Marcin Dalecki wrote: Message written on 2007-01-23 at 23:54 by Diego Novillo: So, I was doing some archeology on past releases and we seem to be getting into longer release cycles. With 4.2 we have already crossed the 1 year barrier. For 4.3 we have already added quite a bit of infrastructure that is all good on paper but still needs some amount of TLC. There was some discussion on IRC that I would like to move to the mailing list so that we get a wider discussion. There have been thoughts about skipping 4.2 completely, or going to an extended Stage 3, etc. Thoughts? Just forget ADA and Java in mainstream. Both of them are seriously impeding casual contributions. I missed the discussion on IRC, but neither of those front-ends is a release blocker. I cannot speak for ADA, but I am not aware that the Java front-end has caused any release delays recently. I am sure you will correct me if I have missed something. David Daney
Re: [RFC] Our release cycles are getting longer
On Tue, 23 Jan 2007 17:54:10 -0500, Diego Novillo <[EMAIL PROTECTED]> said: > So, I was doing some archeology on past releases and we seem to be > getting into longer release cycles. Interesting. I'm a GCC observer, not a participant, but here are some thoughts: As far as I can tell, it looks to me like there's a vicious cycle going on. Picking an arbitrary starting point: 1) Because lots of bugs are introduced during stage 1 (and stage 2), stage 3 takes a long time. 2) Because stage 3 takes a long time, development branches are long-lived. (After all, development branches are the only way to do work during stage 3.) 3) Because development branches are long-lived, the stage 1 merges involve a lot of code. 4) Because the stage 1 merges involve a lot of code, lots of bugs are introduced during stage 1. (After all, code changes come with bugs, and large code changes come with lots of bugs.) 1) Because lots of bugs are introduced during stage 1, stage 3 takes a long time. Now, the good news is that this cycle can be a virtuous cycle rather than a vicious cycle: if you can lower one of these measurements (length of stage 3, size of branches, size of patches, number of bugs), then the other measurements will start going down. "All" you have to do is find a way to mute one of the links somehow, focus on the measurement at the end of that link, and then things will start getting better. It's not obvious what the best way is to do that, but here are some ideas. Taking the links one by one: 1: Either fix bugs faster, or release with more bugs. 2: Artificially shorten the lifespan of development branches somehow, so that big branches don't appear during stage 3. 3: Throttle the size of patches: don't let people do gigantic merges, no matter the size of the branch. 4: Don't have buggy code in your branches: improve code quality of development branches somehow. I'm not optimistic about breaking either link 1 or link 2. 
The first alternative in link 1 is hard (especially without a strong social contract), and the second alternative in link 1 is, to say the least, distasteful. Link 2 is similarly hard to fix without a strong social contract. So I would focus on either link 3 or link 4. For link 3, you'd change the rules to alternate between stage 1 and stage 3 on a fast basis (no stage 2 would be necessary): do a small merge (of a portion of a branch, if necessary), shake out bugs, and repeat. Concretely, you could have two rules in GCC's development process: * Patches more than a certain size aren't allowed. * No patches are allowed if there are more than X release-blocking bugs outstanding. (For some small value of X; 0 is one possibility.) With this, the trunk is almost always in a releasable state; you can release almost whenever you want to, since you'd basically be at the end of stage 3 every week, or every day, or every hour. Moving to these rules would be painful, but once you start making progress, I bet you'd find that, for example, the pressures leading to long-lived branches will diminish. (Not go away, but diminish.) For 4, you should probably spend some time figuring out why bugs are being introduced into the code in the first place. Is test coverage not good enough? If so, why - do people not write enough tests, is it hard to write good enough tests, something else? Is the review process inadequate? If so, why: are rules insufficiently stringent, are reviewers sloppy, are there not enough reviewers, are patches too hard to review? My guess is that most or all of those are factors, but some are more important than others. My favorite tactic to decrease the number of bugs is to set up a unit test framework for your code base (so you can test changes to individual functions without having to run the whole compiler), and to strongly encourage patches to be accompanied by unit tests. And, of course, you could attack both links 3 and 4 at once. 
David Carlton [EMAIL PROTECTED]
Re: [RFC] Our release cycles are getting longer
On Wed, 24 Jan 2007 03:02:19 +0100, Marcin Dalecki <[EMAIL PROTECTED]> said: > Message written on 2007-01-24 at 02:30 by David Carlton: >> For 4, you should probably spend some time figuring out why bugs are >> being introduced into the code in the first place. Is test coverage >> not good enough? > It's "too good" to be usable. The time required for a full test > suite run can be measured in days, not hours. That's largely because individual tests in the test suite are too long, which in turn is because the tests are testing code at a per-binary granularity: you have to run all of gcc, or all of one of the programs invoked by gcc, to do a single test. (Is that true? Please correct me if I'm wrong.) Well-written unit tests take milliseconds to execute: it's quite possible to run hundreds of unit tests in a second, ten thousand unit tests in a single minute. (I will give examples below.) Of course, you need many unit tests to get the coverage that a single end-to-end test gives you; then again, unit tests let you test code with much more precision than end-to-end tests. I'm not going to argue against having a good end-to-end test suite around, but it would be quite doable, over the course of a couple of years, to move to a model where a commit required about 10 minutes of testing (including all the unit tests and a few smoke end-to-end tests), and you had separate, automated runs of nightly end-to-end tests that caught problems that slipped through the unit tests. (And, of course, whenever the nightly tests detected problems, you'd update the unit tests accordingly.) >> If so, why - do people not write enough tests, is it >> hard to write good enough tests, something else? Is the review >> process inadequate? If so, why: are rules insufficiently stringent, >> are reviewers sloppy, are there not enough reviewers, are patches too >> hard to review? >> >> My guess is that most or all of those are factors, but some are more >> important than others. > No. 
> The problems are entirely technical in nature. It's not a pure
> human resources management issue.

I don't think it's a pure human resources issue, but I don't think it's a purely technical issue, either, if for no other reason than that people are involved.

>> My favorite tactic to decrease the number of
>> bugs is to set up a unit test framework for your code base (so you can
>> test changes to individual functions without having to run the whole
>> compiler), and to strongly encourage patches to be accompanied by unit
>> tests.
> That's basically a pipe dream with the auto based build system.

Why? What's so difficult about building one more (or a few more) unit test binaries along with the binaries you're building now?

David Carlton [EMAIL PROTECTED]

Here are numbers to back up my unit test timing claim; these are all run on a computer that cost less than a thousand dollars a year ago.

A C++ example, which is probably closest to your situation:

panini$ time ./unittesttest
.
Tests finished with 69 passes and 0 failures.

real    0m0.013s
user    0m0.004s
sys     0m0.004s

A Java example:

panini$ time java org.bactrian.dbcdb.AllTests
.....
Time: 0.597

OK (173 tests)

real    0m1.109s
user    0m1.064s
sys     0m0.024s

And a Ruby example:

panini$ time ruby -e "require 'dbcdb/test/all'"
Loaded suite -e
Started
...
Finished in 0.039504 seconds.

63 tests, 110 assertions, 0 failures, 0 errors

real    0m0.150s
user    0m0.128s
sys     0m0.016s

No matter the language, you get between hundreds and thousands of tests a second; that C++ example works out to over 5000 tests a second.
Re: [RFC] Our release cycles are getting longer
On Tue, 23 Jan 2007 23:16:47 -0500 (EST), Andrew Pinski <[EMAIL PROTECTED]> said:
> Let me bring up another point:
> 0) bugs go unnoticed for a couple of releases and then become part of
> the release criteria.

Yeah, that's a good point. So maybe there's another feedback loop to consider: long release cycle => large-scale external testing only happens rarely (because lots of it only happens after the branch has been cut) => lots of bugs are found all at once => long stage 3 => long release cycle. I think it would be interesting to think of ways to spread out the discovery of bugs.

>> 3: Throttle the size of patches: don't let people do gigantic
>> merges, no matter the size of the branch.
> This is wrong, as the gigantic merges are needed in some cases to be
> able to change the infrastructure of GCC. Good examples recently
> (and soon) are mem-ssa, GIMPLE_MODIFY_STMT, and dataflow. All
> really could not be done by simple little patches. Tree-ssa was
> another example.

All I can say is that I have a pretty good amount of experience making large changes incrementally, and I'm not the only person who does. But I don't know nearly enough about the changes you mention to be able to say anything specific. (I don't read gcc-patches, just gcc.)

>> * No patches are allowed if there are more than X release-blocking
>> bugs outstanding. (For some small value of X; 0 is one
>> possibility.)
> I don't think this will work out because you are punishing all
> developers while one developer gets his/her act together.

Two responses:

* For better or for worse, you're all in this together.
* If you have fast bug-detection mechanisms (e.g. a build farm that runs all your tests on all your targets every night, or even multiple times a day), you can quickly revert offending patches. But, as you mentioned above, one of the problems is that bugs are being discovered some time after they are introduced.
Personally, the way I would deal with that is to spend some time doing root cause analysis of the bugs, and try to figure out ways in which they could have been detected faster. In other words, don't just fix the bug: go back to the patch that introduced the bug, think about how you could have made it easier for the submitter to have written test coverage that would have detected the bug, and think about how you could have made it easier for the reviewer to have detected the bug.

>> For 4, you should probably spend some time figuring out why bugs are
>> being introduced into the code in the first place.
> In some cases it's because test coverage is not good enough in general
> (e.g. C++). In other cases, you just did not think about a corner case.
> In still other cases, you exposed a latent bug in another part of the
> code which did not think about a corner case.

Personally, I find unit tests very helpful at reminding me to focus on corner cases, making it relatively easy to test corner cases, and helping other people avoid inadvertently breaking my corner cases.

David Carlton [EMAIL PROTECTED]
Re: [RFC] Our release cycles are getting longer
On Wed, 24 Jan 2007 11:12:24 +0200, Michael Veksler <[EMAIL PROTECTED]> said: > Deterministic unit-tests are almost useless in long lived projects, I think you might be using the term "unit test" differently from me? Nothing is more valuable for a long-lived project than having unit tests covering every line of code, every branch, every boundary condition. I don't remember the reason for all of my own coding decisions six months ago, let alone somebody else's coding decisions years ago, but if pervasive unit tests are in place, it doesn't matter nearly as much: they will give a helpful reminder if I've inadvertently broken something. In some cases, adding randomness can improve the quality of the test suite, but deterministic tests are hugely valuable as well. David Carlton [EMAIL PROTECTED]
Re: [RFC] Our release cycles are getting longer
On Wed, 24 Jan 2007 17:26:32 -0500 (EST), Andrew Pinski <[EMAIL PROTECTED]> said:
>> That's largely because individual tests in the test suite are too
>> long, which in turn is because the tests are testing code at a
>> per-binary granularity: you have to run all of gcc, or all of one
>> of the programs invoked by gcc, to do a single test. (Is that true?
>> Please correct me if I'm wrong.)
> No, they are almost all 1 or 2 functions long, nothing more than 20
> lines.

Sorry, I should have been clearer: by "long" I meant long in test execution time, not in textual representation of the test. (Though having the latter be short is important, too!) There's a big difference between tests that take, say, .1 to 1 second each and tests that take, say, .001 to .01 second each, especially if you have tens of thousands of test cases.

> You really cannot do unit testing for parsing cases, sorry to say
> that.

I agree that, when testing parsing, it's frequently desirable to run through the entire parser: so this is more complicated than, say, testing a simple class interface. But I bet you can get a big speedup even here. Am I correct in thinking that you run gcc on each input test case, even if all you really want is the parser? Or at least a significant chunk of gcc? In that case, you should be able to speed this up significantly by just calling the parser itself and directly checking the resulting parse tree (or just checking what you care about, e.g. that no errors are emitted). No codegen necessary, no writing any output files, and you can probably do tricks to significantly reduce the time spent reading input files as well. I don't, offhand, see any barrier to testing the parsing of a 20-line chunk of test source code in just a few milliseconds.

>> A C++ example, which is probably closest to your situation:
> That is just for simple C++ code. Our unit testing will be over
> something like a million times larger than most unit testing, in
> which case unit testing falls down.
Sure, that example is testing a library that's only a couple thousand lines long. And the test coverage isn't quite as good as it should be; pretty close, though. Even so, I would be impressed if GCC is really a million times larger than that. :-) I have hands-on experience with unit testing C++ code bases of about a half-million lines of code. (Which started off as legacy code, and I'm sure it was in worse shape than GCC's.) Unit testing works there, and I don't see any obvious barrier in sight.

David Carlton [EMAIL PROTECTED]
Re: Possible build problems with the "current" gcc
This really looks like a java problem, CCing java@ It looks like you are missing jack/jack.h On my FC6/x86_64 system these files are not even built, so I don't get the missing jack/jack.h error. Instead it builds the midi-alsa files. That is the only insight I can provide. David Daney George R Goffe wrote: Howdy, I got an email from Joe Buck who suggested that I fix a clock skew problem between 2 of my systems. I did this but this did not change the "other" problem with this build effort. A diff of the 2 sets of error messages showed that the clock problem did in fact disappear. Any ideas as to how to proceed with this one would be greatly appreciated. Regards and thanks, George... make[8]: Entering directory `/tools/tools/gcc/obj-i686-pc-linux-gnu/x86_64-unknown-linux-gnu/32/libjava/classpath/native/jni/midi-dssi' if /bin/bash ../../../libtool --mode=compile /tools/tools/gcc/obj-i686-pc-linux-gnu/./gcc/xgcc -B/tools/tools/gcc/obj-i686-pc-linux-gnu/./gcc/ -B/usr/lsd/Linux/x86_64-unknown-linux-gnu/bin/ -B/usr/lsd/Linux/x86_64-unknown-linux-gnu/lib/ -isystem /usr/lsd/Linux/x86_64-unknown-linux-gnu/include -isystem /usr/lsd/Linux/x86_64-unknown-linux-gnu/sys-include -m32 -DHAVE_CONFIG_H -I. 
-I../../../../../../../../gcc/libjava/classpath/native/jni/midi-dssi -I../../../include -I../../../../../../../../gcc/libjava/classpath/include -I../../../../../../../../gcc/libjava/classpath/native/jni/classpath -I../../../../../../../../gcc/libjava/classpath/native/jni/native-lib -W -Wall -Wmissing-declarations -Wwrite-strings -Wmissing-prototypes -Wno-long-long -O2 -g -O2 -m32 -MT gnu_javax_sound_midi_dssi_DSSIMidiDeviceProvider.lo -MD -MP -MF ".deps/gnu_javax_sound_midi_dssi_DSSIMidiDeviceProvider.Tpo" -c -o gnu_javax_sound_midi_dssi_DSSIMidiDeviceProvider.lo ../../../../../../../../gcc/libjava/classpath/native/jni/midi-dssi/gnu_javax_sound_midi_dssi_DSSIMidiDeviceProvider.c; \ then mv -f ".deps/gnu_javax_sound_midi_dssi_DSSIMidiDeviceProvider.Tpo" ".deps/gnu_javax_sound_midi_dssi_DSSIMidiDeviceProvider.Plo"; else rm -f ".deps/gnu_javax_sound_midi_dssi_DSSIMidiDeviceProvider.Tpo"; exit 1; fi if /bin/bash ../../../libtool --mode=compile /tools/tools/gcc/obj-i686-pc-linux-gnu/./gcc/xgcc -B/tools/tools/gcc/obj-i686-pc-linux-gnu/./gcc/ -B/usr/lsd/Linux/x86_64-unknown-linux-gnu/bin/ -B/usr/lsd/Linux/x86_64-unknown-linux-gnu/lib/ -isystem /usr/lsd/Linux/x86_64-unknown-linux-gnu/include -isystem /usr/lsd/Linux/x86_64-unknown-linux-gnu/sys-include -m32 -DHAVE_CONFIG_H -I. 
-I../../../../../../../../gcc/libjava/classpath/native/jni/midi-dssi -I../../../include -I../../../../../../../../gcc/libjava/classpath/include -I../../../../../../../../gcc/libjava/classpath/native/jni/classpath -I../../../../../../../../gcc/libjava/classpath/native/jni/native-lib -W -Wall -Wmissing-declarations -Wwrite-strings -Wmissing-prototypes -Wno-long-long -O2 -g -O2 -m32 -MT gnu_javax_sound_midi_dssi_DSSISynthesizer.lo -MD -MP -MF ".deps/gnu_javax_sound_midi_dssi_DSSISynthesizer.Tpo" -c -o gnu_javax_sound_midi_dssi_DSSISynthesizer.lo ../../../../../../../../gcc/libjava/classpath/native/jni/midi-dssi/gnu_javax_sound_midi_dssi_DSSISynthesizer.c; \ then mv -f ".deps/gnu_javax_sound_midi_dssi_DSSISynthesizer.Tpo" ".deps/gnu_javax_sound_midi_dssi_DSSISynthesizer.Plo"; else rm -f ".deps/gnu_javax_sound_midi_dssi_DSSISynthesizer.Tpo"; exit 1; fi mkdir .libs /tools/tools/gcc/obj-i686-pc-linux-gnu/./gcc/xgcc -B/tools/tools/gcc/obj-i686-pc-linux-gnu/./gcc/ -B/usr/lsd/Linux/x86_64-unknown-linux-gnu/bin/ -B/usr/lsd/Linux/x86_64-unknown-linux-gnu/lib/ -isystem /usr/lsd/Linux/x86_64-unknown-linux-gnu/include -isystem /usr/lsd/Linux/x86_64-unknown-linux-gnu/sys-include -m32 -DHAVE_CONFIG_H -I. 
-I../../../../../../../../gcc/libjava/classpath/native/jni/midi-dssi -I../../../include -I../../../../../../../../gcc/libjava/classpath/include -I../../../../../../../../gcc/libjava/classpath/native/jni/classpath -I../../../../../../../../gcc/libjava/classpath/native/jni/native-lib -W -Wall -Wmissing-declarations -Wwrite-strings -Wmissing-prototypes -Wno-long-long -O2 -g -O2 -m32 -MT gnu_javax_sound_midi_dssi_DSSIMidiDeviceProvider.lo -MD -MP -MF .deps/gnu_javax_sound_midi_dssi_DSSIMidiDeviceProvider.Tpo -c ../../../../../../../../gcc/libjava/classpath/native/jni/midi-dssi/gnu_javax_sound_midi_dssi_DSSIMidiDeviceProvider.c -fPIC -DPIC -o .libs/gnu_javax_sound_midi_dssi_DSSIMidiDeviceProvider.o /tools/tools/gcc/obj-i686-pc-linux-gnu/./gcc/xgcc -B/tools/tools/gcc/obj-i686-pc-linux-gnu/./gcc/ -B/usr/lsd/Linux/x86_64-unknown-linux-gnu/bin/ -B/usr/lsd/Linux/x86_64-unknown-linux-gnu/lib/ -isystem /usr/lsd/Linux/x86_64-unknown-linux-gnu/include -isystem /usr/lsd/Linux/x86_64-unknown-linux-gnu/sys-include -m32 -DHAVE_CONFIG_H -I. -I../../../../../../../../gcc/libjava/classpath/native/jni/midi-dssi -I../../../include -I../../../../
Re: Signed int overflow behavior in the security context
Paul Schlie wrote:

On Fri, Jan 26, 2007 at 06:57:43PM -0500, Paul Schlie wrote: Robert Dewar wrote: People always say this, but they don't really realize what they are saying. This would mean you could not put variables in registers, and would essentially totally disable optimization.

- can you provide an example of a single threaded program where the assignment of a variable to a machine register validly changes its observable logical results?

If the program has a hash table that stores pointers to objects, and the hash function depends on pointer addresses, then the choice to allocate some objects in registers rather than in stack frames will change the addresses. If the algorithm depends on the order of hash traversal, then -O2 will change its behavior.

- if the compiler chooses to alias an object's logical storage location utilizing a register, and that object's logical address is well specified by a pointer whose value is itself subsequently utilized, it shouldn't have any logical effect on that object's logical pointer's value, as it's the responsibility of the compiler to preserve the semantics specified by the program. (However, as you appear to be describing an algorithm attempting to rely on the implicit addresses of object storage locations resulting from an assumed calling or allocation convention, and as such assumptions are well beyond the scope of most typical language specifications, it's not clear that such an algorithm should ever be presumed to reliably work regardless of any applied optimizations?)

Isn't that the gist of the entire overflow-wraps issue? Signed overflow is undefined in C and always has been. It's not clear that any program that relies on it should ever be presumed to reliably work regardless of any applied optimization. Best to use a language that has no undefined behaviors if you don't want optimizations to change program behavior. Yes, I know that for one reason or another, many people will not be using such a language.
So we try to do our best to make gcc a useful compiler.

David Daney
Re: Can C and C++ object files be linked into an executable?
Ray Hurst wrote: By the way, was this the correct place to post it? Ray Two very senior GCC developers have already answered your question in the same manner. If you review what they said, you will see that the answer is *no*. David Daney
Bootstrap failure in libjava...
On FC6 x86_64-pc-linux-gnu with the svn trunk r121257 configured like this: ../trunk/configure --with-gmp=/usr/local --with-mpfr=/usr/local --disable-multilib --enable-languages=c,c++,java I am seeing this failure when bootstrapping. It worked for me last week: /home/daney/gccsvn/native-trunk/gcc/gcj -B/home/daney/gccsvn/native-trunk/x86_64-unknown-linux-gnu/libjava/ -B/home/daney/gccsvn/native-trunk/gcc/ -fomit-frame-pointer -fclasspath= -fbootclasspath=../../../trunk/libjava/classpath/lib --encoding=UTF-8 -Wno-deprecated -fbootstrap-classes -g -O2 -c -fsource-filename=/home/daney/gccsvn/native-trunk/x86_64-unknown-linux-gnu/libjava/classpath/lib/classes -MT gnu/java/awt.lo -MD -MP -MF gnu/java/awt.deps @gnu/java/awt.list -o gnu/java/awt.o >/dev/null 2>&1 /home/daney/gccsvn/native-trunk/gcc/jc1: symbol lookup error: /home/daney/gccsvn/native-trunk/gcc/jc1: undefined symbol: __gmp_get_memory_functions make[3]: *** [gnu/java/awt/color.lo] Error 1 make[3]: *** Waiting for unfinished jobs.... David Daney
Re: Bootstrap failure in libjava...
Andrew Pinski wrote:

On FC6 x86_64-pc-linux-gnu with the svn trunk r121257 configured like this:

../trunk/configure --with-gmp=/usr/local --with-mpfr=/usr/local --disable-multilib --enable-languages=c,c++,java

I am seeing this failure when bootstrapping. It worked for me last week:

/home/daney/gccsvn/native-trunk/gcc/gcj -B/home/daney/gccsvn/native-trunk/x86_64-unknown-linux-gnu/libjava/ -B/home/daney/gccsvn/native-trunk/gcc/ -fomit-frame-pointer -fclasspath= -fbootclasspath=../../../trunk/libjava/classpath/lib --encoding=UTF-8 -Wno-deprecated -fbootstrap-classes -g -O2 -c -fsource-filename=/home/daney/gccsvn/native-trunk/x86_64-unknown-linux-gnu/libjava/classpath/lib/classes -MT gnu/java/awt.lo -MD -MP -MF gnu/java/awt.deps @gnu/java/awt.list -o gnu/java/awt.o >/dev/null 2>&1
/home/daney/gccsvn/native-trunk/gcc/jc1: symbol lookup error: /home/daney/gccsvn/native-trunk/gcc/jc1: undefined symbol: __gmp_get_memory_functions

Did you update GMP or MPFR in the last week?

From: http://www.mpfr.org/faq.html

When I link my program with MPFR, I get undefined reference to __gmp. Link your program with GMP. Assuming that your program is foo.c, you should link it using:

cc link.c -lmpfr -lgmp

The MPFR library reference (-lmpfr) should be before GMP's one (-lgmp). Another solution is, with GNU ld, to give all the libraries inside a group:

gcc link.c -Wl,--start-group libgmp.a libmpfr.a -Wl,--end-group

See the INSTALL file and the ld manual for more details. Moreover, if several GMP versions are installed (e.g., one provided by the system and a new one installed by some user), you must make sure that the include and library search paths are consistent. Unfortunately, on various GNU/Linux machines, they aren't by default. Typical errors are: undefined reference to `__gmp_get_memory_functions' in make check when GMP 4.1.4 is installed in /usr/{include,lib} (provided by the system) and GMP 4.2.1 is installed in /usr/local/{include,lib} (installed by the user with configure, make, make install).
-- Pinski

Good point. I should have figured that out. I love this gmp/mpfr requirement. You keep reminding me, but after about two weeks I forget and it bites me again.

Thanks,
David Daney
Re: G++ OpenMP implementation uses TREE_COMPLEXITY?!?!
>>>>> Joe Buck writes:

Joe> There you go again. Mark did not support or oppose rth's change, he just
Joe> said that rth probably thought he had a good reason. He was merely
Joe> opposing your personal attack. We're all human, we make mistakes, there
Joe> can be better solutions.

Joe> If you think that there's a problem with a patch, there are ways to say so
Joe> without questioning the competence or good intentions of the person who
Joe> made it.

Have any of you considered that Steven was using hyperbole as a joke? Are some people so overly sensitized to Steven that they assume the worst and have a knee-jerk reaction criticizing him? The issue began as a light-hearted discussion on IRC. Steven's tone came across as inappropriate in email without context. However, Mark's reply defending RTH was not qualified with "probably", which was an unfortunate omission, IMHO. Encouraging a more collegial tone on the GCC mailing lists is a good goal, but I hope that we don't over-react and create a larger problem.

David
Re: Interesting build failure on trunk
Ismail Dönmez wrote: On Tuesday 30 January 2007 21:44:15 Eric Botcazou wrote: make STAGE1_CFLAGS="-O" BOOT_CFLAGS="-march=i686 -O2 -pipe -fomit-frame-pointer -U_FORTIFY_SOURCE" profiledbootstrap Do not set STAGE1_CFLAGS, you may run into bugs of the bootstrap compiler. And I am still getting floating point exception even with a bare make. Any way to debug this? Paste the failing gcc invocation from your make output into the shell but add -v so you can see the commands passed to the various compiler components. Then run the failing component in gdb with a command similar to that reported by gcc -v. At least that is the way I would do it. David Daney.
MIPS Wrong-code regression.
Richard,

Sometime between 1/7 and 1/16 on the trunk I started getting wrong code on a bunch of java testcases under mipsel-linux.

It looks related to (but not necessarily caused by) this patch:

http://gcc.gnu.org/ml/gcc-patches/2006-03/msg01346.html

For example, if we examine the assembler output of the PR9577.java testcase, we see:

.
.
.
$LBB2:
	lw	$2,40($fp)
	sw	$2,24($fp)
	lw	$2,24($fp)
	move	$4,$2
	.option	pic0
	jal	_ZN4java4lang6ObjectC1Ev
	nop
	.option	pic2
	lw	$28,16($fp)
$LBE2:
	move	$sp,$fp
	lw	$31,36($sp)
	lw	$fp,32($sp)
	addiu	$sp,$sp,40
	j	$31
	nop

The call to _ZN4java4lang6ObjectC1Ev is being generated as non-pic, even though that symbol is defined in libgcj.so. The assembler and linker conspire to jump to address 0x for this call.

It looks like the logic that decides if a symbol is external to the compilation unit is faulty.

Any ideas about where it might have gone wrong? I will try to look into it more tomorrow.

Thanks,
David Daney
Re: MIPS Wrong-code regression.
Andrew Haley wrote: David Daney writes: > Richard, > > Sometime between 1/7 and 1/16 on the trunk I started getting wrong code > on a bunch of java testcases under mipsel-linux. > > It looks related to (but not necessarily caused by) this patch: > > http://gcc.gnu.org/ml/gcc-patches/2006-03/msg01346.html > > For example if we examine the assembler output of the PR9577.java > testcase, we see: > > . > . > . > $LBB2: > lw $2,40($fp) > sw $2,24($fp) > lw $2,24($fp) > move$4,$2 > .option pic0 > jal _ZN4java4lang6ObjectC1Ev > nop > > .option pic2 > lw $28,16($fp) > $LBE2: > move$sp,$fp > lw $31,36($sp) > lw $fp,32($sp) > addiu $sp,$sp,40 > j $31 > nop > > The call to _ZN4java4lang6ObjectC1Ev is being generated as non-pic, even > though that symbol is defined in libgcj.so. The assembler and linker > conspire to jump to address 0x for this call. > > It looks like the logic that decides if a symbol is external to the > compilation unit is faulty. > > Any ideas about where it might have gone wrong? Does http://gcc.gnu.org/ml/gcc/2007-01/msg01184.html fix this? Unfortunately no. The following output is generated with r121186 + Andrew.s patched class.c There are several problems with the generated code for the failing class: public class PR9577 { private native void sayHello (String[] s, Object o); public static void main (String[] args) { PR9577 x = new PR9577( ); x.sayHello( null, null); } } Note that this class has an implicit public constructor that does nothing other than call the super class (java.lang.Object) constructor. 
/home/build/gcc-build/gcc/gcj -v -B/home/build/gcc-build/mipsel-unknown-linux-gnu/libjava/ -B/home/build/gcc-build/gcc/ --encoding=UTF-8 -B/home/build/gcc-build/mipsel-unknown-linux-gnu/libjava/testsuite/../ /home/build/gcc/libjava/testsuite/libjava.cni/PR9577.jar -o j.s -S Here is the entire generated code for the constructor: .globl _ZN6PR9577C1Ev .ent_ZN6PR9577C1Ev .type _ZN6PR9577C1Ev, @function _ZN6PR9577C1Ev: $LFB2: .frame $fp,40,$31 # vars= 8, regs= 2/0, args= 16, gp= 8 .mask 0xc000,-4 .fmask 0x,0 .setnoreorder .setnomacro addiu $sp,$sp,-40 $LCFI0: sw $31,36($sp) $LCFI1: sw $fp,32($sp) $LCFI2: move$fp,$sp $LCFI3: .cprestore 16 sw $4,40($fp) $LBB2: lw $2,40($fp) sw $2,24($fp) lw $2,24($fp) move$4,$2 .option pic0 jal _ZN4java4lang6ObjectC1Ev nop .option pic2 lw $28,16($fp) $LBE2: move$sp,$fp lw $31,36($sp) lw $fp,32($sp) addiu $sp,$sp,40 j $31 nop .setmacro .setreorder $LFE2: .end_ZN6PR9577C1Ev Here are the problems I see: 1) The call to _ZN4java4lang6ObjectC1Ev is absolute instead of via the plt. That function is in a shared library not this compilation unit. 2) It is a public global method. $gp should be initialized, but it is not. If I compile it with -O3 -mshared I get: .globl _ZN6PR9577C1Ev .ent_ZN6PR9577C1Ev .type _ZN6PR9577C1Ev, @function _ZN6PR9577C1Ev: $LFB2: .frame $sp,32,$31 # vars= 0, regs= 1/0, args= 16, gp= 8 .mask 0x8000,-8 .fmask 0x,0 .setnoreorder .cpload $25 .setnomacro addiu $sp,$sp,-32 $LCFI0: sw $31,24($sp) $LCFI1: .cprestore 16 lw $25,%call16(_ZN4java4lang6ObjectC1Ev)($28) jalr$25 nop lw $28,16($sp) lw $31,24($sp) j $31 addiu $sp,$sp,32 .setmacro .setreorder $LFE2: .end_ZN6PR9577C1Ev This time the call to _ZN4java4lang6ObjectC1Ev *is* done via the plt, but we are using .cpload instead of having gcc generate the individual instructions and interleaving them with the $sp adjustment as it normally does. David Daney
Re: MIPS Wrong-code regression.
David Daney wrote: Andrew Haley wrote: David Daney writes: > Richard, > > Sometime between 1/7 and 1/16 on the trunk I started getting wrong code > on a bunch of java testcases under mipsel-linux. OK, it was r120621 (The gcj-elipse branch merge) where things started being broken. There were some large changes in the java front-end with this commit. > > It looks related to (but not necessarily caused by) this patch: > > http://gcc.gnu.org/ml/gcc-patches/2006-03/msg01346.html > > For example if we examine the assembler output of the PR9577.java > testcase, we see: > > . > . > . > $LBB2: > lw $2,40($fp) > sw $2,24($fp) > lw $2,24($fp) > move$4,$2 > .option pic0 > jal _ZN4java4lang6ObjectC1Ev > nop > > .option pic2 > lw $28,16($fp) > $LBE2: > move$sp,$fp > lw $31,36($sp) > lw $fp,32($sp) > addiu $sp,$sp,40 > j $31 > nop > > The call to _ZN4java4lang6ObjectC1Ev is being generated as non-pic, even > though that symbol is defined in libgcj.so. The assembler and linker > conspire to jump to address 0x for this call. > > It looks like the logic that decides if a symbol is external to the > compilation unit is faulty. > > Any ideas about where it might have gone wrong? Does http://gcc.gnu.org/ml/gcc/2007-01/msg01184.html fix this? Unfortunately no. The following output is generated with r121186 + Andrew.s patched class.c There are several problems with the generated code for the failing class: public class PR9577 { private native void sayHello (String[] s, Object o); public static void main (String[] args) { PR9577 x = new PR9577( ); x.sayHello( null, null); } } Note that this class has an implicit public constructor that does nothing other than call the super class (java.lang.Object) constructor. 
/home/build/gcc-build/gcc/gcj -v -B/home/build/gcc-build/mipsel-unknown-linux-gnu/libjava/ -B/home/build/gcc-build/gcc/ --encoding=UTF-8 -B/home/build/gcc-build/mipsel-unknown-linux-gnu/libjava/testsuite/../ /home/build/gcc/libjava/testsuite/libjava.cni/PR9577.jar -o j.s -S Here is the entire generated code for the constructor: .globl _ZN6PR9577C1Ev .ent_ZN6PR9577C1Ev .type _ZN6PR9577C1Ev, @function _ZN6PR9577C1Ev: $LFB2: .frame $fp,40,$31 # vars= 8, regs= 2/0, args= 16, gp= 8 .mask 0xc000,-4 .fmask 0x,0 .setnoreorder .setnomacro addiu $sp,$sp,-40 $LCFI0: sw $31,36($sp) $LCFI1: sw $fp,32($sp) $LCFI2: move$fp,$sp $LCFI3: .cprestore 16 sw $4,40($fp) $LBB2: lw $2,40($fp) sw $2,24($fp) lw $2,24($fp) move$4,$2 .option pic0 jal _ZN4java4lang6ObjectC1Ev nop .option pic2 lw $28,16($fp) $LBE2: move$sp,$fp lw $31,36($sp) lw $fp,32($sp) addiu $sp,$sp,40 j $31 nop .setmacro .setreorder $LFE2: .end_ZN6PR9577C1Ev Here are the problems I see: 1) The call to _ZN4java4lang6ObjectC1Ev is absolute instead of via the plt. That function is in a shared library not this compilation unit. 2) It is a public global method. $gp should be initialized, but it is not. If I compile it with -O3 -mshared I get: .globl _ZN6PR9577C1Ev .ent_ZN6PR9577C1Ev .type _ZN6PR9577C1Ev, @function _ZN6PR9577C1Ev: $LFB2: .frame $sp,32,$31 # vars= 0, regs= 1/0, args= 16, gp= 8 .mask 0x8000,-8 .fmask 0x,0 .setnoreorder .cpload $25 .setnomacro addiu $sp,$sp,-32 $LCFI0: sw $31,24($sp) $LCFI1: .cprestore 16 lw $25,%call16(_ZN4java4lang6ObjectC1Ev)($28) jalr$25 nop lw $28,16($sp) lw $31,24($sp) j $31 addiu $sp,$sp,32 .setmacro .setreorder $LFE2: .end_ZN6PR9577C1Ev This time the call to _ZN4java4lang6ObjectC1Ev *is* done via the plt, but we are using .cpload instead of having gcc generate the individual instructions and interleaving them with the $sp adjustment as it normally does. David Daney
Re: MIPS Wrong-code regression.
David Daney wrote:

David Daney wrote:

Andrew Haley wrote:

David Daney writes:
> Richard,
>
> Sometime between 1/7 and 1/16 on the trunk I started getting wrong code
> on a bunch of java testcases under mipsel-linux.

OK, it was r120621 (the gcj-eclipse branch merge) where things started being broken. There were some large changes in the java front-end with this commit.

I am testing the attached patch that seems to fix the problem.

Richard, sorry for dragging you into this mess.

David Daney

Index: gcc/java/class.c
===
--- gcc/java/class.c	(revision 121441)
+++ gcc/java/class.c	(working copy)
@@ -2510,10 +2512,12 @@
   tree method_name = DECL_NAME (method_decl);
   TREE_PUBLIC (method_decl) = 1;
+
   /* Considered external unless it is being compiled into this object
-     file. */
-  DECL_EXTERNAL (method_decl) = ((is_compiled_class (this_class) != 2)
-				 || METHOD_NATIVE (method_decl));
+     file, or it was already flagged as external. */
+  if (!DECL_EXTERNAL (method_decl))
+    DECL_EXTERNAL (method_decl) = ((is_compiled_class (this_class) != 2)
+				   || METHOD_NATIVE (method_decl));
   if (ID_INIT_P (method_name))
     {
Re: MIPS Wrong-code regression.
Tom Tromey wrote: "David" == David Daney <[EMAIL PROTECTED]> writes: David> The call to _ZN4java4lang6ObjectC1Ev is being generated as non-pic, David> even though that symbol is defined in libgcj.so. The assembler and David> linker conspire to jump to address 0x for this call. Could also be the problem reported at the end of: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30606 Tom I suspect it is the same problem. APH's patch would not have fixed it if it were. David Daney
Re: MIPS Wrong-code regression.
Andrew Haley wrote:

David Daney writes:
> Tom Tromey wrote:
> >>>>> "David" == David Daney <[EMAIL PROTECTED]> writes:
>
> David> The call to _ZN4java4lang6ObjectC1Ev is being generated as non-pic,
> David> even though that symbol is defined in libgcj.so. The assembler and
> David> linker conspire to jump to address 0x for this call.
>
> Could also be the problem reported at the end of:
>
> http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30606
>
> Tom
>
> I suspect it is the same problem. APH's patch would not have fixed it
> if it were.

OK. Does your patch work? If it does, I'm going to trace through jc1 to see if I can find the real origin of this regression.

I am testing a new patch that (I think) fixes the real problem. I am not sure why it regressed, as Richard added the code that was being made to fail last March. The bad code was in class.c when green committed it over 8 years ago. Several months ago Tromey removed it and then added it back a few days later.

The problem is that in is_compiled_class() we were erroneously saying that a candidate class was being emitted to the object file *if* it was the current_class being parsed. This does not hold, because many classes are parsed that are not emitted, so that jc1 can calculate the class layout and load the symbol tables.

The real fix, I think, is the one I made to is_compiled_class(). I left the change to layout_class_method(), where we don't re-check for DECL_EXTERNAL if it is already set, as a micro-optimization. I tested both with and without this and obtained correct results, so it is not really needed. I also wonder if your previous patch setting DECL_EXTERNAL is still needed after this has been applied. I didn't check.

I am currently regression testing the attached patch on x86_64-pc-linux-gnu, and will post it to gcc-patches@ if it passes.

David Daney.
Index: gcc/java/class.c
===
--- gcc/java/class.c	(revision 121441)
+++ gcc/java/class.c	(working copy)
@@ -2134,10 +2134,6 @@ is_compiled_class (tree class)
     return 1;
   if (TYPE_ARRAY_P (class))
     return 0;
-  /* We have to check this explicitly to avoid trying to load a class
-     that we're currently parsing. */
-  if (class == current_class)
-    return 2;
   seen_in_zip = (TYPE_JCF (class) && JCF_SEEN_IN_ZIP (TYPE_JCF (class)));
   if (CLASS_FROM_CURRENTLY_COMPILED_P (class))
@@ -2147,7 +2143,7 @@ is_compiled_class (tree class)
 	 been loaded already. Load it if necessary. This prevent
 	 build_class_ref () from crashing. */
-      if (seen_in_zip && !CLASS_LOADED_P (class))
+      if (seen_in_zip && !CLASS_LOADED_P (class) && (class != current_class))
 	load_class (class, 1);
       /* We return 2 for class seen in ZIP and class from files
@@ -2161,7 +2157,7 @@ is_compiled_class (tree class)
     {
       if (CLASS_FROM_SOURCE_P (class))
 	safe_layout_class (class);
-      else
+      else if (class != current_class)
 	load_class (class, 1);
     }
   return 1;
@@ -2510,10 +2506,12 @@ layout_class_method (tree this_class, tr
   tree method_name = DECL_NAME (method_decl);
   TREE_PUBLIC (method_decl) = 1;
+
   /* Considered external unless it is being compiled into this object
-     file. */
-  DECL_EXTERNAL (method_decl) = ((is_compiled_class (this_class) != 2)
-				 || METHOD_NATIVE (method_decl));
+     file, or it was already flagged as external. */
+  if (!DECL_EXTERNAL (method_decl))
+    DECL_EXTERNAL (method_decl) = ((is_compiled_class (this_class) != 2)
+				   || METHOD_NATIVE (method_decl));
   if (ID_INIT_P (method_name))
     {
US Daylight Savings Time Changes
Greetings, Are there any gcc-related issues with the upcoming changes to the Daylight Savings Time switch in the US starting this year? That is, will programs compiled with gcc (excluding any third-party libraries) have any time-related issues this year? If so, are certain versions of gcc 2007-US-DST-change compliant and other versions not? I assume that gcc-compiled apps just get their time from the OS, so provided the OS is 2007-US-DST-change compliant then it will be OK, but I need to verify. much appreciated, David Karnowski
Re: US Daylight Savings Time Changes
Joe Buck wrote: On Fri, Feb 09, 2007 at 03:52:54PM -0500, Karnowski, David wrote: Are there any gcc-related issues with the upcoming changes to the Daylight Savings Time switch in the US starting this year? That is, will programs compiled with the gcc (excluding any third-party libraries) have any time-related issues this year? If so, are certain versions of gcc 2007-US-DST-change compliant and other versions not? It's a library issue, not a compiler issue. GCC does, however, ship with libgcj, the Java runtime library. Some versions of libgcj may not be aware of this change. David Daney
Re: US Daylight Savings Time Changes
>>>>> Tom Tromey writes: Tom> David probably knows this, but for others, Jakub and Andrew put in a Tom> patch for this today. I think it is only on trunk, not any other Tom> branches. Should this be included in GCC 4.1.2? David
Re: Some thoughts and questions about the data flow infrastructure
>>>>> Vladimir Makarov writes: Vlad> Especially I did not like David Edelsohn's phrase "and no new Vlad> private dataflow schemes will be allowed in gcc passes". It was not Vlad> his first such expression. Such phrases are killing competition, which Vlad> is bad for gcc. What if the new specialized scheme is faster? What Vlad> if somebody decides to write another, better df infrastructure from Vlad> scratch to solve the coming df infrastructure problems? First, "another better df infrastructure" is not a private dataflow scheme. If someone else wants to rewrite it from scratch and do a better job, be my guest. Most commercial compilers rewrite their infrastructure every 5 to 10 years; GCC accretes kludges. The other commercial compilers learn from their algorithms and previous design to implement a new, maintainable infrastructure that meets the needs of all algorithms. Second, this has nothing to do with competition. As I and others explained on the IRC chat, the new df is general infrastructure. If you can speed it up more, that's great. If you need another dataflow problem solved, add it to the infrastructure. GCC is not served well by five (5) different dataflow solvers, each with its own quirks, bugs, duplicative memory and duplicative maintenance. It would be a waste to improve GCC's infrastructure and then have the hard work undermined by recreating the duplication without good justification. Third, I am disappointed that you chose to make this argument personal. David
gcc port to StarCore
Hi, I am converting my StarCore port of gcc version 3.2 to the current version 4.1.1. The following program (part of gcc's testsuite):

void bar(int *pc)
{
  static const void *l[] = {&&lab0, &&end};

  foo(0);
  goto *l[*pc];
lab0:
  foo(0);
  pc++;
  goto *l[*pc];
end:
  return;
}

successfully compiled under 3.2, but under 4.1.1 I am getting the following error:

x.c: In function ‘bar’:
x.c:13: error: unrecognizable insn:
(jump_insn 26 25 27 2 (set (pc)
        (reg:SI 52)) -1 (nil)
    (nil))
x.c:13: internal compiler error: in extract_insn, at recog.c:2084

The .md file defines the (mandatory) "indirect_jump", which allows (a certain kind of) register as parameter. What is missing? Thank you in advance, David -- David Livshin http://www.dalsoft.com
Re: Some thoughts and questions about the data flow infrastructure
Do you realize how confrontational your emails sound? Have you considered asking about the technical reasoning and justification instead of making unfounded assertions? Do you want everyone to refute your incorrect facts point by point? David
Re: gcc port to StarCore
Ian Lance Taylor wrote: David Livshin <[EMAIL PROTECTED]> writes: x.c: In function bar: x.c:13: error: unrecognizable insn: (jump_insn 26 25 27 2 (set (pc) (reg:SI 52)) -1 (nil) (nil)) x.c:13: internal compiler error: in extract_insn, at recog.c:2084 The .md file defines ( mandatory ) "indirect_jump" which allows ( a certain kind of ) register as parameter. What is missing? Either the pattern doesn't match indirect_jump, or the instruction predicate returns zero, or the operand predicate returns zero. Beyond that we don't have enough information to say. Ian Is there a way to find out what was the pattern? And if the operand predicate fails, shouldn't gcc attempt to reassign the ( register , "(reg:SI 52)" ) operand in order to satisfy the predicate? -- David Livshin http://www.dalsoft.com
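Ian's diagnosis can be made concrete with a sketch of how the predicate and the constraint divide the work in an indirect_jump pattern (the predicate name is the standard generic one; the constraint letter 'a' and the output template are hypothetical stand-ins for whatever the actual StarCore port uses):

```
;; The predicate is checked at insn generation time, when operand 0 is
;; still a pseudo such as (reg:SI 52).  If the predicate accepts only
;; a special jump-register class, recognition fails right there with
;; "unrecognizable insn".  The constraint, by contrast, is consulted
;; by reload, which will then move the value into the required class.
(define_insn "indirect_jump"
  [(set (pc) (match_operand:SI 0 "register_operand" "a"))]
  ""
  "jmp\t%0")
```

This also answers the second question: gcc does not retry with a different register when a predicate rejects an operand. Predicates gate recognition; constraints are what drive reload to "reassign" operands. The usual arrangement is therefore a permissive predicate (register_operand) with the restriction expressed only in the constraint.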
Re: Some thoughts and questions about the data flow infrastructure
>>>>> Jeffrey Law writes: Jeff> I think everyone would be best served if they realized none of this is Jeff> personal -- it's about technical decisions in an effort to improve GCC. GCC development is far from perfect. The recent model generally seems to be effective, although there is plenty of room for improvement. I am not trying to discourage feedback and comments, but complaining is easy -- throwing stones is easy. The question is: How are we going to improve it and/or fix it? There are many things that we may want to do, but what can we do? And most importantly: Are the alternatives really better or just look better from the outside? This discussion has strayed from the specific dataflow topic, but for that specific project (and other projects), I would encourage the GCC community to get involved and improve the design and implementation of the feature. Waiting for perfect is not a good strategy. Competition is good, but GCC developers generally do not reject offers of assistance. A lot of separate, duplicative projects may not get done, while compromising on a single design with everyone contributing to the implementation has a better chance of completion. In the long run, we get a single, functional, complete, good but imperfect, and easier to maintain feature. David
Re: Some thoughts and questions about the data flow infrastructure
>>>>> Vladimir Makarov writes: Vlad> I am just trying to convince that the proposed df infrastructure is not Vlad> ready and might create serious problems for this release and future Vlad> development because it is slow. Danny is saying that the beauty of the Vlad> infrastructure is just in improving it in one place. I agree with this Vlad> partially. I am only afraid that a solution for a faster infrastructure Vlad> (e.g. another slimmer data representation) might change the interface Vlad> considerably. I am not sure that I can convince in this. But I am Vlad> more worried about the 4.3 release and I really believe that inclusion of Vlad> the data flow infrastructure should be the 1st step of stage 1 to give Vlad> people more time to solve at least some problems. DF has been successfully tested on many more targets than originally requested by the GCC SC. The original requirement for targets was the same as for the Tree-SSA merge. Tree-SSA continued to be cleaned up, fixed, and improved after it was merged. Tree-SSA performance improved by the time of the release and was not required to be perfect on day one. DF will be good when merged and will continue to improve on mainline in Stage 2. GCC previously has not had a requirement that a patch be committed at the beginning of Stage 1. We understand your concerns, but unsubstantiated assertions like "might create serious problems..." are not very helpful or convincing arguments. You are selectively quoting other developers and pulling their comments out of context to support your objections. Why, specifically, is the df infrastructure not ready? Have you investigated the current status? Have you looked at the design documents, implementation, and comments? Have you followed the mailing list discussions and patches? Why is it unacceptable for it to mature further on mainline like Tree-SSA?
Why is it better to delay merging an announced, planned, and approved project that the developers believe is ready, when delaying will impose the complexity and work of maintaining a branch with invasive changes for a full release cycle? It took a long time to fix all of the current users of dataflow, and recent mainline patches continue to introduce new bugs. Why are the discussions about the current performance, known performance problems, and specific plans for performance improvement throughout the rest of the release cycle insufficient to address your concerns? David
Re: Some thoughts and questions about the data flow infrastructure
>>>>> Vladimir Makarov writes: Vlad> I did investigate the current status of the infrastructure on future Vlad> mainstream processor Core2 (> 11% slower compiler, worse code and bigger Vlad> code size). That is the reason why I started this. You do not believe that this is a concern of others? You do not believe that this will be addressed after the merge? This could be an example of GCC optimization becoming more aggressive with increased accuracy, causing increased register pressure problems, which is particularly detrimental to GCC for IA-32. Why don't we analyse and fix any problems instead of trying to keep GCC's infrastructure weak and stupid to cover up its inadequacies? Complaining about and blocking the merge of df does not solve the problem, it only delays it. David
Re: 40% performance regression SPEC2006/leslie3d on gcc-4_2-branch
>>>>> Vladimir Sysoev writes: Vladimir> It looks like your changeset listed below makes a performance Vladimir> regression of ~40% on SPEC2006/leslie3d. I will try to create a minimal Vladimir> test for this issue this week and update you in any case. I believe that this is known and expected. GCC 4.2 includes some conservative alias analysis fixes for correctness that hurt performance. David
RS6000 call pattern clobbers
Richard, While fixing ports in preparation for the new dataflow infrastructure, we found a problem with the way that the rs6000 port represents clobbers and uses of registers in call and sibcall patterns. The patterns clobber and use the rs6000 link register as a match_scratch with constraint of the link register class: (clobber (match_scratch:SI 0 "=l")) instead of clobbering the link register hard register directly in the early insn generation. This style dates to the original rs6000 port. A naked use that starts as a pseudo causes problems for dataflow. Do you remember why you wrote the call patterns this way? Was there a problem with reload and clobbers of hard registers in a register class containing a single register or some other historical quirk? Thanks, David
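The two representations at issue, side by side (illustrative RTL fragments; LR_REGNO stands in for however the port's headers name the link register's hard register number):

```
;; Current rs6000 style: the clobbered "register" begins life as a
;; pseudo, and only the constraint 'l' steers it into the
;; link-register class at reload time.
(clobber (match_scratch:SI 0 "=l"))

;; Dataflow-friendly alternative: name the hard register explicitly
;; when the call insn is first generated, so df sees the clobber of
;; the link register from the start.
(clobber (reg:SI LR_REGNO))
```

With the first style, any pass that runs before reload sees a use/clobber of an ordinary pseudo, which is exactly the "naked use that starts as a pseudo" problem described above.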
Re: getting spam
Alexander wrote: Hi. These days I began to get spam in my mail box. I found, that my mail address ([EMAIL PROTECTED]) is published on : http://gcc.gnu.org/ml/gcc/2006-08/msg00227.html Please, remove it from there, thanks. And now it is here also: http://gcc.gnu.org/ml/gcc/2007-02/msg00606.html But this time it is not mangled because you put it in the body of the message. These are public mailing lists. They are archived in many places. It is not possible to cancel/remove a message once it has been sent. David Daney
Re: vcond implementation in altivec
>>>>> Devang Patel writes: >> Is there a reason why op0 is V4SF Devang> It is destination so, yes this is wrong. >> and op1 is V4SI (and not V8HI)? Devang> condition should be v4si, but it is not op1. So this is also not correct. >> And also, why not use if_then_else instead of unspec (in all vcond's)? Devang> I did not try that path. May be I did not know about it at that time. Patches welcome. David
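For reference, the if_then_else formulation mentioned here would look roughly like the following. This is a sketch only: the operand numbering follows the vcond optab convention (0 = destination, 1/2 = selected values, 3 = comparison operator, 4/5 = compared operands), but the insn name, condition, and template are hypothetical rather than the actual altivec.md contents.

```
(define_insn "vcond_v4si_sketch"
  [(set (match_operand:V4SI 0 "register_operand" "=v")
        (if_then_else:V4SI
          (match_operator 3 "comparison_operator"
            [(match_operand:V4SI 4 "register_operand" "v")
             (match_operand:V4SI 5 "register_operand" "v")])
          (match_operand:V4SI 1 "register_operand" "v")
          (match_operand:V4SI 2 "register_operand" "v")))]
  "TARGET_ALTIVEC"
  "...")
```

The advantage over an unspec is that the RTL optimizers can reason about the select semantics instead of having to treat the operation as opaque.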
Re: We're out of tree codes; now what?
I thought that the Tuples conversion was supposed to address this in the long term. David
Re: Question for removing trailing whitespaces (not vertical tab) from source
>>>>> Kai Tietz writes: > Also I wrote, while doing a small tool for that, a > feature to replace horiz. tabs by spaces. But the question is which > width should be used? Tabs are always equivalent to 8 spaces. But please DO NOT replace tabs in the GCC sources with spaces. Eight spaces should be tabs. Thanks, David
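For anyone who finds tabs inconvenient to read, expansion can happen on the viewing side without touching the sources at all. A small illustration using GNU coreutils' expand(1), with the input inlined so the example is self-contained:

```shell
# Tab stops in GCC sources are every 8 columns; expand(1) renders tabs
# as spaces on output only, leaving the file itself untouched.
printf 'int\tx;\n' | expand -t 8
```

Running this prints "int     x;": the tab after the three characters of "int" advances the output to column 8. In practice one would pipe a real source file through the same command (e.g. into a pager) rather than inlined text.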
MIPS bootstrap broken, make compare fails...
Bootstrapping the trunk (Revision: 122847) on a mipsel-linux system configured thusly: $ ../gcc/configure --with-arch=mips32 --with-float=soft --disable-java-awt --without-x --disable-tls --enable-__cxa_atexit --disable-jvmpi --disable-static --disable-libmudflap --enable-languages=c,c++,java I get: $ make compare Comparing stages 2 and 3 warning: ./cc1-checksum.o differs warning: ./cc1plus-checksum.o differs Bootstrap comparison failure! ./cp/call.o differs ./cp/decl.o differs ./cp/pt.o differs ./cp/class.o differs ./cp/error.o differs ./cp/rtti.o differs ./cp/init.o differs ./cp/search.o differs ./cp/semantics.o differs ./cp/tree.o differs ./cp/mangle.o differs ./cp/name-lookup.o differs ./java/class.o differs ./java/verify-impl.o differs ./java/jcf-parse.o differs ./java/jcf-dump.o differs ./build/genmodes.o differs ./build/rtl.o differs ./build/gensupport.o differs ./build/genflags.o differs ./build/genpreds.o differs ./build/genconfig.o differs ./build/genattrtab.o differs ./build/genautomata.o differs ./build/genextract.o differs ./c-pragma.o differs ./c-decl.o differs ./c-typeck.o differs ./c-common.o differs ./c-opts.o differs ./c-format.o differs ./prefix.o differs ./ggc-page.o differs ./alias.o differs ./bt-load.o differs ./builtins.o differs ./caller-save.o differs ./cfg.o differs ./cfgbuild.o differs ./cfgcleanup.o differs ./cfgexpand.o differs ./cfglayout.o differs ./cfgloop.o differs ./cfgloopanal.o differs ./cfgloopmanip.o differs ./cfgrtl.o differs ./combine.o differs ./coverage.o differs ./cse.o differs ./cselib.o differs ./ddg.o differs ./df-core.o differs ./df-problems.o differs ./df-scan.o differs ./dominance.o differs ./dwarf2asm.o differs ./dwarf2out.o differs ./emit-rtl.o differs ./except.o differs ./expmed.o differs ./final.o differs ./flow.o differs ./fold-const.o differs ./function.o differs ./gcse.o differs ./gimplify.o differs ./global.o differs ./gtype-desc.o differs ./haifa-sched.o differs ./jump.o differs ./lambda-code.o differs 
./local-alloc.o differs ./loop-invariant.o differs ./loop-iv.o differs ./loop-unroll.o differs ./lower-subreg.o differs ./modulo-sched.o differs ./omega.o differs ./omp-low.o differs ./opts.o differs ./passes.o differs ./pointer-set.o differs ./postreload-gcse.o differs ./postreload.o differs ./profile.o differs ./real.o differs ./recog.o differs ./regclass.o differs ./regmove.o differs ./regrename.o differs ./reload.o differs ./reload1.o differs ./rtl.o differs ./rtlanal.o differs ./sbitmap.o differs ./sched-deps.o differs ./sched-rgn.o differs ./simplify-rtx.o differs ./stmt.o differs ./struct-equiv.o differs ./toplev.o differs ./tree-cfg.o differs ./tree-complex.o differs ./tree-data-ref.o differs ./tree-dump.o differs ./tree-eh.o differs ./tree-if-conv.o differs ./tree-outof-ssa.o differs ./tree-pretty-print.o differs ./tree-ssa-address.o differs ./tree-ssa-coalesce.o differs ./tree-ssa-dom.o differs ./tree-ssa-live.o differs ./tree-ssa-loop-im.o differs ./tree-ssa-loop-ivopts.o differs ./tree-ssa-loop-manip.o differs ./tree-ssa-loop-niter.o differs ./tree-ssa-operands.o differs ./tree-ssa-phiopt.o differs ./tree-ssa-pre.o differs ./tree-ssa-structalias.o differs ./tree-ssa.o differs ./tree-vect-analyze.o differs ./tree-vect-patterns.o differs ./tree-vect-transform.o differs ./tree-vectorizer.o differs ./tree-vn.o differs ./tree.o differs ./value-prof.o differs ./var-tracking.o differs ./varasm.o differs ./mips.o differs ./cgraphunit.o differs ./ipa-inline.o differs ./ipa-utils.o differs ./gcov.o differs ./gcc.o differs ./jvspec.o differs make: *** [compare] Error 1 My last successful bootstrap on this system was from March 6 (revision 122630). I am going to update and try again. David Daney
Re: GCC 4.2 branch comparision failure building Java
Paolo Bonzini wrote: For 4.3, we can use --enable-stage1-languages=all when building the RCs. I can prepare a patch to do that automatically when --enable-generated-files-in-srcdir is passed. That should not be needed on the trunk, as the .y files in question (gcc/java/parse.y and gcc/java/parse-scan.y) do not exist on the trunk. They have been removed. David Daney
Re: Google SoC Project Proposal: Better Uninitialized Warnings
>>>>> Joe Buck writes: Joe> What worries me is that we can't afford to make -O0 run significantly Joe> slower than it does now. Cycle speeds are no longer increasing, we have Joe> to be very careful about slowing things down. Adding more passes does not necessarily slow down the compiler, as IBM found with XLC. If one can remove enough dead code / statements / insns / IR, one performs more processing on less data leading to less overall work and faster compilation at -O0. David
Tobias Burnus and Brooks Moses appointed Fortran maintainers
I am pleased to announce that the GCC Steering Committee has appointed Tobias Burnus and Brooks Moses as Fortran maintainers. Please join me in congratulating Tobias and Brooks on their new role. Tobias and Brooks, please update your listings in the MAINTAINERS file. Happy hacking! David