Re: Potential fix for rdar://4658012
[I added the gcc list too since this is more than just a discussion about a single patch.]

> > So, I went looking for an approach which would fix this in the C++
> > front-end instead. However, I discovered that the C front-end has a
> > similar problem.
> >
> > And so, not changing the middle-end would mean changing both C and C++
> > front-ends (note that this isn't a gcc 3.x regression for C).
>
> Given that this fixes a quite serious regression and the Ada failure on
> Sparc was quite vague I would argue to put it in 4.3 and later backport
> it to 4.2.1 if this problem no longer occurs. Or to put that on the burden
> of Ada/Sparc completely.

I think it's a dangerous precedent to start judging patches by who bears the burden if something breaks, or by which patch is smaller. If C and C++ have the same bug, that's hardly surprising given their common lineage. We should be judging patches purely on technical grounds. It would certainly be worth exhibiting the C/C++ patch so people can see whether it is correct.

As to the middle-end patch, like any other similar patch: if you remove code that says that something might be used in a certain way, you bear the burden of proving that it can't be used in that way. In this case, as I said, it may well be that this can no longer happen. Indeed, a lot of the similar code for temporaries may no longer be needed. And I'd argue that now that we have GIMPLE, we could arrange things so that NO temporaries are allocated by the middle-end, and that would be a good thing. I spent a lot of time in the old days dealing with obscure bugs in that code, and it would be good for it all to go away. But I'm not comfortable with simply deleting a piece of it now without the proper analysis.

I agree the regression is serious, but let's see both possibilities (middle-end and front-end) so a technical decision can be made as to the best approach.
Re: Potential fix for rdar://4658012
On 8/26/06, Richard Kenner <[EMAIL PROTECTED]> wrote:
> I agree the regression is serious, but let's see both possibilities
> (middle-end and front-end) so a technical decision can be made as to the
> best approach.

I completely agree. But only up to the point defining "proper analysis" - bootstrapping and regtesting is required for a patch to be accepted and I think it is a valid request from your side to require testing of Ada on Sparc for this patch as you remember problems on that platform. Given that this succeeds, requiring further "proper analysis" or proof or whatever is putting the burden on the wrong side and not reasonable.

(I don't know if there is a way to do this Ada on Sparc testing with a cross compiler on a more available platform - maybe you can suggest a target triple that has a simulator available?)

Thanks,
Richard.
Re: Potential fix for rdar://4658012
> I completely agree. But only up to the point defining "proper analysis" -
> bootstrapping and regtesting is required for a patch to be accepted and
> I think it is a valid request from your side to require testing of Ada on
> Sparc for this patch as you remember problems on that platform. Given that
> this succeeds, requiring further "proper analysis" or proof or whatever is
> putting the burden on the wrong side and not reasonable.

I disagree. Testing is not, and should never be, a substitute for analysis. A patch is proposed because we have a reason to believe it's correct. Then we test to increase our confidence that it is, indeed, correct. But both parts are essential for any patch.

Here we have a situation where we'd like to see some optimization done (the merging of temporaries). There's some code that suppresses that optimization in one case because that optimization was unsafe in that case. It is not acceptable to find out whether that code is still needed merely by running a test suite: there has to be some analysis that says what changed to make that code no longer needed. The burden of proof on that analysis has to be on the person who's proposing that the code is no longer needed.
Re: fp-int-convert-timode, TImode and Darwin
On Aug 25, 2006, at 7:35 PM, Eric Christopher wrote:
> Yes, it's a necessary part of the x86_64 work - the question is whether
> or not x86_64-darwin might go in for 4.2 at all. Mark has recently stated
> his position (http://gcc.gnu.org/ml/gcc-patches/2006-08/msg00924.html) on
> patches that are vaguely similar. I'd be happy to review those patches
> for consideration for 4.2 and to nominate them for Mark's blessing. I'd
> like to see 4.2 support x86_64-darwin.

Just to let people know, we have already developed the work and GMed/FCS'd it, so it should be near releasable quality already.
Re: REG_OK_STRICT and EXTRA_CONSTRAINT
Paolo Bonzini <[EMAIL PROTECTED]> writes:
>> If the general replacement of REG_OK_STRICT is indeed
>> reload_in_progress || reload_completed, then the substitution
>> *should* of course in principle be correct (as in: subject to
>> testing. ;)
>
> Sure. After I'm done with the base_reg_class changes, I will try
> modifying address_operand to be something along the lines of your U
> constraint:
>
>   return strict
>          ? strict_memory_address_p (Pmode, x)
>          : memory_address_p (Pmode, x);
>
> which I'm of course hoping to write as
>
>   return reload_in_progress || reload_completed
>          ? strict_memory_address_p (Pmode, x)
>          : memory_address_p (Pmode, x);
>
> This will also affect the 'p' constraint, and in the end address your
> "FIXME: This is arguably a bug in gcc." Of course, again, this is
> subject to testing.

Have you prototyped this sort of change to see what kind of effect it might have on a friendly target? I'm not sure it's right. reload sometimes needs to change and re-recognise instructions with unreloaded operands while reload_in_progress. (See e.g. eliminate_regs_in_insn.) I think it's important to let reload choose between strict and non-strict matching.

Richard
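For readers less familiar with the reload machinery, here is a minimal sketch of the predicate shape Paolo describes, with the strict/non-strict choice keyed off the global reload flags rather than an explicit REG_OK_STRICT-style parameter. This is an illustration only, not a patch from this thread: the function name is invented, the header list is approximate for a GCC-4.x-era source file, and whether making the choice implicitly is safe while reload is rewriting instructions is exactly the concern raised above.

/* Hypothetical sketch only: not the real address_operand and not a
   proposed patch.  It shows the shape of the change being discussed,
   choosing strict or non-strict address validation from the reload state.
   The include list is approximate.  */
#include "config.h"
#include "system.h"
#include "coretypes.h"
#include "tm.h"
#include "rtl.h"
#include "expr.h"
#include "recog.h"

int
sketch_address_operand (rtx op, enum machine_mode mode ATTRIBUTE_UNUSED)
{
  /* Before reload, pseudo registers may still appear inside addresses,
     so only the non-strict check is appropriate.  During and after
     reload, every address must already be strictly valid.  */
  if (reload_in_progress || reload_completed)
    return strict_memory_address_p (Pmode, op);

  return memory_address_p (Pmode, op);
}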
VIEW_CONVERT_EXPR vs alias pass
If we have the following IR (before the first may_alias pass):

f1 (a)
{
  short int b;
  short unsigned int b.2;
  short int b.1;
  int D.1525;
  short unsigned int a.0;

:
  a.0_2 = (short unsigned int) a_1;
  #   b_4 = V_MUST_DEF ;
  VIEW_CONVERT_EXPR<short unsigned int>(b) = a.0_2;
  b.1_5 = b;
  b.2_6 = (short unsigned int) b.1_5;
  D.1525_7 = (int) b.2_6;
  return D.1525_7;
}

The may_alias pass removes the TREE_ADDRESSABLE on b, so we ICE in the checking pass after may_alias runs. Does someone have an idea on where it is going wrong?

I am trying to fix PR 26069 but am running into this ICE for the following code:

unsigned short f1(short a)
{
  short b;
  *(unsigned short*)&b = a;
  return b;
}

I don't know if this problem shows up in Ada code, but it seems like it could.

The error I get is:

t.c: In function ‘f1’:
t.c:11: error: statement makes a memory store, but has no V_MAY_DEFS nor V_MUST_DEFS
VIEW_CONVERT_EXPR<short unsigned int>(b_9) = a.0_2;
t.c:11: internal compiler error: verify_ssa failed
Please submit a full bug report, with preprocessed source if appropriate.
See http://gcc.gnu.org/bugs.html for instructions.

Thanks,
Andrew Pinski
Re: libstdc++, -m64 and can't find atom for N_GSYM stabs
Eric,
   I just reran "make -k check RUNTESTFLAGS='--target_board "unix{-m64}"'" in the darwin_objdir/powerpc-apple-darwin8/libstdc++-v3 directory after applying the following patch to suppress the "can't find atom for N_GSYM stabs" ld64 linker warnings...

--- gcc-4.2-20060825/libstdc++-v3/testsuite/lib/prune.exp.org	2006-08-26 11:22:52.0 -0400
+++ gcc-4.2-20060825/libstdc++-v3/testsuite/lib/prune.exp	2006-08-26 11:23:39.0 -0400
@@ -29,5 +29,7 @@
     regsub -all "(^|\n)\[^\n\]*: Additional NOP may be necessary to workaround Itanium processor A/B step errata" $text "" text
     regsub -all "(^|\n)\[^\n*\]*: Assembler messages:\[^\n\]*" $text "" text
 
+    regsub -all "(^|\n)can't find atom for N_GSYM stabs \[^\n\]* in \[^\n\]*" $text "" text
+
     return $text
 }

Once the noise from those linker warnings is removed from the libstdc++-v3 testsuite results at -m64 on Darwin PPC, we find that the failures drop from 54 to just 6. So we actually have only four additional libstdc++-v3 testsuite failures at -m64 compared to -m32. These are...

FAIL: 21_strings/basic_string/cons/char/1.cc execution test
FAIL: 21_strings/basic_string/cons/wchar_t/1.cc execution test
FAIL: 21_strings/basic_string/insert/char/1.cc execution test
FAIL: 21_strings/basic_string/insert/wchar_t/1.cc execution test

which certainly look as if they are all related bugs. Can you try the above check on x86_64 and see how many regressions you have when the linker warnings are suppressed?
                     Jack
ps Do you want to create a PR for these failures at -m64 or should I?
gcc-4.2-20060826 is now available
Snapshot gcc-4.2-20060826 is now available on

  ftp://gcc.gnu.org/pub/gcc/snapshots/4.2-20060826/

and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 4.2 SVN branch
with the following options: svn://gcc.gnu.org/svn/gcc/trunk revision 116473

You'll find:

gcc-4.2-20060826.tar.bz2              Complete GCC (includes all of below)
gcc-core-4.2-20060826.tar.bz2         C front end and core compiler
gcc-ada-4.2-20060826.tar.bz2          Ada front end and runtime
gcc-fortran-4.2-20060826.tar.bz2      Fortran front end and runtime
gcc-g++-4.2-20060826.tar.bz2          C++ front end and runtime
gcc-java-4.2-20060826.tar.bz2         Java front end and runtime
gcc-objc-4.2-20060826.tar.bz2         Objective-C front end and runtime
gcc-testsuite-4.2-20060826.tar.bz2    The GCC testsuite

Diffs from 4.2-20060819 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-4.2 link is updated and a message is sent to the gcc list. Please do not use a snapshot before it has been announced that way.
Re: gcc trunk vs python
Would any of the gcc developers care to drop by the python-dev mailing list and give the author of python an answer?

http://mail.python.org/pipermail/python-dev/2006-August/068482.html

On 8/26/06, Jack Howarth wrote:
> I discovered that gcc 4.2 exposes a flaw with
> signed integer overflows in python. This bug and the
> necessary fix have been discussed in detail on the gcc
> mailing list. I have filed a detailed bug report with
> the recommended patch proposed by the gcc developers.
> This problem should be addressed BEFORE python 2.5 is
> released. The bug report is...
>
> [ 1545668 ] gcc trunk (4.2) exposes a signed integer overflows
>
> in the python sourceforge bug tracker. Thanks in advance
> for attempting to fix this before Python 2.5 is released.

I'm not sure I follow why this isn't considered a regression in GCC. Clearly, on all current hardware, x == -x is also true for the most negative int (0x80000000 on a 32-bit box). Why is GCC attempting to break our code (and then blaming us for it!) by using the C standard's weasel words that signed integer overflow is undefined, despite that it has had a traditional meaning on 2's complement hardware for many decades? If GCC starts to enforce everything that the C standard says is undefined then very few programs will still work...

--
--Guido van Rossum (home page: http://www.python.org/~guido/)
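To make the disagreement concrete, here is a small standalone example of the kind of overflow check at issue. It is an illustration only, not code from the Python sources or from the bug report: the check relies on -INT_MIN wrapping back to INT_MIN, which is what 2's complement hardware does but which C99 leaves undefined, so an optimizer that assumes signed arithmetic never overflows may fold the test to false unless -fwrapv is given.

/* Illustration only, not taken from Python.  Detect the most negative
   int by relying on two's-complement wraparound of negation.  With signed
   overflow treated as undefined, the compiler may conclude that a negative
   x can never equal -x and fold this to 0; -fwrapv preserves the
   traditional wrapping behaviour.  */
#include <limits.h>
#include <stdio.h>

static int
is_most_negative (int x)
{
  return x < 0 && x == -x;   /* negation wraps only for INT_MIN */
}

int
main (void)
{
  /* Prints 1 when compiled with -fwrapv (or on older compilers);
     when overflow is assumed impossible the result is not guaranteed.  */
  printf ("%d\n", is_most_negative (INT_MIN));
  return 0;
}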
Re: VIEW_CONVERT_EXPR vs alias pass
On Sat, 2006-08-26 at 09:48 -0700, Andrew Pinski wrote:
> If we have the following IR (before the first may_alias pass):
>
> The may_alias pass removes the TREE_ADDRESSABLE on b so we ICE in the
> checking pass after may_alias runs. Does someone have an idea on where
> it is going wrong?

I have a fix for the removal of TREE_ADDRESSABLE now, after figuring out that we should be adding the decl to the addressable_taken set of the statement during the call to get_expr_operands.

-- Pinski
Re: [Python-Dev] gcc 4.2 exposes signed integer overflows
Dan,
   Thanks for the detailed reply on the python-dev mailing list. I had a feeling we would run into resistance on this (otherwise these issues would already have been fixed). That's why discussions like the "gcc trunk vs python" thread can be useful even though they are off-topic for the list. Once the issue has been thoroughly discussed for one package, we can always just reference the thread for other package maintainers who have reservations about the necessity of conforming to gcc's strict adherence to standards.
                   Jack
Re: gcc trunk vs python
Jack Howarth wrote:
> Would any of the gcc developers care to drop by the python-dev mailing
> list and give the author of python an answer?
> http://mail.python.org/pipermail/python-dev/2006-August/068482.html

Guido van Rossum wrote:
> I'm not sure I follow why this isn't considered a regression in GCC.
> Clearly, on all current hardware, x == -x is also true for the most
> negative int (0x80000000 on a 32-bit box). Why is GCC attempting to break
> our code (and then blaming us for it!) by using the C standard's weasel
> words that signed integer overflow is undefined, despite that it has had
> a traditional meaning on 2's complement hardware for many decades? If GCC
> starts to enforce everything that the C standard says is undefined then
> very few programs will still work...

First, you can always use -fwarpv and retain the old behavior. Any code that breaks, or that you suspect may break, under the new behavior can use this flag.

Second, consider the following example. Once upon a time,

  int *p;  /* No init!!! */
  if (*p && 0)
    *p = 0;

would not crash (DOS days). One could say "Why should Microsoft or Borland crash our code? Clearly, the value of p should never be read or written." This example broke when we got memory protection. Memory protection is a good thing, right? Similarly, the new gcc behavior allows for better optimization. Also, we are told that some boxes have different codes for signed and unsigned operations, where signed overflows either trap or saturate (IIRC, on x86, MMX saturates on overflow).

Once I had a similar claim to yours (against this overflow behavior):
http://gcc.gnu.org/ml/gcc/2005-06/msg01238.html

But after extensive searches, I could not find code that breaks due to this new behavior of overflow. Such code is apparently rare.

Michael
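As a concrete illustration of the optimization argument (an example of my own, not one from the thread): when the compiler is allowed to assume that signed addition never wraps, a bounds-style test like the one below can be folded to a constant, while -fwrapv forces it to be evaluated at run time.

/* Sketch of the optimization enabled by treating signed overflow as
   undefined (illustration only).  Under that assumption x + 1 > x is
   always true, so the function may be compiled to "return 1"; with
   -fwrapv the comparison must survive, because x + 1 wraps to INT_MIN
   when x == INT_MAX.  */
int
next_is_greater (int x)
{
  return x + 1 > x;
}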
Re: gcc trunk vs python
Michael Veksler wrote:
> First, you can always use -fwarpv and retain the old behavior. Any code that
                            ^^^^^^^
  -fwrapv