Re: abt compiler flags
On Thu, Oct 12, 2006 at 10:44:59PM -0700, Mohamed Shafi wrote:
> Hello all,
>
> During regression tests if i want to disable some features like trampolines
> i can give -DNO_TRAMPOLINES as a compiler flag.

By default, -DNO_TRAMPOLINES is set in testsuite/lib/gcc.exp if the DejaGnu
target info reports that the support doesn't exist.  The only other feature
treated this way is no_label_values.

> Do i have similar flags for profiling and PIC?

Not for the user to disable them, although tests for these features check
whether the support is available to the target.  The procs that check for the
support are check_profiling_available and check_effective_target_fpic in
testsuite/lib/target-supports.exp.

The .exp files for profiling tests call check_profiling_available and skip
the directory if that proc returns 0.

"fpic" is an effective-target keyword that can be used alone or in
combination with other target information to limit when a test is run, e.g.
"{ dg-do run { target fpic } }".

There's no way to disable tests for these features if the procs in
target-supports.exp claim that they are supported.  If your target doesn't
support them, you can modify those procs to skip the tests.

Janis
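As a hypothetical illustration of the "fpic" keyword (the testcase skeleton below is invented; only the directive forms follow the conventions described above), a test limited to targets with PIC support might begin:

```
/* { dg-do run { target fpic } } */
/* { dg-options "-fpic" } */

/* Test body that is only meaningful when compiled as
   position-independent code goes here.  */
```

On targets where check_effective_target_fpic returns 0, DejaGnu marks the test UNSUPPORTED instead of running it.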
Re: gcc/testsuite incorrect checking of compiler's retval in dg-compile
On Wed, Oct 18, 2006 at 11:22:40PM +0200, Bernhard Fischer wrote:
> Hi,
>
> I need to check for a non-0 return value in dg-compile testcases in gcc.

The compiler's exit status is only known within code from the DejaGnu
product, in proc default_target_compile in DejaGnu's target.exp.  If there
was any output from the compilation then the exit status isn't passed back to
the testsuite support that's under our control.  I don't see a clean way to
get around this.

We can't get to the exit status, but we can check whether or not the
compilation's output file (a.out, .o, .i, whatever) was produced.  Would it
be worthwhile to have a new test directive that says to fail the test if the
output file was produced?

Janis
Re: dg-error question
On Thu, Oct 26, 2006 at 06:08:01PM +0200, Tobias Burnus wrote:
> Hello,
>
> I have a novice question to dg-error.  The testcase is for Fortran, but I
> hope to find more dejagnu experts here

I'm not an expert, but as testsuite maintainer I need to continue learning
more about DejaGnu/Tcl/expect.

> I have:
>
> gcc/testsuite/gfortran.dg> cat -n namelist_internal2.f90 | grep dg-
>      1  ! { dg-do compile }
>      2  ! { dg-options "-fall-intrinsics -std=f95" }
>     16  write(str,nml=nam) ! { dg-error "Internal file at .* is incompatible with namelist" }
>     20  read(str,nml=nam) ! { dg-error "Internal file at .* is incompatible with namelist" }
>
> Now, if I run "make -k check" I get (from gfortran.sum):
> PASS: gfortran.dg/namelist_internal2.f90  -O  (test for errors, line 16)
> FAIL: gfortran.dg/namelist_internal2.f90  -O  (test for errors, line 20)
> PASS: gfortran.dg/namelist_internal2.f90  -O  (test for excess errors)
>
> That is: The first test succeeds, but the second one fails.

I haven't verified this, but it looks like the '.*' is matching through the
second (or final identical) error message in the output.

I tried the version of this test in your patch; that one needs to replace
'(1)' with '\\(1\\)', and leave a space before the final '}'.

Janis
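The suspected over-greedy match is easy to reproduce outside DejaGnu. This is a hypothetical Python re-creation (DejaGnu actually uses Tcl's regexp, where '.' matches newlines by default; re.DOTALL mimics that here, and the file name and message text are modeled on the example above): the single greedy pattern swallows both identical messages, so once the line-16 check removes its match there is nothing left for the line-20 check to find.

```python
import re

# Compiler output with two identical error messages on different lines,
# as in the namelist_internal2.f90 example.
output = (
    "file.f90:16: Error: Internal file at (1) is incompatible with namelist\n"
    "file.f90:20: Error: Internal file at (1) is incompatible with namelist\n"
)

pattern = "Internal file at .* is incompatible with namelist"

# With '.' matching newlines (Tcl's default; re.DOTALL here), the greedy
# '.*' runs from the first message all the way into the second one.
m = re.search(pattern, output, re.DOTALL)
print(m.group().count("Internal file"))

# Removing the matched text, as DejaGnu does after each dg-error check,
# leaves nothing behind for the second dg-error.
remaining = output.replace(m.group(), "", 1)
print("Internal file" in remaining)
```

Anchoring the pattern more tightly (or avoiding '.*' across identical messages) keeps each dg-error match confined to its own line.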
Re: bootstrap on powerpc fails
On Tue, Nov 07, 2006 at 11:40:01PM +0100, Eric Botcazou wrote:
> > But note this is with RTL checking enabled (--enable-checking=rtl).
>
> Can anyone refresh my memory: why is RTL checking disabled on the mainline?

Because it takes a LONG time.

Janis
Re: gpl version 3 and gcc
On Wed, Nov 15, 2006 at 01:55:05PM -0800, Ed S. Peschko wrote:
> On Wed, Nov 15, 2006 at 02:14:00PM -0500, Andrew Pinski wrote:
> > >
> > > All,
> > >
> > > So, again, is gcc planning on automatically moving to gpl version 3,
> > > staying at gpl version 2, or having a protracted discussion?  What
> > > happens if some developers decide they want to stay at 2 and others
> > > decide they want to go with 3?
> >
> > We (developers/SC) don't have control over this, the FSF has the control.
> > When the FSF says move over to GPLv3 we will, no questions.  All GNU
> > projects which are copyrighted by the FSF will be the same: glibc,
> > binutils, gdb, etc.
>
> So I gather that the FSF has some sort of property-rights transfer document
> that developers sign in order to make their patches FSF property?  If not,
> how would you force compliance with individual patches from various
> contributors who want to stay at version 2?  Or would you only accept
> patches from people who are happy with version 3?

GCC developers assign copyright to the Free Software Foundation, which is the
copyright holder for all GNU projects; see http://gcc.gnu.org/contribute.html.
This discussion is off-topic for this mailing list.

Janis
Re: how to test multiple warnings?
On Thu, Nov 30, 2006 at 07:25:47PM +, Manuel López-Ibáñez wrote:
> Hi,
>
> PR19978 reports that some overflow warnings are emitted multiple times.
> Like for example,
>
> test.c:6: warning: integer overflow in expression
> test.c:6: warning: integer overflow in expression
> test.c:6: warning: integer overflow in expression
>
> The current testsuite will match any number of those to a single
> { dg-warning }.  I don't know whether this is a known limitation, a bug on
> the testsuite or it just needs some magic.

As discussed on IRC, processing of dg-warning and dg-error is done in code
that's part of the DejaGnu project, and it matches all regular expressions on
the line.

> How could I test that exactly one warning was emitted?

Here's a way to treat duplicate messages as errors; the first test case fails
because it has duplicate messages, the second passes.

Janis

/* { dg-do compile } */
#include <limits.h>
int f (void)
{
  return INT_MAX + 1 - INT_MAX;  /* { dg-bogus "integer overflow in expression.*integer overflow in expression" "duplicate" } */
}

/* { dg-do compile } */
#include <limits.h>
int f (void)
{
  ;  /* { dg-bogus "control reaches end.*control reaches end" "duplicate" } */
}
Re: how to test multiple warnings?
On Mon, Dec 04, 2006 at 07:51:00PM +, Manuel López-Ibáñez wrote:
> Dear Janis,
>
> I am having problems implementing your proposal.  The following testcase
> should fail with current mainline for every dg-bogus.  It actually passes
> perfectly :-(.  I have tried removing the dg-warning tests but then only
> the first dg-bogus fails, while the other dg-bogus pass.  The results are
> also unexpected if you remove only one or two dg-warning.  Any idea of what
> is going on?

Each check for dg-error, dg-warning, or dg-bogus removes the matched text
from the compiler output so it's no longer available to match later tests.
You'll need separate tests for the dg-warning checks and the dg-bogus checks.

Janis
Re: how to test multiple warnings?
On Tue, Dec 05, 2006 at 11:47:48PM +, Manuel López-Ibáñez wrote:
> On 05/12/06, Janis Johnson <[EMAIL PROTECTED]> wrote:
> > On Mon, Dec 04, 2006 at 07:51:00PM +, Manuel López-Ibáñez wrote:
> > > The following testcase should fail with current mainline for every
> > > dg-bogus.  It actually passes perfectly :-(.  I have tried removing the
> > > dg-warning tests but then only the first dg-bogus fails, while the
> > > other dg-bogus pass.  The results are also unexpected if you remove
> > > only one or two dg-warning.  Any idea of what is going on?
> >
> > Each check for dg-error, dg-warning, or dg-bogus removes the matched text
> > from the compiler output so it's no longer available to match later
> > tests.  You'll need separate tests for the dg-warning checks and the
> > dg-bogus checks.
>
> Like I said before "I have tried removing the dg-warning tests but then
> only the first dg-bogus fails while the others pass."  Does this mean that
> a single dg-bogus is matching everything including warnings in other lines
> (and functions)?  That sounds so strange...

Your mailer is removing white space and line endings, so your mail is getting
difficult to understand.

The code in DejaGnu that checks the messages for dg-warning, dg-bogus, and
friends uses 'regsub -all', so yes, one regular expression can match multiple
messages.

To see what's going on, install DejaGnu where you can modify it.  In
share/dejagnu/dg.exp in the install directory look for
'foreach i ${dg-messages}', and uncomment the send_user lines for Before and
After messages for the searches; this will let you see exactly what it's
doing.  If you have questions about installing DejaGnu and using it, ask me
in private email.

Janis
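The effect of 'regsub -all' can be mimicked outside Tcl. This hypothetical Python stand-in (re.subn with no count limit behaves like regsub -all; the message text mirrors the PR19978 example) shows a single pattern matching, and deleting, every copy of a repeated message:

```python
import re

# Three copies of the same warning, as in the PR19978 report.
output = (
    "test.c:6: warning: integer overflow in expression\n"
    "test.c:6: warning: integer overflow in expression\n"
    "test.c:6: warning: integer overflow in expression\n"
)

# 'regsub -all' substitutes every match of the pattern, not just the first;
# re.subn with the default count does the same and reports how many.
remaining, nmatches = re.subn("integer overflow in expression", "", output)
print(nmatches)
```

This is why one dg-warning happily "accounts for" any number of duplicates, and why a separate dg-bogus with a doubled pattern is needed to detect them.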
[RFC] centralizing vector support info in the testsuite
Checks for vector instruction support are spreading throughout the testsuite.
I'd like to pull the basic logic into a single place that can be referenced
wherever it's needed.  What's there now isn't always consistent and there
might be new things we can do if the information is presented in a convenient
way, so I'd like some advice.

The following .exp files currently check whether various targets can generate
and run vector instructions and provide command line options for specific
targets:

  gcc.dg/vmx/vmx.exp
  gcc.dg/vect/vect.exp
  g++.dg/vect/vect.exp
  gfortran.dg/vect/vect.exp
  lib/fortran-torture.exp

The following procedures are used only by other procedures:

  check_effective_target_mpaired_single (for mips)
  check_alpha_max_hw_available
  check_vmx_hw_available

The following effective targets are used in individual tests as well as
within .exp files:

  powerpc_altivec_ok
  vect_cmdline_needed (checks i?86, x86_64, ia64)

I'd like to pull all of the logic into a single procedure I'll call
check_vector_support that will return a list where the first element is

  0  no vector support
  1  can generate but not run vector instructions
  2  can generate and run vector instructions

The second element returned is the command line options needed to generate
vector instructions for the current effective target.

This information can be used in the .exp files that currently do all these
checks themselves, and can be used for new target-independent effective
targets to replace vmx_hw and powerpc_altivec_ok:

  vect_gen (ok to generate vector instructions)
  vect_run (ok to generate and run vector instructions)

The existing proc for vect_cmdline_needed would also use the results of
check_vector_support and would support all targets.  Current handling of
i?86, x86_64, and ia64 is confusing.  Some places use "-msse2" for all of
these, but the procedure check_effective_target_vect_cmdline_needed says no
options are needed for ia64 or for LP64 x86_64 and i?86.

I seem to recall from long ago that some processors support generating, and
possibly running, multiple kinds of vector instructions.  If that is the case
then check_vector_support could return a list of two (possibly empty) lists
of options:

  options that the target compiler can use to generate vector instructions

  options for vector instructions that the test system can run

In either case, "" is not the same as an empty list, and means that vector
instructions are generated by default.  The lists could be used in torture
lists for testing vector code generation for multiple kinds of vector
instructions.

Comments?

Janis
Re: [RFC] centralizing vector support info in the testsuite
On Mon, Dec 18, 2006 at 01:19:03PM +0200, Dorit Nuzman wrote:
> Janis Johnson <[EMAIL PROTECTED]> wrote on 15/12/2006 03:12:44:
> >
> > I seem to recall from long ago that some processors support generating,
> > and possibly running, multiple kinds of vector instructions.
>
> maybe you're thinking of x86 platforms that support mmx, sse, sse2, sse3?
> I think in this case we'd usually want to pick the most powerful/recent
> option (since it's usually a superset of previous options).  I think if
> you're going to have an API that returns a list of options (as you propose
> below) maybe we'd want to have on top of that an API that returns the one
> most relevant option out of that list (sse3 in the example).  For torture -
> the API below might be more useful.  I don't remember if we have other
> targets supporting multiple kinds of vector instructions.
>
> > If that is the case then check_vector_support could return a list of two
> > (possibly empty) lists of options:
> >
> >   options that the target compiler can use to generate vector
> >   instructions
> >
> >   options for vector instructions that the test system can run
> >
> > In either case, "" is not the same as an empty list, and means that
> > vector instructions are generated by default.  The lists could be used
> > in torture lists for testing vector code generation for multiple kinds
> > of vector instructions.

I would only support multiple sets of options for a target if it makes sense
to do that somewhere.  Otherwise the options returned for a target could be
the most recent one that is supported by the test compiler, assembler, and
hardware.  If there is a list returned, then the first set in the list would
be the one to use by default.

Is there a need in any set of tests to cycle through more than one set of
vector options for a target?

Janis
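Dorit's "return the one most relevant option" idea can be sketched in a few lines. This is hypothetical (the proc name, the preference list, and the use of Python rather than the testsuite's Tcl are all invented for illustration); it just encodes that each x86 vector extension is usually a superset of the older ones:

```python
# Most recent/powerful first, mirroring sse3 > sse2 > sse > mmx.
PREFERENCE = ["-msse3", "-msse2", "-msse", "-mmmx"]

def best_vector_option(supported):
    """Return the most powerful option present in `supported`, or None."""
    for opt in PREFERENCE:
        if opt in supported:
            return opt
    return None

print(best_vector_option({"-mmmx", "-msse", "-msse2"}))  # -msse2
```

In the real proposal this would sit on top of check_vector_support, returning the first entry of the options list it produces.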
Re: question from imobilien
On Wed, Dec 20, 2006 at 03:06:34PM +0100, Jan Eissfeld wrote:
> Hi,
>
> PR19978 reports that some overflow warnings are emitted multiple times.
> Like for example,
>
> test.c:6: warning: integer overflow in expression
> test.c:6: warning: integer overflow in expression
> test.c:6: warning: integer overflow in expression
>
> The current testsuite will match any number of those to a single
> { dg-warning }.  I don't know whether this is a known limitation, a bug
> on the testsuite or it just needs some magic.
>
> How could I test that exactly one warning was emitted?

See http://gcc.gnu.org/ml/gcc/2006-12/msg0.html for an ugly solution, and the
rest of the thread for problems with this approach.  The check for getting
the warning must be in a separate test from the check for multiple warnings.

Janis
Re: question from imobilien
On Thu, Dec 21, 2006 at 09:43:26PM +, Manuel López-Ibáñez wrote:
> On 21/12/06, Janis Johnson <[EMAIL PROTECTED]> wrote:
> > On Wed, Dec 20, 2006 at 03:06:34PM +0100, Jan Eissfeld wrote:
> > > Hi,
> > >
> > > PR19978 reports that some overflow warnings are emitted multiple times.
> > > Like for example,
> > >
> > > test.c:6: warning: integer overflow in expression
> > > test.c:6: warning: integer overflow in expression
> > > test.c:6: warning: integer overflow in expression
> > >
> > > The current testsuite will match any number of those to a single
> > > { dg-warning }.  I don't know whether this is a known limitation, a bug
> > > on the testsuite or it just needs some magic.
> > >
> > > How could I test that exactly one warning was emitted?
> >
> > See http://gcc.gnu.org/ml/gcc/2006-12/msg0.html for an ugly solution,
> > and the rest of the thread for problems with this approach.  The check
> > for getting the warning must be in a separate test from the check for
> > multiple warnings.
> >
> > Janis
>
> Or even better, see
> http://gcc.gnu.org/ml/gcc-patches/2006-12/msg00588.html which fixes that
> PR and adds testcases to prevent regressing on this.
>
> It is still awaiting review, though.  ;-)

I hadn't noticed that it's for the same PR!

Janis
Re: running bprob.exp tests in a cross-testing environment
On Thu, Dec 21, 2006 at 10:00:47AM +1100, Ben Elliston wrote:
> On Thu, 2006-12-21 at 09:56 +1100, Ben Elliston wrote:
> > After some digging, I managed to work out why: the gcov runtime code
> > wants to create the .gcda file in the same directory that the object
> > file was created on the build system.  Unless the same directory
> > structure exists on the target, the gcov runtime code just skips writing
> > out the data file on exit.
>
> To be more precise, the gcov runtime first tries to create the required
> path, but this is unlikely to succeed if it requires creating a new
> directory under / (which only root can typically do).  If it cannot create
> the full path before creating the data file, the gcov runtime code will
> just silently fail.

Ben, you understand what's going on here much better than I do, so if you
come up with a patch that works it's pre-approved.  Otherwise I can take a
look in a couple of weeks.

Janis
Re: RFC: Add BID as a configure time option for DFP
On Wed, Jan 10, 2007 at 11:40:46AM -0800, H. J. Lu wrote:
> Both AMD and Intel like to have BID as a configure time option for DFP.
> Intel is planning to contribute a complete BID runtime library, which can
> be used by executables generated by gcc.
>
> As the first step, we'd like to contribute a BID<->DPD library so that BID
> can be used with libdecnumber by executables generated by gcc before the
> complete BID runtime library is ready.
>
> Any comments?

libdecnumber doesn't use DPD (densely packed decimal), it uses the decNumber
format.  Functions in libgcc convert from DPD to decNumber, call into
libdecnumber to do computations, and then convert the result back to DPD.
It's all parameterized in dfp-bit.[ch], so replacing conversions between
decNumber structs and DPD with conversions between decNumber structs and BID
(binary integer decimal) should be straightforward; I don't think there's any
need to convert between BID and DPD to use libdecnumber.

If all x86* targets will use BID then there's no need for a configure option.
Initial support using DPD for x86* was a proof of concept; I doubt that
anyone would care if you replace it with BID support.

Janis
Re: 2007 GCC Developers Summit
On Wed, Jan 24, 2007 at 04:10:18PM -0500, Andrew J. Hutton wrote:
> We would like to invite everyone to read over the Call for Papers for the
> 2007 GCC Developers' Summit located at
> http://www.gccsummit.org/2007/cfp.php and to consider submitting a
> proposal for this year.
>
> This year we're going to be from July 18th to 20th for a change and hope
> that you're all able to make it this year.
>
> Please forward the CFP URL to anyone you feel would be interested in
> attending.

Also think about what kinds of presentations you'd like to hear and encourage
the appropriate people to submit proposals about those topics.

Janis
Re: (OffTopic) trouble registering on www.gccsummit.org
On Thu, Feb 01, 2007 at 11:10:52AM +0100, Basile STARYNKEVITCH wrote:
> Hello All,
>
> Sorry for this off-topic message, but I have some troubles registering on
> https://www.gccsummit.org/2007/login.php and my email to
> [EMAIL PROTECTED] bounced.
>
> My own email is [EMAIL PROTECTED]
>
> Does anyone know who should I contact about the gccsummit.org web site or
> registration system?
>
> Regards, and apologies for this slightly off topic message!  I hope someone
> in charge of the gccsummit.org site would read it.

Mail [EMAIL PROTECTED]

Janis
Re: Error in checking compat.exp
On Tue, Mar 13, 2007 at 09:13:14AM +0200, Revital1 Eres wrote:
> Hello,
>
> I get the following error while running
>   make check-gcc RUNTESTFLAGS="compat.exp"
> with mainline gcc version 4.3.0 20070312 on PPC.
>
> ERROR: tcl error sourcing
> /home/eres/mve_mainline_zero_12_3/gcc/gcc/testsuite/g++.dg/compat/compat.exp.
> ERROR: couldn't open
> "/home/eres/mve_xline_zero_12_3/gcc/gcc/testsuite/g++.dg/compat/abi/bitfield1_main.C":
> no such file or directory

I assume that /home/eres/mve_mainline_zero_12_3/gcc is your source tree; does
the file exist?  If not, I don't know what could be wrong but can investigate.

Janis
Re: XFAILing gcc.c-torture/execute/mayalias-2.c -O3 -g (PR 28834)
On Tue, Mar 13, 2007 at 12:28:22PM -0700, Kazu Hirata wrote:
> Hi Janis,
>
> While PR 28834 stays open, I'm thinking about XFAILing
> gcc.c-torture/execute/mayalias-2.c when it is run with -O3 -g.  However, I
> am not having any luck with writing mayalias-2.x.  I am wondering if you
> could help me with XFAIL.
>
> When I try mayalias-2.x like so:
>
> set torture_eval_before_execute {
>     global compiler_conditional_xfail_data
>     set compiler_conditional_xfail_data {
>         "PR 28834" \
>         { "*-*-*" } \
>         { "-O3" } \
>         { "" }
>     }
> }
> return 0
>
> I get
>
> XPASS: gcc.c-torture/execute/mayalias-2.c execution,  -O3 -fomit-frame-pointer
> FAIL: gcc.c-torture/execute/mayalias-2.c compilation,  -O3 -g (internal compiler error)
>
> That is, I am getting an unintended XPASS for -O3 -fomit-frame-pointer.
> Also, the "-O3 -g" one doesn't show XFAIL even though the options do
> contain -O3.
>
> How do I make gcc.c-torture/execute/mayalias-2.c XFAIL on -O3 -g?

You want the XFAIL to apply to compilation, not execution, and only for
"-O3 -g", not for all uses of -O3.  This one works (surprisingly, because as
Andrew said it's usually not possible to XFAIL an ICE).

set torture_eval_before_compile {
    set compiler_conditional_xfail_data {
        "PR 28834"
        { "*-*-*" }
        { "-O3 -g" }
        { "" }
    }
}
return 0

Janis
Re: Error in checking compat.exp
On Tue, Mar 13, 2007 at 02:22:06PM -0700, Jim Wilson wrote:
> Revital1 Eres wrote:
> > ERROR: tcl error sourcing
> > /home/eres/mve_mainline_zero_12_3/gcc/gcc/testsuite/g++.dg/compat/compat.exp.
> > ERROR: couldn't open
> > "/home/eres/mve_xline_zero_12_3/gcc/gcc/testsuite/g++.dg/compat/abi/bitfield1_main.C":
>
> Note that mainline got changed to xline.  Also note that the directory has
> files bitfield_main.C, bitfield_x.C, and bitfield_y.C.
>
> So it looks like there is a tcl script somewhere to replace "main" with
> "x", which fails if the directory path contains "main" anywhere in it other
> than in the filename at the end.

That is indeed the problem; testsuite/lib/compat.exp contains

    # Set up the names of the other source files.
    regsub "_main.*" $src1 "" base
    regsub ".*/" $base "" base
    regsub "_main" $src1 "_x" src2
    regsub "_main" $src1 "_y" src3

I'll find a way to fix that.

Janis
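The failure mode is reproducible with any first-match substitution. Here's a hypothetical Python stand-in for the Tcl (re.sub with count=1 behaves like plain regsub, which only rewrites the first match), along with a path-component rewrite, the analogue of Tcl's `file dirname`/`file rootname`, that avoids the problem:

```python
import re
import os.path

# The compat.exp bug: the FIRST "_main" in the path is inside the directory
# name "mve_mainline_zero_12_3", not in the test's filename, so a plain
# first-match substitution mangles the directory.
src1 = ("/home/eres/mve_mainline_zero_12_3/gcc/gcc/testsuite/"
        "g++.dg/compat/abi/bitfield1_main.C")

src2 = re.sub("_main", "_x", src1, count=1)
print(src2)  # the directory, not the file, has been rewritten

# Splitting off the final component first means the substitution can only
# ever touch the filename.
d, f = os.path.split(src1)
root, ext = os.path.splitext(f)
src2_fixed = os.path.join(d, root.replace("_main", "_x") + ext)
print(src2_fixed)
```

The first print reproduces the exact "mve_xline" path from the ERROR message above; the second leaves the directory intact and renames only bitfield1_main.C to bitfield1_x.C.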
Re: Error in checking compat.exp
On Tue, Mar 13, 2007 at 02:07:02PM -0800, Janis Johnson wrote:
> On Tue, Mar 13, 2007 at 02:22:06PM -0700, Jim Wilson wrote:
> > Revital1 Eres wrote:
> > > ERROR: tcl error sourcing
> > > /home/eres/mve_mainline_zero_12_3/gcc/gcc/testsuite/g++.dg/compat/compat.exp.
> > > ERROR: couldn't open
> > > "/home/eres/mve_xline_zero_12_3/gcc/gcc/testsuite/g++.dg/compat/abi/bitfield1_main.C":
> >
> > Note that mainline got changed to xline.  Also note that the directory
> > has files bitfield_main.C, bitfield_x.C, and bitfield_y.C.
> >
> > So it looks like there is a tcl script somewhere to replace "main" with
> > "x", which fails if the directory path contains "main" anywhere in it
> > other than in the filename at the end.
>
> That is indeed the problem; testsuite/lib/compat.exp contains
>
>     # Set up the names of the other source files.
>     regsub "_main.*" $src1 "" base
>     regsub ".*/" $base "" base
>     regsub "_main" $src1 "_x" src2
>     regsub "_main" $src1 "_y" src3
>
> I'll find a way to fix that.

Revital, please try this.  I've tested it but know better than to check
things in at the end of the day; I'll post it tomorrow.

Index: gcc/testsuite/lib/compat.exp
===
--- gcc/testsuite/lib/compat.exp    (revision 122875)
+++ gcc/testsuite/lib/compat.exp    (working copy)
@@ -259,10 +259,13 @@
 }
 
 # Set up the names of the other source files.
-regsub "_main.*" $src1 "" base
-regsub ".*/" $base "" base
-regsub "_main" $src1 "_x" src2
-regsub "_main" $src1 "_y" src3
+set dir [file dirname $src1]
+set ext [file extension $src1]
+set base [file rootname $src1]
+set base [string range $base [string length $dir] end]
+regsub "_main" $base "" base
+set src2 "${dir}/${base}_x${ext}"
+set src3 "${dir}/${base}_y${ext}"
 
 # Use the dg-options mechanism to specify extra flags for this test.
 # The extra flags in each file are used to compile that file, and the
Re: XFAILing gcc.c-torture/execute/mayalias-2.c -O3 -g (PR 28834)
On Wed, Mar 14, 2007 at 03:47:57AM +, Joseph S. Myers wrote:
> On Tue, 13 Mar 2007, Andrew Pinski wrote:
>
> > Anyways the best way to fix this is just to fix the bug.  Someone
>
> We should have 0 unexpected FAILs in 4.2.0 on common platforms (in
> particular the primary release criteria ones for the testsuites of the
> languages in the release criteria).  How this is achieved is secondary,
> but if the bug isn't fixed for 4.2.0 the test should be XFAILed - and we
> know from experience that many regressions aren't fixed for releases,
> especially where they were present in many previous releases.
>
> > exposed the regression back in 4.0 time frame, I reported the bug before
> > getting approval for the patch.  They were not willing to fix it so why
> > punish the testcase which obviously is a regression.
>
> It's not punishing the testcase; it's recognising that we have a bug
> tracking system to track regressions and having "expected unexpected
> FAILs" is helpful neither to users wishing to know if their compiler built
> as expected nor to developers glancing over test results to see if they
> seem OK.

I've come to agree with that point of view and I'll look into allowing XFAIL
for tests that ICE.  Torture tests are handled differently, though, and this
particular test can be XFAILed with the example .x file I sent earlier.

Janis
Re: XFAILing gcc.c-torture/execute/mayalias-2.c -O3 -g (PR 28834)
On Thu, Mar 15, 2007 at 04:58:51AM -0400, Hans-Peter Nilsson wrote:
> On Wed, 14 Mar 2007, Joe Buck wrote:
> > If we allow XFAILing tests that ICE, it should be an extremely rare
> > thing.  I worry that once the precedent is set, the number of XFAIL ICEs
> > will go up with time, making it more likely that users will experience
> > compiler crashes.
>
> What's so bad about an ICE compared to e.g. wrong-code?  The latter is
> IMNSHO much much worse.  Is it just the technical matter of xfailing it or
> is there a *logical* reason that I've missed in this discussion and
> elsewhere?

The reason for not supporting XFAIL for an ICE is that a test that was
already XFAILed for failing to compile didn't report a new ICE.  It made
sense at the time.

Janis
Re: Problem with building libgfortran on PPC
On Sun, Mar 18, 2007 at 09:07:32AM -0700, Andrew Pinski wrote:
> On 3/18/07, Victor Kaplansky <[EMAIL PROTECTED]> wrote:
> > I have obtained the same error on my ppc64 yellow dog linux:
> > collect2: ld terminated with signal 11 [Segmentation fault]
> >
> > > I get the following error on PPC while bootstrapping mainline.
> > > Re-runing make I get:
> > > collect2: ld terminated with signal 11 [Segmentation fault]
> > > make[8]: *** [libstdc++.la] Error 1
>
> Usually that means there is a bug in binutil's ld.  It might be better to
> use a real FSF stable release of binutils instead of what the vendor
> (distro) provides you with.

I've been using binutils 2.17 on various distributions of powerpc64-linux
and have had no problem with it.

Janis
Re: GCC 4.2.0 Status Report (2007-04-15)
On Mon, Apr 16, 2007 at 06:36:07PM +0200, Steven Bosscher wrote:
> * Very few people know how to use Janis' scripts, so to encourage people
> to use them, the release manager could write a wiki page with a HOWTO for
> these scripts (or ask someone to do it).  Regression hunting should only
> be easier now, with SVN's atomic commits.  But the easier and more
> accessible you make it for people to use the available tools, the harder
> it gets for people to justify ignoring their bugs to "the rest of us".

The RM can encourage me to do this; I've already been meaning to for a long
time now.  My reghunt scripts have grown into a system that works well for
me, but I'd like to clean them up and document them so that others can use
them.  What I've got now is very different from what I used with CVS.

I'd like at least two volunteers to help me with this cleanup and
documentation effort by using my current scripts on regressions for open PRs
and finding the places that are specific to my environment.  I can either put
what I've got now into contrib/reghunt, or send a tarball to the mailing list
for people to use and check things in after they're generally usable.

One silly thing holding me back is not quite knowing what needs copyrights
and license notices and what doesn't.  Some scripts are large and slightly
clever, others are short and obvious.

Janis
Re: GCC 4.2.0 Status Report (2007-04-15)
On Mon, Apr 16, 2007 at 10:58:13AM -0700, Mark Mitchell wrote:
> Janis Johnson wrote:
> > On Mon, Apr 16, 2007 at 06:36:07PM +0200, Steven Bosscher wrote:
> > > * Very few people know how to use Janis' scripts, so to encourage
> > > people to use them, the release manager could write a wiki page with a
> > > HOWTO for these scripts (or ask someone to do it).  Regression hunting
> > > should only be easier now, with SVN's atomic commits.  But the easier
> > > and more accessible you make it for people to use the available tools,
> > > the harder it gets for people to justify ignoring their bugs to "the
> > > rest of us".
> >
> > The RM can encourage me to do this; I've already been meaning to for a
> > long time now.
>
> You may certainly consider yourself encouraged.  :-)

Gosh, thanks!

> > One silly thing holding me back is not quite knowing what needs
> > copyrights and license notices and what doesn't.  Some scripts are large
> > and slightly clever, others are short and obvious.
>
> For safety sake, we should probably get assignments on them.  I'm not sure
> how hard it is to get IBM to bless contributing the scripts.  If it's
> difficult, but IBM doesn't mind them being made public, perhaps we could
> just put them somewhere on gcc.gnu.org, outside of the official subversion
> tree.

I have IBM permission to contribute them to GCC.  An earlier version for CVS
is in contrib/reghunt with formal FSF copyright and GPL statements.  I've
sent later versions to gcc-patches as a way to get them to particular people
who wanted to try them out.

My inclination is to put full copyright/license statements on the bigger ones
and just "Copyright FSF " on the small ones.

Janis
regression hunt tools
Here's a set of my current regression hunt tools, along with a set of example
configuration files and test scripts and a README file that is more of a
brain dump than adequate documentation.

I'm looking for a few brave souls to try these out on open problem reports
that are known to be regressions and then provide feedback to make the tools
more usable for people other than just me.  The goal is to add them to
contrib/reghunt in the GCC sources.

Comments in the files say they are copyright FSF and covered by GPL v2 or
later.

Enjoy!

Janis

reghunt-20070417.tar.bz2
Description: BZip2 compressed data
Re: CompileFarm and reghunt Was: GCC 4.2.0 Status Report (2007-04-15)
On Mon, Apr 16, 2007 at 10:09:35PM +0200, Laurent GUERBY wrote: > On Mon, 2007-04-16 at 12:00 -0600, Tom Tromey wrote: > > I wonder whether there is a role for the gcc compile farm in this? > > For instance perhaps "someone" could keep a set of builds there and > > provide folks with a simple way to regression-test ... like a shell > > script that takes a .i file, ssh's to the farm, and does a reghunt... ? > > > > I think this would only be worthwhile if the farm has enough disk, and > > if regression hunting is a fairly common activity. > > > > Tom > > We're a bit "short" on the current CompileFarm machines, > we have 5x16GB + 4x32GB (and as shown below it tends to > be used, I have to ping users from time to time to get GB > back :). > > There is enough cpu power in the farm to build and check a version for > each commit (all languages including Ada) on up to two branches (I sent > a message a while ago about that) with a latency of about 8 hours IIRC. > > We might be able to store only part of the compiler, or if this > proves really useful, I could just add a storage unit to the > farm with cheap & large current generation disks (machines are > unfortunately SCSI based). > > As announced a few weeks ago, all official releases are already > installed on the CompileFarm (/n/b01/guerby/release/X.Y.Z/bin with X.Y.Z > in 3.4.6, 4.0.0-4, 4.1.0-2). Regression hunts using saved binaries are useful for some cases but not all. They are useful, for example, for narrowing down the range to search for a front end bug. They are not useful for finding bugs for a specific target, other than the ones for which binaries are saved, or for finding wrong-code bugs, or bugs in functionality that is not enabled by default. 
Unless a compiler is saved for every revision, determining the patch at fault requires either time from an experienced person looking through ChangeLog entries or else an automated hunt for the smaller time period, but since the hunt uses a binary search, it doesn't take that much longer to search for a period of months than it does to search for patches applied in a single day. About one-fourth of the regression hunts that I've run have required some kind of manual intervention because the test wasn't set up correctly, a build failed, or something unexpected happened when the test was run with one of the compilers within the range. No matter how automated the hunts are, they always need at least a small amount of individual attention. A regression hunt setup requires a local copy of the GCC Subversion repository, which takes LOTS of space. If one of the CompileFarm systems already has a copy of the repository, or can get more disk space to handle one, then it would be great to have a regression hunt setup on it as well. The official releases of GCC installed on those machines would be very useful in narrowing down ranges (and for reporting information about regressions in general). It would also help to keep a mainline build from every few months. I've posted my current reghunt tools in separate mail; see http://gcc.gnu.org/ml/gcc/2007-04/msg00635.html. Janis
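The binary search mentioned above is simple enough to sketch. The following is a hypothetical illustration, not code from the reghunt scripts: test_rev and the revision numbers are made-up stand-ins for building GCC at a given revision and running the failing test.

```shell
# Hypothetical sketch of the binary search at the heart of a regression
# hunt.  "test_rev" stands in for building GCC at revision $1 and running
# the failing test; here it just compares against a made-up bad revision.
first_bad=107364
test_rev() { [ "$1" -lt "$first_bad" ]; }   # true while the test still passes

lo=100000   # known-good revision
hi=110000   # known-bad revision
while [ $((hi - lo)) -gt 1 ]; do
  mid=$(( (lo + hi) / 2 ))
  if test_rev "$mid"; then lo=$mid; else hi=$mid; fi
done
echo "first bad revision: $hi"
```

Because each iteration halves the range, widening the search from a single day to several months only costs a handful of extra builds, which is the point made above.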
Re: GCC mini-summit - compiling for a particular architecture
On Sun, Apr 22, 2007 at 04:39:23PM -0700, Joe Buck wrote: > > On Sun, 2007-04-22 at 14:44 +0200, Richard Guenther wrote: > > > At work we use -O3 since it gives 5% performance gain against -O2. > > > profile-feedback has many flags and there is no overview of it in the > > > doc IIRC. Who will use it except GCC developers? Who knows about your > > > advice? > > On Sun, Apr 22, 2007 at 03:22:56PM +0200, Jan Hubicka wrote: > > Well, this is why -fprofile-generate and -fprofile-use were invented. > > Perhaps docs can be improved so people actually discover them. Do you > > have any suggestions? > > Docs could be improved, but this also might be a case where a tutorial > would be needed, to teach users how to use it effectively. > > > (Perhaps a chapter for FDO or subchapter of gcov docs would do?) We could also have examples, with lots of comments, in the testsuite, with references to them in the docs. That way there is code that people can try to see what kind of effect an optimization has on their system. This would also, of course, provide at least minimal testing for more optimizations; last time I looked there were lots of them that are never used in the testsuite. Janis
Re: testsuite execution question
On Fri, Feb 25, 2005 at 08:14:04PM -0800, Steve Kargl wrote: > I would like to write a short program to test the > command line parsing of gfortran. I know I can add > > ! {dg-do run} > > at the top of the program to have dejagnu execute the > a.out file. But, I want to execute "a.out 1 2 3". > Is this possible? I tried looking through gcc.dg and > gfortran.dg directories, but nothing jumped out as the > obvious way to do what I need. > > If you're wondering, the test program would look like > > ! { dg-do run } > ! { dg?? } How to specify "a.out 1 2 3"? > program args > integer i > i = iargc() > if (i /= 3) call abort > end program DejaGnu's definition of ${tool}_load has an optional argument for flags to pass to the test program, but none of the procedures in DejaGnu or in gcc/testsuite/* are set up to pass such flags. It would be fairly straightforward to provide a local version of gfortran_load to intercept calls to the global one, and have it add flags specified with a new test directive to the DejaGnu version of ${tool}_load. That directive could be something like: { dg-program-options options [{ target selector }] } Would something like this be useful for other languages as well, or is Fortran the only one in GCC that has support to process a program's command line? I'm willing to implement something like this if it looks worthwhile. Janis
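If the proposed directive were implemented, the testcase from the question might look like the sketch below. Note that dg-program-options is only the name suggested in the message above, not an existing directive:

```fortran
! { dg-do run }
! { dg-program-options "1 2 3" }   ! hypothetical directive, not yet implemented
program args
  integer i
  i = iargc()
  if (i /= 3) call abort
end program args
```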
Re: testsuite execution question
On Mon, Feb 28, 2005 at 03:59:52PM -0800, Janis Johnson wrote: > On Fri, Feb 25, 2005 at 08:14:04PM -0800, Steve Kargl wrote: > > I would like to write a short program to test the > > command line parsing of gfortran. I know I can add > > > > ! {dg-do run} > > > > at the top of the program to have dejagnu execute the > > a.out file. But, I want to execute "a.out 1 2 3". > > Is this possible? I tried looking through gcc.dg and > > gfortran.dg directories, but nothing jumped out as the > > obvious way to do what I need. > > > > If you're wondering, the test program would look like > > > > ! { dg-do run } > > ! { dg?? } How to specify "a.out 1 2 3"? > > program args > > integer i > > i = iargc() > > if (i /= 3) call abort > > end program > > DejaGnu's definition of ${tool}_load has an optional argument for flags > to pass to the test program, but none of the procedures in DejaGnu or in > gcc/testsuite/* are set up to pass such flags. It would be fairly > straightforward to provide a local version of gfortran_load to intercept > calls to the global one, and have it add flags specified with a new test > directive to the DejaGnu version of ${tool}_load. That directive could > be something like: > > { dg-program-options options [{ target selector }] } > > Would something like this be useful for other languages as well, or is > Fortran the only one in GCC that has support to process a program's > command line? > > I'm willing to implement something like this if it looks worthwhile. It's supposed to be possible to drop in replacements to DejaGnu in the GCC testsuite; do other test frameworks of interest handle passing arguments to the test program in a way that could support this? (Sorry for talking to myself here.) Janis
Re: No way to scan-tree-dump .i01.cgraph?
On Mon, Feb 28, 2005 at 10:23:56AM -0700, Jeffrey A Law wrote: > On Mon, 2005-02-28 at 17:08 +0100, Richard Guenther wrote: > > Hi! > > > > It seems the current dg infrastructure does not support scanning > > tree-dumps dumped via -fdump-ipa-XXX because they are labeled > > differently. I worked around this by replacing > > > > set output_file "[glob [file tail $testcase].t??.[lindex $args 1]]" > > > > with > > > > set output_file "[glob [file tail $testcase].???.[lindex $args 1]]" > > > > but I'm not sure if this is the right way. > It's as good as any. If you wanted to solve an even bigger problem, > find a clean way that we can delete the bloody files. I got lost in > the maze of tcl/expect code when I tried :( I also find it annoying that the dump files aren't cleaned up. Should the dump files for failing tests be left, or would it be OK to remove all of them? > > Also I need to do more complex matching like the number X in line > > matching PATTERN should be the same as Y in line matching PATTERN2. > > Is there a way to do this with dg? Or is it better to output > > an extra line to the dump file during compile for the condition > > I want to check? > I'm not immediately aware of a way to do this. One of the major > limitations of the framework is the inability to do anything other > than scan for simple patterns and count how often the pattern > occurs. > jeff Adding extra lines in dump files for use by tests seems good to me, as long as there are comments in the code explaining what it's for so it doesn't change. Janis
Re: testsuite execution question
On Mon, Feb 28, 2005 at 08:45:17PM -0500, Daniel Jacobowitz wrote: > On Mon, Feb 28, 2005 at 04:14:12PM -0800, Janis Johnson wrote: > > > DejaGnu's definition of ${tool}_load has an optional argument for flags > > > to pass to the test program, but none of the procedures in DejaGnu or in > > > gcc/testsuite/* are set up to pass such flags. It would be fairly > > > straightforward to provide a local version of gfortran_load to intercept > > > calls to the global one, and have it add flags specified with a new test > > > directive to the DejaGnu version of ${tool}_load. That directive could > > > be something like: > > > > > > { dg-program-options options [{ target selector }] } > > > > > > Would something like this be useful for other languages as well, or is > > > Fortran the only one in GCC that has support to process a program's > > > command line? > > > > > > I'm willing to implement something like this if it looks worthwhile. > > > > It's supposed to be possible to drop in replacements to DejaGnu in the > > GCC testsuite; do other test frameworks of interest handle passing > > arguments to the test program in a way that could support this? (Sorry > > for talking to myself here.) > > I don't think that's the concern here - it's more a matter of whether > the target, and DejaGNU, support this. Lots of embedded targets seem > to have trouble with it. Take a look at "noargs" in the DejaGNU board > files for a couple of examples, IIRC. GDB jumps through some hoops to > test this, and gets it wrong in a bunch of places too. Is command line processing relevant for embedded targets? (I have no idea.) Tests that pass options to the test program could be skipped for embedded targets and for other kinds of testing where it isn't reliable. The dg-program-options directive could warn when it's used in an environment for which it's not supported. Janis
Re: No way to scan-tree-dump .i01.cgraph?
On Tue, Mar 01, 2005 at 01:29:48PM -0500, Andrew Pinski wrote: > > On Mar 1, 2005, at 1:25 PM, Janis Johnson wrote: > > >I also find it annoying that the dump files aren't cleaned up. Should > >the dump files for failing tests be left, or would it be OK to remove > >all of them? > > I find it even more annoying as on targets which uses case insensitive > storing > of files, causes some of the C++ testcases to fail because the file > names > will only differ in case. (Powerpc-darwin by default uses HFS which is > case > insensitive). So fix the names of the test files so the names of generated files won't conflict. In this case it's a problem because the generated files aren't removed, but there are also tests that fail only when two sets of tests happen to run at the same time. Janis
Re: No way to scan-tree-dump .i01.cgraph?
On Wed, Mar 02, 2005 at 11:41:13AM -0700, Jeffrey A Law wrote: > On Tue, 2005-03-01 at 14:09 -0500, Diego Novillo wrote: > > Janis Johnson wrote: > > > > > I also find it annoying that the dump files aren't cleaned up. Should > > > the dump files for failing tests be left, or would it be OK to remove > > > all of them? > > > > > Much as I don't use the failing executables left behind by the > > testsuite, I wouldn't use the dump files. They can be easily recreated. > > > > But, I can see valid reasons to wanting dump files for failing tests be > > left behind. The dump files for successful should be removed, though. > > The problem with leaving failed dump files behind is that they can > interfere with a following run of the testsuite (particularly if a pass > is added/subtracted). I would vote strongly that the dump files for > failing tests be removed. I'm working on procs to be used in dg-final directives as: { dg-final { cleanup-tree-dump "suffix" } } { dg-final { cleanup-saved-temps } } { dg-final { cleanup-coverage-files } } These are for use in each test that generates files that are currently left cluttering up the build's gcc/testsuite directory. I've also got changes to a couple hundred tests to use these new test directives. Each proc removes files that were generated for the current test. Tests that generate extra files already use dg-options to request those files, so adding another test directive to clean them up doesn't seem like an unreasonable burden. Janis
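A test that requests a dump would then clean up after itself along these lines (a schematic example; the cleanup proc names are the ones proposed above, and the test body is illustrative):

```c
/* { dg-options "-O2 -fdump-tree-optimized" } */
int foo (void) { return 1; }
/* { dg-final { scan-tree-dump "return 1" "optimized" } } */
/* { dg-final { cleanup-tree-dump "optimized" } } */
```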
Re: testsuite execution question
On Tue, Mar 01, 2005 at 04:35:54PM -0500, Daniel Jacobowitz wrote: > On Tue, Mar 01, 2005 at 10:29:45AM -0800, Janis Johnson wrote: > > Is command line processing relevant for embedded targets? (I have no > > idea.) Tests that pass options to the test program could be skipped > > for embedded targets and for other kinds of testing where it isn't > > reliable. The dg-program-options directive could warn when it's used > > in an environment for which it's not supported. > > Sounds good to me, at least in theory. Any ideas on how the testsuite can decide for which targets it supports command line arguments? Would it be reasonable to support them if the target is not remote? Janis
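One possible guard, sketched here as an assumption rather than tested code, is DejaGnu's is_remote proc; a dg-program-options implementation might bail out for remote targets along these lines:

```tcl
# Hypothetical guard for a dg-program-options implementation: only pass
# program arguments when testing natively, as suggested above.  $name is
# assumed to hold the test name in the surrounding proc.
if { [is_remote target] } {
    unsupported "$name: passing arguments to the test program"
    return
}
```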
Re: documentation on writing testcases?
On Fri, Mar 11, 2005 at 11:52:25AM +, Joseph S. Myers wrote: > On Fri, 11 Mar 2005, Per Bothner wrote: > > > So the immediate question is: how should the testcase be fixed? > > Specify a line number in the second dg-error to tell dejagnu what line to > expect the error on. > > { dg-error "expected regexp" "test name" { target *-*-* } line-number } > or > { dg-error "expected regexp" "test name" { xfail *-*-* } line-number } > > > The general frustration is: where is dg-error documented? > > It ought to be in the dejagnu manual (i.e., that's where documentation > should best be contributed) since dg-error is part of base dejagnu. There's some information about test directives in the GCC Internals manual in the Testsuites section. I couldn't find much about these things in the DejaGnu documentation. Janis
Re: Non-bootstrap build status reports
On Sat, Mar 12, 2005 at 11:55:03PM -0600, Aaron W. LaFramboise wrote: > Is there a reason why non-bootstrap build status reports are not > archived? For example, for the many targets that are only used in cross > configurations, it would be nice to see if they are working. First off, let me apologize for being way behind on updating the build status reports. I'm willing to add entries for cross builds as long as there is information in the report about the build, host, and target triples. If the build report mentions necessary tweaks, the entry notes that. > Also, it might be nice to have a record of negative build reports. For > instance, the build status page might have a section for negative builds > listing reports of failed builds that might serve as a quick means to > determine the health of a broken port. Bug reports are better, although I'm willing to consider this. Janis
Re: Strange build errors compiling SPEC with mainline
On Fri, Mar 18, 2005 at 03:02:53PM +0100, Michael Matz wrote: > Hi, > > On Fri, 18 Mar 2005, Diego Novillo wrote: > > > Starting around 2005-03-17, I haven't been able to compile > > several SPEC tests with mainline. Has there been any change in > > the pre-processor that might explain these errors? > > > > I'm pretty sure my installation is correct because this worked > > until 2005-03-15, the system header files are all there and I get > > no such errors from the runs with tree-cleanup-branch (merged > > 2005-02-23). > > > > Any ideas? > > > > Thanks. Diego. > > > > - > > /home/cygnus/dnovillo/perf/sbox/gcc/local.i686/inst.tobiano/bin/gcc -c -o > > bits.o -O3 -march=i686 bits.c > > Error from make 'specmake build 2> make.err | tee make.out': > > In file included from gzip.h:37, > > from bits.c:55: > > /usr/include/stdio.h:34:21: error: stddef.h: No such file or directory > > stddef.h is a header installed by GCC into > lib/gcc//4.1.0/include/stddef.h If it can't be found it means that > it's not installed there, which might be due to Zack's changes. You should > look if you have a 'const' directory instead of the 4.1.0 one. If yes, > then this is the problem, and Zack's latest patches fix it. I haven't yet tried the patch but I get similar errors on powerpc64-linux compiling a hello world program with mainline for the last two days, and the directories are named 'const'. Janis
Re: dejagnu help needed - tests get confused by column numbers
On Sun, Mar 27, 2005 at 01:07:09PM -0800, Mike Stump wrote: > On Sunday, March 27, 2005, at 11:58 AM, Per Bothner wrote: > >Now I'm willing to fix those tests by adding -fno-show-column where > >necessary > > Ick. I favor adding it unconditionally to compile lines over this. > See -fmessage-length code (gcc/testsuite/lib/g++.exp) for hints. And > even that, I'm not sure I favor. For the short term, we can add it, as > otherwise I suspect we'd create a requirement of a new dejagnu release > and to use it, which I favor even less. dejagnu should also be > enhanced to handle column numbers, even if we put in code to add the > option. There are several workarounds in the GCC testsuite for things that could be better handled in DejaGnu if the next release of GCC could require a new DejaGnu version. That might be the right thing to do for GCC 4.1. Ben Elliston is the DejaGnu maintainer; he's expressed interest in the past in modifying DejaGnu to support GCC's needs. Janis
Re: Obsoleting c4x last minute for 4.0
On Thu, Apr 07, 2005 at 11:20:46PM +0200, Björn Haase wrote: > > The reason why I have stopped posting the test results is that we are > currently having 481 failures for the AVR target and the existing real bugs > are completely hidden behind the huge number of failures due to issues like > "test needs trampolines but does not communicate it" or "test case assumes > int to be 32 bit". > IMHO regularly posting the same huge bug list was not useful at all unless > one could distinguish between *real* and *pseudo* failures. > > I had started to adapt the testsuite by adding functionality for > communicating > that a test case assumes int to be 32 bit and by means to switch off all > tests that require trampolines. > > Unfortunately, I did not get any response to the patch I had posted to > gcc-patches a couple of months ago implementing additional effective target > keywords :-(. A useful reworking of dozens of the affected test cases > requires that new effective targets are present and that their names are > agreed upon. Since I did not get any response on it, I did refrain to > continue to work on testsuite adaptions so far. I should have done that, I must have missed seeing your patch. I'll look for it now in the archives. Janis
Re: "make bootstrap" for cross builds
On Fri, Apr 15, 2005 at 01:23:39AM -0400, Andrew Pinski wrote: > > On Apr 15, 2005, at 1:19 AM, Ranjit Mathew wrote: > > >Hi, > > > > I think "make bootstrap" does not make sense for > >cross builds. We however seem to allow it but > >fail in a weird way later on (as on mainline). > >I think this should not be allowed. > > > >I discovered this when I mistakenly typed > >"make bootstrap" out of habit on a cross build. > > Huh? there is one case where this does make sense, take for example: > ppc64-linux-gnu, you were running the ppc-linux-gnu compiled GCC and > you need > a 64bit compatible one. You can compile with --with-cpu=default32 and > still > have a "cross" compiler but can still do native compiling, it is > a weird case > but it does show up. I always set build, host, and target to powerpc64-linux when bootstrapping a biarch compiler, and use --with-cpu=default32 to generate 32-bit binaries by default. I seem to recall running into problems when they weren't all the same for bootstraps. Janis
Re: Build of GCC 4.0.0 successful
On Fri, Apr 22, 2005 at 02:36:38PM -0400, William Beebe wrote: > I've bootstrap built GCC 4.0.0 on Fedora Core 3. > > [EMAIL PROTECTED] ~]$ gcc -v > Using built-in specs. > Target: athlon-fedora-linux What is the output of config.guess?
Re: Ada test suite
On Thu, Apr 28, 2005 at 01:05:29PM +0200, Laurent GUERBY wrote: > On Thu, 2005-04-28 at 09:45 +0200, Florian Weimer wrote: > > Some time ago, someone posted a patch which provided beginnings of a > > general-purpose Ada test suite infrastructure (in addition to the > > current ACATS tests, which cannot be used for regression tests). The > > patch was not integrated, and I can't find it at the moment. 8-( > > > > Does anybody know which patch I'm talking about? > > http://gcc.gnu.org/bugzilla/show_bug.cgi?id=18692 > http://gcc.gnu.org/ml/gcc-patches/2004-11/msg01862.html > > Plus this: > > [Ada] Run ACATS tests through an expect script > http://gcc.gnu.org/ml/gcc-patches/2004-11/msg02484.html > http://gcc.gnu.org/ml/gcc-patches/2004-12/msg00166.html > > If Arnaud doesn't feel knowledgeable enough to review/approve > dejagnu code, why don't we name Jim maintainer for this? > > That would at least avoid having an infrastructure patch stuck > for five months without review :). > > Laurent > > PS: I know nothing about dejagnu either. I'll look at the DejaGnu aspects of the patch and comment on them, but someone involved with Ada should maintain it. Janis
Re: GCC 3.4.4 RC2
On Sun, May 15, 2005 at 08:59:48AM -0700, Mark Mitchell wrote: > Joseph S. Myers wrote: > > >It also looks like this patch has been backported to 3.4 branch but not to > >4.0 branch? Because 4.0 branch builds are still creating > >libstdc++-abi.sum, while 3.4 branch builds no longer do, the ABI tests > >having been subsumed in the main libstdc++.sum for mainline and 3.4 > >branch. > > Yes, I asked Janis to test each branch separately, because the patches > were separate. She has confirmed that the 4.0 version of the patch > works OK. So, that patch will go on 4.0 today, along with the > additional patch Andreas found. I hadn't noticed originally but on powerpc64-linux with 3.4.4 RC2 and with the 3.4 branch, the results for libstdc++-v3 show only one run of the tests for "unix", not two for "unix/-m32" and "unix/-m64", and the results are actually for check-abi. The leftover temp files in the build directory show that the library tests were actually run, just not reported. I can't tell if they were run for both -m32 and -m64. On the 4.0 branch, check-abi is not being run (or not reported?) but the libstdc++ tests are being run and reported for -m32 and -m64 as expected. I'm very sorry I didn't notice this earlier. Janis
Re: Need help creating a small test case for g++ 4.0.0 bug
On Sat, May 14, 2005 at 12:16:54PM +1000, Paul C. Leopardi wrote: > Hi all, > I originally posted these messages to gcc-help, but had no reply, so I am > re-posting links to them here. > > I think I have found a bug in g++ 4.0.0, but need help in reporting it. > Maintainers like their bug reports to include short test cases, but I don't > know how to generate a short test case involving inlining. I discovered the > original problem by compiling GluCat ( http://glucat.sf.net ) and the > preprocessor output from a short GluCat test program contains over 66 000 > lines of libstdc++, uBLAS and Glucat code. > > Can anyone help, or should I just file a bug report using the huge test case? The information in http://gcc.gnu.org/bugs/minimize.html might help. Janis
Re: updating /testsuite/gcc.misc-tests
On Mon, May 16, 2005 at 03:18:28PM -0700, Zack Weinberg wrote: > > No, the instability in test names is a minor price to pay for having > less custom Tcl cruft. > > You want to talk to Janis Johnson <[EMAIL PROTECTED]>, she's the > testsuite maintainer these days. Yes, feel free to send questions to this list or, if you prefer, to contact me directly about getting started. Janis Johnson IBM Linux Technology Center
powerpc64-linux bootstrap failure
Mainline bootstrap fails on powerpc64-linux with: /home/gccbuild/gcc_mline_anoncvs/gcc/libjava/jni.cc: In function 'void* _Jv_LookupJNIMethod(java::lang::Class*, _Jv_Utf8Const*, _Jv_Utf8Const*, int)': /home/gccbuild/gcc_mline_anoncvs/gcc/libjava/jni.cc:2141: error: Statement marked for throw, but doesn't. # VUSE ; D.27155_71 = D.15057; /home/gccbuild/gcc_mline_anoncvs/gcc/libjava/jni.cc:2141: internal compiler error: verify_stmts failed. Please submit a full bug report, with preprocessed source if appropriate. See <http://gcc.gnu.org/bugs.html> for instructions. A regression hunt (automatically kicked off for a bootstrap failure!) identifies the following patch from hubicka: http://gcc.gnu.org/ml/gcc-cvs/2005-05/msg00805.html Janis
Re: Any docs about gcov impl change from 3.3 to 3.4
On Thu, Jun 09, 2005 at 05:39:08PM +0800, Fei, Fei wrote: > Also I want to have some document about format of foo.gcda and foo.gcno > files. See the comments in gcc/gcov-io.h. Janis
Re: Use of check_vect() in vectorizer testsuite
On Thu, Jun 09, 2005 at 08:29:21AM -0700, Devang Patel wrote: > > On Jun 9, 2005, at 8:24 AM, Giovanni Bajo wrote: > > >So, the point is that you cannot select between compile-time/run- > >time based > >on a target triplet check, at least for this target. What do you > >suggest? > >All the other tests use check_vect() exactly for this reason, as > >far as I > >can see, so it looks to me that the sensible thing to do is to use > >check_vect there as well. > > hmm.. that means all tests need to use check_vect(). I am going offline > now for the day, however consult with Dorit and/or Janis and feel free > to update these tests appropriately for your platform. The vect tests use 'run' or 'compile' as the dg-do action based on checks in vect.exp. For powerpc and alpha this is based on a check of whether hardware support is available, via check_vmx_hw_available and check_alpha_max_hw_available. A few tests use check_vect at runtime instead because of limitations of the test directives. It sounds as if there should be a check in target-supports.exp for SSE2 support that determines whether the default test action is 'run' or 'compile' for i686 targets. Janis
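A check along those lines might be modeled on the existing procs in target-supports.exp. The following is only a sketch: the proc name is an assumption, and check_runtime_nocache is a helper from later versions of target-supports.exp that may not match the infrastructure of that era.

```tcl
# Hypothetical effective-target check for SSE2 hardware, modeled on
# check_vmx_hw_available: compile and run a small program containing an
# SSE2 instruction and see whether it executes successfully.
proc check_sse2_hw_available { } {
    return [check_runtime_nocache sse2_hw_available {
        int main () { __asm__ __volatile__ ("xorpd %xmm0,%xmm0"); return 0; }
    } "-msse2"]
}
```

vect.exp could then use the result to pick 'run' or 'compile' as the default dg-do action for i686 targets, as it already does for powerpc and alpha.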
Re: 4.0.1 build failure on powerpc64-linux
On Mon, Jul 18, 2005 at 12:53:01PM +0200, Karel Gardas wrote: > > I'm trying to build 4.0.1 release on powerpc64-linux, but without success > so far, since build fails with: > > I've configured it with: > ../gcc-4.0.1/configure --prefix=$HOME/usr/local/gcc-4.0.1 --enable-shared > --enable-threads --enable-languages=c++ --disable-checking > --enable-__cxa_atexit This won't work unless the default compiler and binutils generate 64-bit code by default. I build biarch compilers that default to -m32 with "--build=powerpc64-linux --host=powerpc64-linux --target=powerpc64-linux --with-cpu=default32". > Also, http://gcc.gnu.org/install/specific.html#powerpc-x-linux-gnu notes > that binutils 2.15 are required, which seems to be available on this > system (Debian 3.1/ppc64): I've been using binutils 2.16, but can't remember specific problems with earlier versions. Have you successfully built earlier versions for this target? Janis
Re: -fprofile-generate and -fprofile-use
On Wed, Jul 20, 2005 at 10:45:01AM -0700, girish vaitheeswaran wrote: > > --- Steven Bosscher <[EMAIL PROTECTED]> wrote: > > > > > On Wednesday 20 July 2005 18:53, girish vaitheeswaran wrote: > > > > I am seeing a 20% slowdown with feedback optimization. > > > > Does anyone have any thoughts on this. > > > > > > My first thought is that you should probably first > > > tell what compiler > > > you are using. > > I am using gcc 3.4.3 > -girish Which platform? I've seen slower code for profile-directed optimizations on powerpc64-linux with GCC 4.0 and mainline. It's a bug, but I haven't looked into it enough to provide a small test case for a problem report. Janis
Re: Middle-end and optimization regressions: what should we do?
On Thu, Jul 28, 2005 at 10:41:48AM -0700, Steve Kargl wrote: > On Thu, Jul 28, 2005 at 07:26:22PM +0200, François-Xavier Coudert wrote: > > > > PR 22619 and PR 22509 are two examples of recent 4.1 regressions that > > showed up in gfortran, due to middle-end or optimization bugs (only > > happen at -O3). Since these are regressions, they should be treated > > before a long time passes, but since both source codes are Fortran, I > > guess people don't (and won't) want to look at them. > > > > How can we help here? Is there a way to make gfortran output a > > complete GIMPLE tree, that could be used for middle-end hackers to > > determine where the problem is? Or are we doomed to a dichotomy to > > know which patch caused these regressions? > > These types of regressions have essentially halted my testing > and development on gfortran because I usually try to identify > the exact ChangeLog entry associated with the problem. This > typically involves a binary search for the problem with a > bootstrap in a clean directory for each "cvs update -D ". In case you're not already aware of them, see contrib/reghunt and http://gcc.gnu.org/bugs/reghunt.html. Janis
Re: -fprofile-generate and -fprofile-use
On Thu, Sep 01, 2005 at 11:45:35PM +0200, Steven Bosscher wrote: > On Thursday 01 September 2005 23:19, girish vaitheeswaran wrote: > > Sorry I still did not follow. This is what I > > understood. During Feedback optimization apart from > > the -fprofile-generate, one needs to turn on > > -fmove-loop-invariants. > > You don't "need to". It just might help iff you are using a gcc 4.1 > based compiler. > > > However this option is not > > recognized by the gcc 3.4.4 or 3.4.3 compilers. What > > am I missing? > > You are missing that > 1) this whole thread does not concern gcc 3.4.x; and > 2) the option -fmove-loop-invariants does not exist in 3.4.x. Girish started this thread about problems he is seeing with GCC 3.4.3 (see http://gcc.gnu.org/ml/gcc/2005-07/msg00866.html). Others of us chimed in about similar issues with later versions. Suggestions for avoiding the problems have been about those later versions, not the version he is using. Janis
Re: Language Changes in Bug-fix Releases?
On Wed, Sep 07, 2005 at 09:55:29PM +0200, Richard B. Kreckel wrote: > On 7 Sep 2005, Gabriel Dos Reis wrote: > > Mike Stump <[EMAIL PROTECTED]> writes: > > | I'll echo the generalized request that we try and avoid tightenings > > | on other than x.y.0 releases. > > > > I hear you. In this specific case, it worths remembering people that > > the issue is not just an accept-invalid that was turned into > > reject-invalid, but wrong-code generation (in the sense that > > wrong-code was being generated for *valid* program) that was fixed. > > I'm unable to find which wrong-code generation PR was fixed by reading > this thread. That applies to any of the two examples I posted. > > Anyway, as I mentioned: If this broken code was a collateral damage of a > really serious bug, then it would be foolish to complain. It's just that > I'm having difficulties imagining how accepting a friend declaration as a > forward declaration (which by the way worked since at least GCC 2.7.x) can > make your code accidentally fire that ballistic rocket. (If it really > can, then you're having a truck load of other problems besides code > quality.) In the hopes that it will help the discussion I ran regression hunts on the two test cases. The first test: struct foo { friend class bar; void screw (bar&); }; is rejected starting with this 4.0 patch: http://gcc.gnu.org/ml/gcc-cvs/2005-05/msg01007.html The second test: struct lala { void f () const {} }; struct lulu { template <class T> lulu (void (T::*)()) {} }; lulu froobrazzz(&lala::f); is rejected starting with this 4.0 patch: http://gcc.gnu.org/ml/gcc-cvs/2005-08/msg00384.html Janis
Re: GCC 4.0.2 RC2
On Sun, Sep 18, 2005 at 09:41:54AM -0700, Mark Mitchell wrote: > Please test, post test results to gcc-testresults, and send me an email > pointing at the results. OK for powerpc64-unknown-linux-gnu: http://gcc.gnu.org/ml/gcc-testresults/2005-09/msg00942.html Janis
Re: Moving to subversion, gonna eat me a lot of peaches
On Tue, Oct 04, 2005 at 08:46:20AM +1000, Ben Elliston wrote: > Daniel Berlin wrote: > > >BJE has converted most of the client side scripts in the contrib > >directory. I have to see what is left and convert the rest. > > I looked pretty carefully through every file in that directory. You should > find that it is all taken care of. I'm working on regression hunt scripts that search by patch rather than by date and hope to have those working with Subversion and ready to contribute by the time we switch over. Janis
Re: gcc 4.1 FAIL: gfortran.dg/large_integer_kind_1.f90 on sparc/sparc64 linux...
On Tue, Oct 04, 2005 at 12:43:35PM +0200, FX Coudert wrote: > >is there anything I can provide you with to have a better guess? I'm > >definitely willing to debug if you direct me... > > Unfortunately, I think we need a dejagnu expert here, I have no idea how > to debug these things... > > If nobody can provide help in the next few days, please file a bug-report. > > Thanks for your help, > FX The result of check_effective_target_fortran_large_int is cached after it is first used for running tests for 64-bit code, and that result is still used for running the tests for 32-bit code when the result would be different. There's a similar problem in my tests for powerpc64-linux where the 32-bit result is used for 64-bit testing and all of those tests are marked unsupported. Janis
Re: gcc 4.1 FAIL: gfortran.dg/large_integer_kind_1.f90 on sparc/sparc64 linux...
On Tue, Oct 04, 2005 at 01:16:58PM -0700, Janis Johnson wrote:
> On Tue, Oct 04, 2005 at 12:43:35PM +0200, FX Coudert wrote:
> > > is there anything I can provide you with to have a better guess? I'm
> > > definitely willing to debug if you direct me...
> >
> > Unfortunately, I think we need a dejagnu expert here, I have no idea how
> > to debug these things...
> >
> > If nobody can provide help in the next few days, please file a bug report.
> >
> > Thanks for your help,
> > FX
>
> The result of check_effective_target_fortran_large_int is cached after
> it is first used for running tests for 64-bit code, and that result is
> still used for running the tests for 32-bit code when the result would
> be different.  There's a similar problem in my tests for powerpc64-linux,
> where the 32-bit result is used for 64-bit testing and all of those
> tests are marked unsupported.

I forgot to mention that I'll fix this.

Janis
Re: DejaGNU test case assistance please?
On Fri, Oct 07, 2005 at 01:32:29AM -0700, Kean Johnston wrote:
> Is there a way to exclude specific line tests based on
> target switches? Something like dg-skip-if? Or perhaps
> that's the right thing to use (but all the examples I
> have seen seem to skip the entire test case).

You're in luck!  dg-warning and similar directives can be skipped or
xfailed for particular targets, but those don't take options into
account.  There is, however, an effective-target keyword for fpic.

The directives used in the GCC testsuite, along with effective-target
keywords and target/xfail selectors, are documented at
http://gcc.gnu.org/onlinedocs/gccint/Test-Directives.html.

> For example, in gcc.dg/assign-warn-3.c, how would I
> ignore the check for a warning if -fPIC is used?
>
> Any help greatly appreciated.

Something like

  { dg-warning "regexp" "" { target { ! fpic } } }

If it should only be ignored for fpic for particular targets you can
specify that in the target selector.

Janis
Re: DejaGNU test case assistance please?
On Fri, Oct 07, 2005 at 12:06:32PM -0700, Kean Johnston wrote:
> > You're in luck!  dg-warning and similar directives can be skipped or
> > xfailed for particular targets, but those don't take options into
> > account.  There is, however, an effective-target keyword for fpic.
>
> Ok, I'll give that a whirl. But what if I needed to skip the test
> based on some other command line option? Intuitively, I would want
> to use dg-skip-if or dg-xfail-if, which provide a more generalized
> approach to command line checking and don't rely on that special
> target.

Sorry, that's not supported.

> > The directives used in the GCC testsuite, along with effective-target
> > keywords and target/xfail selectors, are documented at
> > http://gcc.gnu.org/onlinedocs/gccint/Test-Directives.html.
>
> I read that carefully before asking the question, and it is very
> unclear (to me, a non-DG head) what the scope of some of those
> directives are. For example, dg-skip-if seems to apply to the
> entire test case, whereas things like dg-warning can appear on
> a line-by-line basis. Or perhaps I'm just assuming that, and they
> can in fact be used on a line-by-line test basis.

What would make it more clear?  I could add a paragraph before the list
of directives saying that dg-error, dg-warning, and dg-bogus apply to a
single line and that others apply to the entire test, or could list
them separately, preceded by text that they apply to a single line.
This stuff is confusing and it would be nice for the documentation to
make it less so.

> This is purely for my education's sake (the fpic target you
> mentioned will suffice for the specific case I care about), but
> if I wanted to, say, xfail the test if -mfoo was specified on the
> command line, could I have something like:
>
>   foo(); /* dg-warning "regexp" { ! dg-skip-if { "" { i?86-*-* } { "-mfoo" } { "" } } } */
>
> The tcl syntax makes my head hurt, so if that's wrong and you can
> show me a generalized way to do this type of thing I would be
> very grateful. Thank you Janis!

No, there's no way to do that.  It took lots of fighting with TCL and
DejaGnu (and a tiny recursive-descent parser in TCL!) to get the
target/xfail selectors to work, and it would be difficult to support
the option lists for dg-warning and friends since those are defined
within DejaGnu.  If there's enough demand for it, though, anything is
possible.

Janis
Re: A couple more subversion notes
On Thu, Oct 20, 2005 at 06:15:38PM -0400, Richard Kenner wrote: > There already IS real documentation, and it's very good. > > http://svnbook.red-bean.com/ > > Actually, I just went to that site and the latest printable (i.e., PDF) > version I can find there is for version 1.1. Is that going to be good enough? I prefer real books to online docs, so I paid a bunch of money for the book _Version_Control_with_Subversion_, printed in October 2004, which is the same as, or perhaps earlier than, what's available at no cost at the URL listed above. So far it has answered all of my questions, with the exception of the networking issues that have been answered here or on IRC. The book is exceptionally well-written. In general I dislike having to move to new tools, but learning Subversion has been painless. Updating my regression hunt setup has been easy since with Subversion it is straightforward to do the operations that were difficult or error-prone with CVS, like getting sources for a branch as of a particular date or revision. Janis
regression hunt setup using Subversion
Here's my current regression hunt setup using Subversion in case anyone
would like to try it out.  I plan to update contrib/reghunt.

The regression hunting script in contrib/ is reg_search, which starts
with two dates.  The new script is reg-hunt, which starts with two
identifiers that index entries in a file, with each entry also including
the SVN revision number.  I've used that script with CVS as well, with
each file entry having information about a patch from the gcc-cvs
mailing list archive.  Setting up that file was pretty awful, but for
Subversion it's trivial.

Unlike what's currently in contrib/reghunt, the tarball has everything
needed for regression hunts except an rsync copy of the repository.  It
includes files for several examples that I use for testing changes to my
setup on powerpc64-unknown-linux-gnu and i686-pc-linux-gnu.

Happy hunting!

Janis

[Attachment: reghunt.20051020.tar.bz2]
Re: regression hunt setup using Subversion
On Fri, Oct 21, 2005 at 03:14:47PM -0400, Andrew Pinski wrote: > > On Oct 21, 2005, at 3:11 PM, Joseph S. Myers wrote: > > >The use of > > > >ncpu=`grep '^processor' /proc/cpuinfo | wc -l` > > > >seems Linux-specific; this looks like it should be in gcc-svn-env as a > >default for the user to customise, rather than in bin/gcc-build-*. > > > I don't think it is Linux specific, it works on my openBSD box. It > might > work on other OS's which have /proc on it too like Solaris. I was afraid of this, which is why before I only submitted the toplevel scripts for contrib/ and let people provide their own build scripts. I'm happy to make things more portable, though, as long as people provide suggestions like this; thanks. Janis
docs for setting up new target library
The end of http://gcc.gnu.org/onlinedocs/gccint/Top-Level.html#Top-Level has a link to a separate manual that is supposed to explain how the top level build works, including building target libraries. Here's the corresponding text in sourcebuild.texi: The build system in the top level directory, including how recursion into subdirectories works and how building runtime libraries for multilibs is handled, is documented in a separate manual, included with GNU Binutils. @xref{Top, , GNU configure and build system, configure, The GNU configure and build system}, for details. The link is to http://gcc.gnu.org/onlinedocs/configure/index.html#Top, which does not exist: "The requested URL /onlinedocs/configure/index.html was not found on this server." Where is it? Will it help me figure out how to set up a new target library for GCC? Janis
Re: Post-Mont-Tremblant mailing
On Sat, Oct 29, 2005 at 11:12:07AM +0000, Joseph S. Myers wrote:
> The Post-Mont-Tremblant WG14 mailing is now available from the WG14
> website. Particular points of note:
>
> * The current decimal FP draft is now N1150 (no longer N1107 which is the
> version mentioned in svn.html); I don't know what's changed since N1107.

I had noticed on Friday that svn.html still mentioned the old draft.
There was another one, N1137, in between those two.

> * The draft minutes say:
>
> TR 24732 - Decimal Floating Point. The committee revising IEEE 754R
> has run into a significant issue in the revision. TR 24732 depends on
> that document. There is nothing we can do to proceed with that TR
> until the key issues are resolved. We need to tell SC22 what our plans
> are for proceeding. Fred volunteered to work with Edison to develop a
> rationale.
>
> I don't know what the issue with 754R is and how it affects dfp-branch,
> but I presume the DFP developers are following 754R development and know
> about and understand the issue and its impact.

Yes, Ben and I have been in contact with Edison about changes and have
let him know about holes in the draft that we've discovered while
implementing it.  N1150 and its successors can't become stable until
IEEE 754R is finalized.  The support we will submit for GCC 4.2 must be
considered experimental; the documentation will state that clearly.

Janis
Re: [libgfortran] Patch to handle statically linked libgfortran
On Sun, Oct 30, 2005 at 12:24:56PM +0100, FX Coudert wrote:
> I added a test for the testsuite, conditional on a new effective
> target. Could someone OK this part?

How does the test in check_effective_target_static_libgfortran check
for use of static libgfortran?  Shouldn't it pass -static or something?
If it's really doing it already by a means that is not apparent, please
add a comment.

That proc has a comment that was copied from another proc; please fix
that.  You can use the proc get_compiler_messages, although I see there
are others that could use it but don't.

Janis
Re: non coding contributions
On Tue, Nov 01, 2005 at 12:42:54PM -0800, Benj FitzPatrick wrote: > Hi, > I'm relatively new to linux as I have only been > seriously using it for less than a year. However, I > have been following certain projects for much longer > (Transgaming.com, etc.) and was wondering if there was > a way to donate to help further gcc. I have been > giving $5 a month to transgaming for over a year now, > and I know that other projects could use extra money > as well. Would it be possible to setup something > similar for gcc? I know it wouldn't be much, and a > rough estimate of the number of contributors would > have to be estimated first, but it might be enough to > get some new hardware when needed. As far as > estimating the number of people who might contribute, > I bet that if somebody wrote up a small blurb it could > go up on slashdot. > Thanks, > Benj FitzPatrick GCC is part of the Free Software Foundation's GNU Project. To help, see https://www.fsf.org/donate and http://www.gnu.org/help/help.html. Janis
Re: [libgfortran] Patch to handle statically linked libgfortran
On Wed, Nov 02, 2005 at 11:11:42PM +0100, FX Coudert wrote: > 2005-11-02 Francois-Xavier Coudert <[EMAIL PROTECTED]> > > PR libfortran/22298 > * gcc/testsuite/lib/target-supports.exp > (check_effective_target_static_libgfortran): New > static_libgfortran effective target. > * gcc/testsuite/gfortran.dg/static_linking_1.f: New test. > * gcc/testsuite/gfortran.dg/static_linking_1.c: New file. OK, but this part goes into gcc/testsuite/ChangeLog. Janis
Re: Null pointer check elimination
On Mon, Nov 14, 2005 at 11:56:16PM +0100, Gabriel Dos Reis wrote: > "Michael N. Moran" <[EMAIL PROTECTED]> writes: > SEGFAULT is not a behaviour defined by the language. It is *just* one > form of undefined behaviour. If you execute that function, it might > reformat your harddrive and that woud be fine -- though I know of no > compiler that does that on purpose. But, the point is that your > program in erring in the outer space. > > | // dereference: access to object > | // If a is null, then SEGFAULT > | *a = 0; > > Again, that may or may not happen. Some operating systems load a page of zeroes at address zero, so the dereference of a null pointer has no visible effect; a different form of undefined behavior. DYNIX/ptx did that by default, and I think that HP-UX does it. I much prefer a segfault. Janis
Re: dfp-branch merge plans
On Wed, Nov 23, 2005 at 02:05:03PM +0000, Joseph S. Myers wrote:
> On Wed, 23 Nov 2005, Ben Elliston wrote:
>
> > 3. Merge in libcpp and C (only) front-end changes.
> >
> > 4. Merge in middle-end changes and internals documentation.
>
> What have you done in the way of command-line options to enable the new
> features?

Decimal floating point is supported in C for -std=gnu* if GCC is
configured with --enable-decimal-float.  This is the default for
powerpc*-*-linux* and is available for i?86*-linux*.

> Specifically:
>
> * Decimal fp constants are already preprocessing numbers in the standard
> syntax - but in standard C their conversion to a token requires a
> diagnostic (at least a pedwarn with -pedantic). If decimal fp is to be
> usable with -pedantic an option is needed to disable that pedwarn/error.
>
> * There is also an arguable case for diagnosing any use of the new
> keywords if -pedantic, unless such a special option is passed.

The keywords _Decimal32, _Decimal64, and _Decimal128 and decimal float
constants are not recognized for -std=c* and get warnings with
-pedantic.  This seems reasonable until the definition of the feature
is more stable.

> * Any choice of TTDT other than "double" (properly, the type specified by
> FLT_EVAL_METHOD, but we don't implement that) would also be incompatible
> with C99 and need specifically enabling.

We have not implemented translation time data types, which change the
behavior of existing C programs.  We'll look at supporting that part if
it is still in the TR after it has been approved.  We also have not
implemented the precision pragma, which is currently not well-defined.

> (Previous versions of the DTR - such as the one linked from svn.html -
> also provided defined behavior for out-of-range conversions between binary
> float and integer types. This appears to have been removed in the latest
> draft, so no code to implement this is needed.  However, such code would
> also have needed to be conditional unless benchmarking as per bug 21360
> showed no performance impact.)

Agreed.

The DTR describing the feature has not yet been widely reviewed and the
support via decNumber is very slow, so we see the initial support for
decimal floating point as a technology preview.  The DTR has gone
through a number of changes since N1107 and will probably go through
more changes before it is approved.  Currently we support N1150 with a
few exceptions.

Janis
Re: How can I register gcc build status for HP-UX 11i
On Thu, Dec 22, 2005 at 10:08:14AM +0900, 김성박 wrote:
> How can I register gcc build status for HP-UX 11i?
>
> I successfully installed gcc 3.4.4 & gcc 4.0.0 for hppa64-hp-hpux11.11,
> but there are no build status entries in
> http://gcc.gnu.org/gcc-3.4/buildstat.html and
> http://gcc.gnu.org/gcc-4.0/buildstat.html.
>
> And how can I make results for the gcc.xxx testsuite like
> http://gcc.gnu.org/ml/gcc-testresults/2004-09/msg01014.html ?

Information about submitting information for the build status lists is
in http://gcc.gnu.org/install/finalinstall.html.  When you send mail to
gcc@gcc.gnu.org with the appropriate information I'll add a link to
your mail in the status list for that release.  I sometimes get behind
in updating the status lists; it's OK to send me a reminder privately.

Information about running the testsuite and submitting results is at
http://gcc.gnu.org/install/test.html.  If you include a link to the
archived test results in the build status mail I'll include it in the
build status list.  Sometimes I take the time to link to other test
results for releases, but often I get very behind on that as well.

Janis
Re: Cleaning up the last g++ testsuite nit from 3.4
On Fri, Dec 23, 2005 at 02:27:41PM -0500, Kaveh R. Ghazi wrote: > > Some more info, the reason hpux only showed one XPASS in 3.4 seems to > > be that the regexp isn't correct to match the assembler syntax. > > Patches were installed on mainline but not in 3.4 for mmix and hpux: > > http://gcc.gnu.org/ml/gcc-patches/2004-11/msg02513.html > > http://gcc.gnu.org/ml/gcc-patches/2005-02/msg00323.html > > > > The third xfail seems to have been fixed on or about July 29th 2004: > > http://gcc.gnu.org/ml/gcc-testresults/2004-07/msg01290.html > > http://gcc.gnu.org/ml/gcc-testresults/2004-07/msg01240.html > > > > So it seems that if we backport the above patches and remove the first > > two (passing) xfails we'd be result-clean. We could remove the third > > (currently failing) xfail if we find and backport the patch that fixed > > it. > > (Sorry for the multiple emails) > > This appears to be PR 16276. I'm not sure though because the fix for > that PR appears to have been applied on mainline on Aug 12, 2004, or > two weeks after the tinfo1.C testcase started XPASSing all three checks. > http://gcc.gnu.org/bugzilla/show_bug.cgi?id=16276#c19 > > There's a patch in there for 3.4 which has already been applied to the > gcc-3_4-rhl-branch. See: > http://gcc.gnu.org/bugzilla/show_bug.cgi?id=16276#c23 > > However the original fix that was reverted in 3.4 by Andrew was also > applied to that branch: > http://gcc.gnu.org/bugzilla/show_bug.cgi?id=16276#c24 > > Jakub, can you explain why you did that? > > Thanks, > --Kaveh > > PS: I'm going to try applying the patch to 3.4 and see if it fixes > tinfo1.C. Meanwhile I'm running a regression hunt for the fix on mainline, which is currently looking between 2005-07-29 and 2005-07-30. Perhaps that's not relevant if the real fix was applied later, but at least we'll know why the section definition went away. Janis
Re: Cleaning up the last g++ testsuite nit from 3.4
On Fri, Dec 23, 2005 at 11:33:20AM -0800, Janis Johnson wrote: > > PS: I'm going to try applying the patch to 3.4 and see if it fixes > > tinfo1.C. > > Meanwhile I'm running a regression hunt for the fix on mainline, which > is currently looking between 2005-07-29 and 2005-07-30. Perhaps that's > not relevant if the real fix was applied later, but at least we'll know > why the section definition went away. The test started getting the third XPASS (for the .section definition going away) on mainline with this large patch from Mark: http://gcc.gnu.org/viewcvs?view=rev&rev=85309 r85309 | mmitchel | 2004-07-29 17:59:31 + (Thu, 29 Jul 2004) | 124 lines Janis
GCC talk at LinuxWorld
There have been complaints that the GCC community doesn't do a good job of promoting itself, so in a presumptuous attempt to remedy that I'm giving a talk at LinuxWorld Conference and Expo in Boston in April on "Recent Developments in GCC". It doesn't require a paper, just a slide presentation. I'm planning to stick to high-level information: the scope of the product, an overview of the community, what's new in the last few releases and what might be coming in future releases. I expect to send questions to some of you about information I can't find elsewhere. I would welcome offers to review the slides, which must be ready by early February. Janis
Re: Problem with gfortran or did I messsed up GMP installation?
On Fri, Jan 13, 2006 at 08:56:13PM +0100, Eric Botcazou wrote: > > OK. But what if I want sparc-sun-solaris2.* compiler, and later > > want to compile some 64-bit app that links with GMP too (or the other > > way around)? I should be able to have both libs on system where > > multilib is supported option (such as sparc*-sun-solaris*). > > The GMP developers are probably the best persons to ask about that. GMP is used by the compiler, not by the application, so you only need the version that the compiler will use. Janis
Re: Example of debugging GCC with toplevel bootstrap
On Fri, Jan 13, 2006 at 05:15:40PM -0500, Jason Merrill wrote:
> Paolo Bonzini wrote:
>
> >> So, how would I now get a cc1plus/f951/jc1/cc1 binary compiled by the
> >> stage0 (host) compiler?
>
> > make stage1-bubble STAGE1_LANGUAGES=c,c++,fortran,java
>
> Wow, that's awkward.
>
> > I think that after I fix PR25670, as a side effect, you will also be
> > able to use the more intuitive target "all-stage1". But I didn't think
> > of that PR much closely because it is about a target that was anyway
> > undocumented, and there are bigger fish to fry.
>
> Remind me why it's a good idea to force me to mess with bootstrapping at
> all, when all I want is to build a copy of the compiler that I can use
> for debugging problems?  There has to be an easier way to do that.  My
> laptop builds stage1 reasonably fast, but a bootstrap takes several hours.
>
> This is a serious regression for me.

If all you want is cc1plus/f951/cc1, here's what I use for regression
hunts (I haven't tried it with jc1).  It works for cross compilers, too.

#! /bin/sh
# This doesn't work earlier than about 2003-02-25.

ID="${1}"
LOGDIR=${REG_BUILDDIR}/logs/${BUGID}/${ID}
mkdir -p $LOGDIR

msg() {
  echo "`date` ${1}"
}

abort() {
  msg "${1}"
  exit 1
}

msg "building $REG_COMPILER for id $ID"

rm -rf $REG_OBJDIR
mkdir $REG_OBJDIR
cd $REG_OBJDIR

#msg "configure"
${REG_GCCSRC}/configure \
  --prefix=$REG_PREFIX \
  --enable-languages=$REG_LANGS \
  $REG_CONFOPTS \
  > configure.log 2>&1 || abort " configure failed"

#msg "make libraries"
make all-build-libiberty > ${LOGDIR}/make.all-build-libiberty.log 2>&1 || true
make all-libcpp > ${LOGDIR}/make.all-libcpp.log 2>&1 || true
make all-libdecnumber > ${LOGDIR}/make.all-libdecnumber.log 2>&1 || true
make all-intl > ${LOGDIR}/make.all-intl.log 2>&1 || true
make all-libbanshee > ${LOGDIR}/make.all-libbanshee.log 2>&1 || true
make configure-gcc > ${LOGDIR}/make.configure-gcc.log 2>&1 || true

# hack for 3.3 branch
if [ ! -f libiberty/libiberty.a ]; then
  mkdir -p libiberty
  cd libiberty
  ln -s ../build-${REG_BLD}/libiberty/libiberty.a .
  cd ..
fi

cd gcc

# REG_COMPILER is cc1, cc1plus, or f951
#msg "make $REG_COMPILER"
make $REG_MAKE_J $REG_COMPILER > ${LOGDIR}/make.${REG_COMPILER}.log 2>&1 \
  || abort " make failed"

msg "build completed"
exit 0
dfp tests fail for powerpc*-linux*, patch in the works
Currently on trunk, decimal float is configured by default for powerpc*-*-linux*. The testsuite check to decide whether to run the dfp tests checks whether sample code compiles, not whether it also links and runs. The runtime support isn't yet in although Ben Elliston submitted it quite a while ago (ping!). Therefore there are lots of dfp test failures showing up for powerpc*-*-linux* targets. I'm working on testsuite changes to allow running the dfp tests, including those in directories other than gcc.dg/dfp, compile-only if decimal float support is in the compiler but not in the runtime. The patch is not quite ready yet. In the meantime if the failures are bothering anyone, configure with --disable-decimal-float. Janis
Re: adding an argument for test execution in testsuite
On 05/04/2011 11:21 AM, Nenad Vukicevic wrote: > It seems that I fixed my problem by defining remote_spawn > procedure (and fixing the order of loading libraries :) ) in my > own upc-dg.exp file and adding a line to it that append > additional arguments to the command line: "append commandline > $upc_run_arguments". > > global $upc_run_arguments is getting set before dg-test is being > called. I used a simple string compare to see if dynamic > threads are required. So far it works as expected. Working "so far" shouldn't be good enough, especially if your test will be run for a variety of targets. Presumably you don't really need the number of threads to be specified on the command line, you just need for it to look as if it were specified at run time. You could, for example, define it in a second source file included in the test via dg-additional-sources and use it from a global variable or call a function to get it. Janis
Re: --enable-build-with-cxx vs -Werror=conversion-null
On 05/04/2011 06:13 PM, Jack Howarth wrote: >Currently the bootstrap with --enable-build-with-cxx is failing because of > the following warnings treated as errors... > > /sw/src/fink.build/gcc47-4.7.0-1000/darwin_objdir/./prev-gcc/g++ > -B/sw/src/fink.build/gcc47-4.7.0-1000/darwin_objdir/./prev-gcc/ > -B/sw/lib/gcc4.7/x86_64-apple-darwin10.7.0/bin/ -nostdinc++ > -B/sw/src/fink.build/gcc47-4.7.0-1000/darwin_objdir/prev-x86_64-apple-darwin10.7.0/libstdc++-v3/src/.libs > > -I/sw/src/fink.build/gcc47-4.7.0-1000/darwin_objdir/prev-x86_64-apple-darwin10.7.0/libstdc++-v3/include/x86_64-apple-darwin10.7.0 > > -I/sw/src/fink.build/gcc47-4.7.0-1000/darwin_objdir/prev-x86_64-apple-darwin10.7.0/libstdc++-v3/include > > -I/sw/src/fink.build/gcc47-4.7.0-1000/gcc-4.7-20110504/libstdc++-v3/libsupc++ > -L/sw/src/fink.build/gcc47-4.7.0-1000/darwin_objdir/prev-x86_64-apple-darwin10.7.0/libstdc++-v3/src/.libs > -c -g -O2 -mdynamic-no-pic -flto=jobserver -frandom-seed=1 > -fprofile-generate -fno-lto -DIN_GCC -W -Wall -Wwrite-strings -Wcast-qual > -Wmissing-format-attribute -pedantic -Wno-long-long -Wno-variadic-macros > -Wno-overlength-strings -Werror -fno-common -DHAVE_CONFIG_H -I. - I. -I../../gcc-4.7-20110504/gcc -I../../gcc-4.7-20110504/gcc/. 
-I../../gcc-4.7-20110504/gcc/../include -I../../gcc-4.7-20110504/gcc/../libcpp/include -I/sw/include -I/sw/include -I../../gcc-4.7-20110504/gcc/../libdecnumber -I../../gcc-4.7-20110504/gcc/../libdecnumber/dpd -I../libdecnumber -I/sw/include -I/sw/include -DCLOOG_INT_GMP -DCLOOG_ORG -I/sw/include ../../gcc-4.7-20110504/gcc/varpool.c -o varpool.o > ../../gcc-4.7-20110504/gcc/tree-inline.c: In function 'tree_node* > maybe_inline_call_in_expr(tree)': > ../../gcc-4.7-20110504/gcc/tree-inline.c:5241:40: error: converting 'false' > to pointer type 'void (*)(tree)' [-Werror=conversion-null] > cc1plus: all warnings being treated as errors > > /sw/src/fink.build/gcc47-4.7.0-1000/darwin_objdir/./prev-gcc/g++ > -B/sw/src/fink.build/gcc47-4.7.0-1000/darwin_objdir/./prev-gcc/ > -B/sw/lib/gcc4.7/x86_64-apple-darwin10.7.0/bin/ -nostdinc++ > -B/sw/src/fink.build/gcc47-4.7.0-1000/darwin_objdir/prev-x86_64-apple-darwin10.7.0/libstdc++-v3/src/.libs > > -I/sw/src/fink.build/gcc47-4.7.0-1000/darwin_objdir/prev-x86_64-apple-darwin10.7.0/libstdc++-v3/include/x86_64-apple-darwin10.7.0 > > -I/sw/src/fink.build/gcc47-4.7.0-1000/darwin_objdir/prev-x86_64-apple-darwin10.7.0/libstdc++-v3/include > > -I/sw/src/fink.build/gcc47-4.7.0-1000/gcc-4.7-20110504/libstdc++-v3/libsupc++ > -L/sw/src/fink.build/gcc47-4.7.0-1000/darwin_objdir/prev-x86_64-apple-darwin10.7.0/libstdc++-v3/src/.libs > -c -g -O2 -mdynamic-no-pic -flto=jobserver -frandom-seed=1 > -fprofile-generate -fno-lto -DIN_GCC -W -Wall -Wwrite-strings -Wcast-qual > -Wmissing-format-attribute -pedantic -Wno-long-long -Wno-variadic-macros > -Wno-overlength-strings -Werror -fno-common -DHAVE_CONFIG_H -I. - I. -I../../gcc-4.7-20110504/gcc -I../../gcc-4.7-20110504/gcc/. 
-I../../gcc-4.7-20110504/gcc/../include -I../../gcc-4.7-20110504/gcc/../libcpp/include -I/sw/include -I/sw/include -I../../gcc-4.7-20110504/gcc/../libdecnumber -I../../gcc-4.7-20110504/gcc/../libdecnumber/dpd -I../libdecnumber -I/sw/include -I/sw/include -DCLOOG_INT_GMP -DCLOOG_ORG -I/sw/include ../../gcc-4.7-20110504/gcc/tree-inline.c -o tree-inline.o > ../../gcc-4.7-20110504/gcc/varpool.c: In function 'varpool_node* > varpool_extra_name_alias(tree, tree)': > ../../gcc-4.7-20110504/gcc/varpool.c:679:10: error: converting 'false' to > pointer type 'varpool_node*' [-Werror=conversion-null] > cc1plus: all warnings being treated as errors > > Is there a simple fix to suppress these warnings? > Jack Fix those assignments to use "NULL" instead of "false". Janis
Re: --enable-build-with-cxx vs -Werror=conversion-null
On 05/04/2011 07:03 PM, Gabriel Dos Reis wrote: > On Wed, May 4, 2011 at 8:32 PM, Janis Johnson > wrote: > >> Fix those assignments to use "NULL" instead of "false". > > Hi Janis, > > is there a way your regression tester could automatically build with a > C++ compiler. > This is the second breakage in about a week... I'll look into that, it sounds like a good idea. Janis
Re: Disabling Secondary Tests
On 06/03/2011 11:14 AM, Lawrence Crowl wrote: > The PPH project has tests that compile two different ways, and > then compare the assembly. If either of the compiles fails, the > comparison will fail. We'd like to simply not run the comparison. > > We currently have: > > set have_errs [llength [grep $test "{\[ \t\]\+dg-error\[\t\]\+.*\[ \t\]\+}"]] > # Compile the file the first time for a base case. > dg-test -keep-output $test "$options -I." "" > > if { $have_errs } { >verbose -log "regular compilation failed" >fail "$nshort $options, regular compilation failed" >return > } > > But that only stops subsequent actions when the test is known > a priori to have errors. How do we detect compilation errors, > so as to skip the remainder of the actions? Complicated GCC tests do this by using local procs instead of dg-runtest and dg-test. See, for example, gcc.dg/lto/lto.exp, gcc.dg/compat/compat.exp and gcc.dg/tree-prof/tree-prof.exp, which use lto.exp, compat.exp and profopt.exp from GCC's testsuite/lib. Those have scenarios in which further testing is skipped after a compile or link fails. Janis
Re: Disabling Secondary Tests
On 06/08/2011 01:54 PM, Lawrence Crowl wrote: > On 6/6/11, Janis Johnson wrote: >> On 06/03/2011 11:14 AM, Lawrence Crowl wrote: >>> The PPH project has tests that compile two different ways, and >>> then compare the assembly. If either of the compiles fails, the >>> comparison will fail. We'd like to simply not run the comparison. >>> >>> We currently have: >>> >>> set have_errs [llength [grep $test "{\[ \t\]\+dg-error\[\t\]\+.*\[ >>> \t\]\+}"]] >>> # Compile the file the first time for a base case. >>> dg-test -keep-output $test "$options -I." "" >>> >>> if { $have_errs } { >>>verbose -log "regular compilation failed" >>>fail "$nshort $options, regular compilation failed" >>>return >>> } >>> >>> But that only stops subsequent actions when the test is known >>> a priori to have errors. How do we detect compilation errors, >>> so as to skip the remainder of the actions? >> >> Complicated GCC tests do this by using local procs instead of dg-runtest >> and dg-test. See, for example, gcc.dg/lto/lto.exp, >> gcc.dg/compat/compat.exp and gcc.dg/tree-prof/tree-prof.exp, which use >> lto.exp, compat.exp and profopt.exp from GCC's testsuite/lib. Those >> have scenarios in which further testing is skipped after a compile or >> link fails. > > So, I ended up changing the definition of fail from "reports test > failing" to "does not produce an assembly file". We really need > the latter for comparison, so it is the true measure. Once I made > that change in orientation, I was able to achieve what I wanted. > > The simple part is the regular compile. > > # Compile the file the first time for a base case. > set dg-do-what-default compile > dg-test -keep-output $test "$options -I." "" > > # Quit if it did not compile successfully. > if { ![file_on_host exists "$bname.s"] } { > # All regular compiles should pass. > fail "$nshort $options (regular assembly missing)" > return > } Don't use dg-test, use a new variant of it as is done with the lto, compat, and profopt tests. 
You'll have much more control that way and can better check the success of individual steps to decide what to do next. > The complicated part is the compile we are comparing against, which > required knowing whether or not a compile failure is expected. For > that we grep for dg-xfail-if and the appropriate option. > > # Compile a second time using the pph files. > dg-test -keep-output $test "$options $mapflag -I." "" > > # Quit if it did not compile successfully. > if { ![file_on_host exists "$bname.s"] } { > # Expect assembly to be missing when the compile is an expected fail. > if { ![llength [grep $test "dg-xfail-if.*-fpph-map"]] } { > fail "$nshort $options (pph assembly missing)" > } > return > } > Relying on the existence of dg-xfail-if won't work when an expected compilation failure starts working, or an unexpected one arises. Janis
Re: Disabling Secondary Tests
On 06/09/2011 01:30 PM, Lawrence Crowl wrote: > On 6/9/11, Janis Johnson wrote: >> On 06/08/2011 01:54 PM, Lawrence Crowl wrote: >>> On 6/6/11, Janis Johnson wrote: >>>> On 06/03/2011 11:14 AM, Lawrence Crowl wrote: >>>>> The PPH project has tests that compile two different ways, and >>>>> then compare the assembly. If either of the compiles fails, the >>>>> comparison will fail. We'd like to simply not run the comparison. >>>>> >>>>> We currently have: >>>>> >>>>> set have_errs [llength [grep $test "{\[ \t\]\+dg-error\[\t\]\+.*\[ >>>>> \t\]\+}"]] >>>>> # Compile the file the first time for a base case. >>>>> dg-test -keep-output $test "$options -I." "" >>>>> >>>>> if { $have_errs } { >>>>>verbose -log "regular compilation failed" >>>>>fail "$nshort $options, regular compilation failed" >>>>>return >>>>> } >>>>> >>>>> But that only stops subsequent actions when the test is known >>>>> a priori to have errors. How do we detect compilation errors, >>>>> so as to skip the remainder of the actions? >>>> >>>> Complicated GCC tests do this by using local procs instead of dg-runtest >>>> and dg-test. See, for example, gcc.dg/lto/lto.exp, >>>> gcc.dg/compat/compat.exp and gcc.dg/tree-prof/tree-prof.exp, which use >>>> lto.exp, compat.exp and profopt.exp from GCC's testsuite/lib. Those >>>> have scenarios in which further testing is skipped after a compile or >>>> link fails. >>> >>> So, I ended up changing the definition of fail from "reports test >>> failing" to "does not produce an assembly file". We really need >>> the latter for comparison, so it is the true measure. Once I made >>> that change in orientation, I was able to achieve what I wanted. >>> >>> The simple part is the regular compile. >>> >>> # Compile the file the first time for a base case. >>> set dg-do-what-default compile >>> dg-test -keep-output $test "$options -I." "" >>> >>> # Quit if it did not compile successfully. >>> if { ![file_on_host exists "$bname.s"] } { >>> # All regular compiles should pass. 
>>> fail "$nshort $options (regular assembly missing)" >>> return >>> } >> >> Don't use dg-test, use a new variant of it as is done with the lto, >> compat, and profopt tests. You'll have much more control that way >> and can better check the success of individual steps to decide what >> to do next. > > I am having trouble identifying the variant. Does it have a name, > or is it inline code? I meant that you should write one. For example, gcc.dg/lto/lto.exp uses lto_execute which is defined in lib/lto.exp, and gcc.dg/compat/compat.exp uses compat-execute defined in lib/compat.exp. >>> The complicated part is the compile we are comparing against, which >>> required knowing whether or not a compile failure is expected. For >>> that we grep for dg-xfail-if and the appropriate option. >>> >>> # Compile a second time using the pph files. >>> dg-test -keep-output $test "$options $mapflag -I." "" >>> >>> # Quit if it did not compile successfully. >>> if { ![file_on_host exists "$bname.s"] } { >>> # Expect assembly to be missing when the compile is an expected >>> fail. >>> if { ![llength [grep $test "dg-xfail-if.*-fpph-map"]] } { >>> fail "$nshort $options (pph assembly missing)" >>> } >>> return >>> } >> >> Relying on the existence of dg-xfail-if won't work when an expected >> compilation failure starts working, or an unexpected one arises. > > If the compilation starts working, I get an assembly file, and > continue to assembly comparisons. If the compilation fails, > but with a different error, then the other (typically dg-bogus) > directives should report the unexpected failure. In either case, > I think, I get proper notice. Am I missing something? I don't fully understand the purpose of grepping for dg-xfail-if. You shouldn't care about that within the test, just whether the compile succeeded or not. If there are messages that prevent the compile from succeeding, then they should be handled with appropriate test directives. 
Apart from that, whether or not a dg-xfail-if directive takes effect depends on the target and options used for the test.

Just in case you're not aware of it (many people aren't), the test directives are documented in http://gcc.gnu.org/onlinedocs/gccint/Directives.html#Directive.

Janis
Re: Can't run gfortran testsuite
On Sun, 2009-07-12 at 15:40 -0400, NightStrike wrote:
> On Fri, Jul 10, 2009 at 12:14 PM, NightStrike wrote:
> > On Thu, Jul 9, 2009 at 2:52 PM, Steve Kargl wrote:
> >> On Thu, Jul 09, 2009 at 12:34:00PM -0400, NightStrike wrote:
> >>> I have been trying to run the gfortran testsuite for a while now, and
> >>> it keeps falling apart.  Dominiq tried to find a revision that might
> >>> attribute to it, and thought r147421 might have something to do with
> >>> it: http://gcc.gnu.org/viewcvs?view=rev&revision=147421
> >>>
> >>> These are the errors I get that prevent the testsuite from running
> >>> more than a few thousand tests:
> >>>
> >>> ERROR: tcl error sourcing
> >>> /dev/shm/build/gcc-svn/gcc/gcc/testsuite/gfortran.dg/dg.exp.
> >>> ERROR: can't read "status": no such variable

Does this help?

Index: gcc/testsuite/lib/gcc-dg.exp
===================================================================
--- gcc/testsuite/lib/gcc-dg.exp	(revision 149420)
+++ gcc/testsuite/lib/gcc-dg.exp	(working copy)
@@ -205,9 +205,12 @@
     global shouldfail
     set result [eval [list saved_${tool}_load $program] $args]
     if { $shouldfail != 0 } {
+	set status "unresolved"
	switch [lindex $result 0] {
	    "pass" { set status "fail" }
	    "fail" { set status "pass" }
+	    "xpass" { set status "xfail" }
+	    "xfail" { set status "xpass" }
	}
	set result [list $status [lindex $result 1]]
    }
Re: decimal float support for C++
On Tue, 2009-07-14 at 17:16 +0200, Jason Merrill wrote: > On 07/09/2009 12:32 AM, Janis Johnson wrote: > > Given that libstdc++ is used with compilers other than G++, is it > > allowable to depend on non-standard C++ compiler support? > > Seems reasonable to me, but we may want to standardize the support in > the ABI. What's the forum for discussions about the C++ ABI? Janis
Re: c-c++-common testsuite
On Fri, 2009-08-07 at 00:06 +0200, Manuel López-Ibáñez wrote: > Often I want to test the exactly same testcase in C and C++, so I find > myself adding duplicate tests under gcc.dg/ and g++.dg/. Would it be > possible to have a shared testsuite dir that is run for both C and C++ > languages? (possibly with different default configurations, like > adding -Wc++-compat to the commandline for C runs). I've been thinking about that lately, it would be useful for several kinds of functionality. We'd want effective targets for the language for using different options and for providing different error/warning checks for each language. I haven't looked into how to handle it with DejaGnu, maybe something like gcc.shared and a [symbolic] link to it called g++.shared; do links work with Subversion? Janis
Re: c-c++-common testsuite
On Fri, 2009-08-07 at 15:48 +0200, Manuel López-Ibáñez wrote:
> Janis, it would be extremely useful to have dg-options that are only
> enabled for certain languages, so I can do
>
> /* { dg-options "-std=c99" { dg-require-effective-target c } } */
> /* { dg-options "" { dg-require-effective-target c++ } } */
>
> Would this be hard to implement? Any ideas?

This seems to work fine (added to target-supports.exp), except that
you'd use them in your example as just "c" and "c++".

# Return 1 if the language for the compiler under test is C.

proc check_effective_target_c { } {
    global tool
    if [string match $tool "gcc"] {
	return 1
    }
    return 0
}

# Return 1 if the language for the compiler under test is C++.

proc check_effective_target_c++ { } {
    global tool
    if [string match $tool "g++"] {
	return 1
    }
    return 0
}
Re: error in hash.cc
On Wed, 2009-08-12 at 14:52 +0300, Revital1 Eres wrote:
> Hello,
>
> I get the following error while compiling gcc -r150679 on ppc

I get the same failure for powerpc64-linux.  It starts with r150641
from Benjamin Kosnik.

Janis

> unknown-linux-gnu/libstdc++-v3/include
> -I/home/eres/mainline_45/gcc/libstdc++-v3/libsupc++ -fno-implicit-templates
> -Wall -Wextra -Wwrite-strings -Wcast-qual
> -fdiagnostics-show-location=once -ffunction-sections -fdata-sections -g
> -O2 -D_GNU_SOURCE -mlong-double-64 -c
> ../../../../gcc/libstdc++-v3/src/compatibility-ldbl.cc -fPIC -DPIC
> -o .libs/compatibility-ldbl.o
>
> In file included from
> ../../../../gcc/libstdc++-v3/src/compatibility-ldbl.cc:71:0:
> ../../../../gcc/libstdc++-v3/src/hash.cc:29:9: error: expected
> initializer before ‘<’ token
> ../../../../gcc/libstdc++-v3/src/compatibility-ldbl.cc:76:17:
> error: ‘void _ZNKSt4hashIeEclEe()’ aliased to
> undefined symbol ‘_ZNKSt3tr14hashIeEclEe’
>
> make[4]: *** [compatibility-ldbl.lo] Error 1
> make[4]: Leaving directory
> `/home/eres/mainline_45/new_build/powerpc64-unknown-linux-gnu/libstdc++-v3/src'
> make[3]: *** [all-recursive] Error 1
> make[3]: Leaving directory
> `/home/eres/mainline_45/new_build/powerpc64-unknown-linux-gnu/libstdc++-v3'
> make[2]: *** [all] Error 2
> make[2]: Leaving directory
> `/home/eres/mainline_45/new_build/powerpc64-unknown-linux-gnu/libstdc++-v3'
> make[1]: *** [all-target-libstdc++-v3] Error 2
> make[1]: Leaving directory `/home/eres/mainline_45/new_build'
> make: *** [all] Error 2
Re: error in hash.cc
On Wed, 2009-08-12 at 15:09 -0700, Benjamin Kosnik wrote: > > I get the same failure for powerpc64-linux. It starts with r150641 > > from Benjamin Kosnik. > > Should be fixed in r150707 It fails in the same way. Janis
Re: MPC 0.7 officially released, please test and report your results!
On Thu, 2009-09-10 at 18:06 -0400, Kaveh R. GHAZI wrote: > Hi, > > mpc-0.7 now has been released, you can get the package here: > http://www.multiprecision.org/index.php?prog=mpc&page=download > > Here's the official announcement: > http://lists.gforge.inria.fr/pipermail/mpc-discuss/2009-September/000554.html > > Of particular interest in this release are bugfixes, especially for > complex division, and the introduction of mpc_pow used for folding > cpow{,f,l} inside GCC. > > Note the complex "arc" functions are still missing and are now projected > to be available in a future release, probably 0.8. > > Please download, compile and run "make check" for this release and post > your results as well your target triplet and the versions of your > compiler, gmp and mpfr. All platform results are welcome, but I am > especially interested in GCC's primary and secondary platform list. I tested using both -m32 and -m64 versions of the libraries. powerpc64-unknown-linux-gnu gcc-4.4.1 gmp-4.2.4 mpfr-2.4.1 mpc-0.7 For both versions of the library: === All 45 tests passed ===
Re: Is Non-Blocking cache supported in GCC?
On Thu, 2009-09-17 at 21:48 -0700, Ian Lance Taylor wrote: > "Amker.Cheng" writes: > > > Recently I found two relative old papers about non-blocking cache, > > etc. which are : > > > > 1) Reducing memory latency via non-blocking and prefetching > > caches. BY Tien-Fu Chen and Jean-Loup Baer. > > 2) Data Prefetching:A Cost/Performance Analysis BY Chris Metcalf > > > > It seems the hardware facility does have the potential to improve the > > performance with > > compiler's assistance(especially instruction scheduling). while on the > > other hand, lifting ahead > > load instructions may resulting in increasing register pressure. > > > > So I'm thinking : > > 1, Has anyone from gcc folks done any investigation on this topic yet, > > or any statistic data based on gcc available? > > 2, Does GCC(in any release version) supports it in any targets(such as > > mips 24ke) with this hardware feature? > > If not currently, does it possible to support it by using target > > definition macros and functions? > > gcc is able to generate prefetches in loops, via the > -fprefetch-loop-arrays option. There are various related parameters, > prefetch-latency, l1-cache-line-size, etc. I don't know how well this > works. To the extent that it does work, it is supported in the MIPS > backend, and should work on the MIPS 24ke. There's also a prefetch built-in function; see http://gcc.gnu.org/onlinedocs/gcc/Other-Builtins.html#Other-Builtins It's been in GCC since 3.1. Janis
C++ support for decimal floating point
I've been implementing ISO/IEC TR 24733, "an extension for the
programming language C++ to support decimal floating-point arithmetic",
in GCC.  It might be ready as an experimental feature for 4.5, but I
would particularly like to get in the compiler changes that are needed
for it.

Most of the support for the TR is in new header files in libstdc++ that
depend on compiler support for decimal float scalar types.  Most of that
compiler functionality was already available in G++ via mode attributes.
I've made a couple of small fixes and have a couple more to submit, and
when those are in I'll start running dfp tests for C++ as well as C.
The suitable tests have already been moved from gcc.dg to c-c++-common.

In order to provide interoperability with C, people on the C++ ABI
mailing list suggested that a C++ compiler should recognize the new
decimal classes defined in the TR and pass arguments of those types the
same as scalar decimal float types for a particular target.  I had this
working in an ugly way using a langhook, but that broke with LTO.  I'm
looking for the right places to record that an argument or return value
should be passed as if it were a different type, but could use some
advice about that.

Changes that are needed to the compiler:

- mangling of decimal float types as specified by the C++ ABI
  (http://gcc.gnu.org/ml/gcc-patches/2009-09/msg01079.html)

- don't "promote" decimal32 arguments to double for C++ varargs calls
  (trivial patch not yet submitted, waiting for mangling patch so tests
  will pass)

- pass std::decimal::decimal32/64/128 arguments and return values the
  same as scalar decimal float values; as I said, this worked before the
  LTO merge but depended on using language-specific data way too late in
  the compilation, so now I'm starting from scratch

For the library support I have all of the functionality of section 3.1
of the TR implemented, with the exception of formatted input and output.
More importantly, I have a comprehensive set of tests for that functionality that will be useful even if the implementation changes. There are several issues with the library support that I'll cover in later mail; first I'm concentrating on the compiler changes. This message is just a heads-up that I'm working on this and would greatly appreciate some advice about the argument-passing support. Janis
Re: C++ support for decimal floating point
On Wed, 2009-09-23 at 10:29 +0200, Richard Guenther wrote: > On Wed, Sep 23, 2009 at 2:38 AM, Janis Johnson wrote: > > I've been implementing ISO/IEC TR 24733, "an extension for the > > programming language C++ to support decimal floating-point arithmetic", > > in GCC. It might be ready as an experimental feature for 4.5, but I > > would particularly like to get in the compiler changes that are needed > > for it. > > > > Most of the support for the TR is in new header files in libstdc++ that > > depend on compiler support for decimal float scalar types. Most of that > > compiler functionality was already available in G++ via mode attributes. > > I've made a couple of small fixes and have a couple more to submit, and > > when those are in I'll starting running dfp tests for C++ as well as C. > > The suitable tests have already been moved from gcc.dg to c-c++-common. > > > > In order to provide interoperability with C, people on the C++ ABI > > mailing list suggested that a C++ compiler should recognize the new > > decimal classes defined in the TR and pass arguments of those types the > > same as scalar decimal float types for a particular target. I had this > > working in an ugly way using a langhook, but that broke with LTO. I'm > > looking for the right places to record that an argument or return value > > should be passed as if it were a different type, but could use some > > advice about that. > > How do we (do we?) handle std::complex<> there? My first shot would > be to make sure the aggregate type has the proper mode, but I guess > most target ABIs would already pass them in registers, no? std::complex<> is not interoperable with GCC's complex extension, which is generally viewed as "unfortunate". The class types for std::decimal::decimal32 and friends do have the proper modes. 
I suppose I could special-case aggregates of those modes but the plan was to pass these particular classes (and typedefs of them) the same as scalars, rather than _any_ class with those modes. I'll bring this up again on the C++ ABI mailing list. Perhaps most target ABIs pass single-member aggregates using the mode of the aggregate, but not all. In particular, not the 32-bit ELF ABI for Power. Janis
Re: C++ support for decimal floating point
On Wed, 2009-09-23 at 16:27 -0500, Gabriel Dos Reis wrote: > On Wed, Sep 23, 2009 at 4:11 PM, Janis Johnson wrote: > > On Wed, 2009-09-23 at 10:29 +0200, Richard Guenther wrote: > >> On Wed, Sep 23, 2009 at 2:38 AM, Janis Johnson wrote: > >> > I've been implementing ISO/IEC TR 24733, "an extension for the > >> > programming language C++ to support decimal floating-point arithmetic", > >> > in GCC. It might be ready as an experimental feature for 4.5, but I > >> > would particularly like to get in the compiler changes that are needed > >> > for it. > >> > > >> > Most of the support for the TR is in new header files in libstdc++ that > >> > depend on compiler support for decimal float scalar types. Most of that > >> > compiler functionality was already available in G++ via mode attributes. > >> > I've made a couple of small fixes and have a couple more to submit, and > >> > when those are in I'll starting running dfp tests for C++ as well as C. > >> > The suitable tests have already been moved from gcc.dg to c-c++-common. > >> > > >> > In order to provide interoperability with C, people on the C++ ABI > >> > mailing list suggested that a C++ compiler should recognize the new > >> > decimal classes defined in the TR and pass arguments of those types the > >> > same as scalar decimal float types for a particular target. I had this > >> > working in an ugly way using a langhook, but that broke with LTO. I'm > >> > looking for the right places to record that an argument or return value > >> > should be passed as if it were a different type, but could use some > >> > advice about that. > >> > >> How do we (do we?) handle std::complex<> there? My first shot would > >> be to make sure the aggregate type has the proper mode, but I guess > >> most target ABIs would already pass them in registers, no? > > > > std::complex<> is not interoperable with GCC's complex extension, which > > is generally viewed as "unfortunate". 
> > Could you expand on why std::complex<> is not interoperable with GCC's > complex extension. The reason is that I would like to know better where > the incompatibilities come from -- I've tried to remove any. I was just repeating what I had heard from C++ experts. On powerpc-linux they are currently passed and mangled differently. > > The class types for std::decimal::decimal32 and friends do have the > > proper modes. I suppose I could special-case aggregates of those modes > > but the plan was to pass these particular classes (and typedefs of > > them) the same as scalars, rather than _any_ class with those modes. > > I'll bring this up again on the C++ ABI mailing list. > > I introduced the notion of 'literal types' in C++0x precisely so that > compilers can pretend that user-defined types are like builtin types > and provide appropriate support. decimal types are literal types. So > are std::complex for T = builtin arithmetic types. I'm looking at these now. > > Perhaps most target ABIs pass single-member aggregates using the > > mode of the aggregate, but not all. In particular, not the 32-bit > > ELF ABI for Power. > > > > Janis > >
Re: C++ support for decimal floating point
On Wed, 2009-09-23 at 14:21 -0700, Richard Henderson wrote: > On 09/23/2009 02:11 PM, Janis Johnson wrote: > > The class types for std::decimal::decimal32 and friends do have the > > proper modes. I suppose I could special-case aggregates of those modes > > but the plan was to pass these particular classes (and typedefs of > > them) the same as scalars, rather than _any_ class with those modes. > > I'll bring this up again on the C++ ABI mailing list. > > You could special-case this in the C++ conversion to generic > by having the std::decimal classes decompose to scalars immediately. I've been trying to find a place in the C++ front end where I can replace all references to the class type to the scalar types, but haven't yet found it. Any suggestions? Janis
Re: C++ support for decimal floating point
On Wed, 2009-09-23 at 18:39 -0500, Gabriel Dos Reis wrote: > On Wed, Sep 23, 2009 at 6:23 PM, Janis Johnson wrote: > > On Wed, 2009-09-23 at 16:27 -0500, Gabriel Dos Reis wrote: > >> On Wed, Sep 23, 2009 at 4:11 PM, Janis Johnson wrote: > >> > On Wed, 2009-09-23 at 10:29 +0200, Richard Guenther wrote: > >> >> On Wed, Sep 23, 2009 at 2:38 AM, Janis Johnson > >> >> wrote: > >> >> > I've been implementing ISO/IEC TR 24733, "an extension for the > >> >> > programming language C++ to support decimal floating-point > >> >> > arithmetic", > >> >> > in GCC. It might be ready as an experimental feature for 4.5, but I > >> >> > would particularly like to get in the compiler changes that are needed > >> >> > for it. > >> >> > > >> >> > Most of the support for the TR is in new header files in libstdc++ > >> >> > that > >> >> > depend on compiler support for decimal float scalar types. Most of > >> >> > that > >> >> > compiler functionality was already available in G++ via mode > >> >> > attributes. > >> >> > I've made a couple of small fixes and have a couple more to submit, > >> >> > and > >> >> > when those are in I'll starting running dfp tests for C++ as well as > >> >> > C. > >> >> > The suitable tests have already been moved from gcc.dg to > >> >> > c-c++-common. > >> >> > > >> >> > In order to provide interoperability with C, people on the C++ ABI > >> >> > mailing list suggested that a C++ compiler should recognize the new > >> >> > decimal classes defined in the TR and pass arguments of those types > >> >> > the > >> >> > same as scalar decimal float types for a particular target. I had > >> >> > this > >> >> > working in an ugly way using a langhook, but that broke with LTO. I'm > >> >> > looking for the right places to record that an argument or return > >> >> > value > >> >> > should be passed as if it were a different type, but could use some > >> >> > advice about that. > >> >> > >> >> How do we (do we?) handle std::complex<> there? 
My first shot would
> >> >> be to make sure the aggregate type has the proper mode, but I guess
> >> >> most target ABIs would already pass them in registers, no?
> >> >
> >> > std::complex<> is not interoperable with GCC's complex extension, which
> >> > is generally viewed as "unfortunate".
> >>
> >> Could you expand on why std::complex<> is not interoperable with GCC's
> >> complex extension.  The reason is that I would like to know better where
> >> the incompatibilities come from -- I've tried to remove any.
> >
> > I was just repeating what I had heard from C++ experts.  On
> > powerpc-linux they are currently passed and mangled differently.
>
> I've been careful not to define a copy constructor or a destructor
> for the specializations of std::complex so that they get treated as PODs,
> with the hope that the compiler will do the right thing.  At least on
> my x86-64 box running openSUSE, I don't see a difference.  I've also left
> the copy-n-assignment operator at the discretion of the compiler
>
>     // The compiler knows how to do this efficiently
>     // complex& operator=(const complex&);
>
> So, if there is any difference on powerpc-*-linux, then that should be
> blamed on poor ABI choice than anything else intrinsic to std::complex
> (or C++).  Where possible, we should look into how to fix that.
>
> In many ways, it is assumed that std::complex is isomorphic to the
> GNU extension.

The PowerPC 32-bit ELF ABI says that a struct is passed as a pointer to
an object or a copy of the object.  Classes are treated the same as
structs.  Does the C++ ABI have rules about classes like std::complex
that would cause them to be treated differently?

Janis
Re: C++ support for decimal floating point
On Tue, 2009-09-29 at 13:37 -0700, Richard Henderson wrote: > On 09/29/2009 01:20 PM, Janis Johnson wrote: > > I've been trying to find a place in the C++ front end where I can > > replace all references to the class type to the scalar types, but > > haven't yet found it. Any suggestions? > > cp_genericize? > > Though I'm not sure what to do about global variables... That's where I've been trying, but it doesn't touch all references to types. Is there a way to march through all of the nodes in a tree looking for types? Janis
Re: new libstdc++-v3 decimal failures
On Tue, 2009-10-06 at 09:04 -0400, Jack Howarth wrote:
> Janis,
>    We are seeing failures of the new decimal testcases on x86_64-apple-darwin10
> which you committed into the libstdc++-v3 testsuite...
>
> FAIL: decimal/binary-arith.cc (test for excess errors)
> WARNING: decimal/binary-arith.cc compilation failed to produce executable
>
> Are these tests entirely glibc-centric and shouldn't they be disabled for
> darwin?

Each test contains

// { dg-require-effective-target dfp }

which checks that the compiler supports modes SD, DD, and TD, which
in turn are supported if ENABLE_DECIMAL_FLOAT is defined within the
compiler.  That should not be defined for darwin; I'll take a look.

Janis
Re: new libstdc++-v3 decimal failures
On Tue, 2009-10-06 at 09:10 -0700, Janis Johnson wrote:
> On Tue, 2009-10-06 at 09:04 -0400, Jack Howarth wrote:
> > Janis,
> >    We are seeing failures of the new decimal testcases on x86_64-apple-darwin10
> > which you committed into the libstdc++-v3 testsuite...
> >
> > FAIL: decimal/binary-arith.cc (test for excess errors)
> > WARNING: decimal/binary-arith.cc compilation failed to produce executable
> >
> > Are these tests entirely glibc-centric and shouldn't they be disabled for
> > darwin?
>
> Each test contains
>
> // { dg-require-effective-target dfp }
>
> which checks that the compiler supports modes SD, DD, and TD, which
> in turn are supported if ENABLE_DECIMAL_FLOAT is defined within the
> compiler.  That should not be defined for darwin; I'll take a look.

I built a cross cc1plus for x86_64-apple-darwin10 and got the behavior
I expected.  From $objdir/gcc:

elm3b149% fgrep -l ENABLE_DECIMAL_FLOAT *.h
auto-host.h:#define ENABLE_DECIMAL_FLOAT 0

elm3b149% echo "float x __attribute__((mode(DD)));" > x.c
elm3b149% ./cc1plus -quiet x.c
x.c:1:33: error: unable to emulate ‘DD’

Please try that little test with your cc1plus to see if the problem
is with your compiler not rejecting DD mode, or with the test
framework not handling it correctly.

Janis
Re: new libstdc++-v3 decimal failures
On Tue, 2009-10-06 at 18:19 -0400, Jack Howarth wrote: > On Tue, Oct 06, 2009 at 09:44:42AM -0700, Janis Johnson wrote: > > On Tue, 2009-10-06 at 09:10 -0700, Janis Johnson wrote: > > > On Tue, 2009-10-06 at 09:04 -0400, Jack Howarth wrote: > > > > Janis, > > > >We are seeing failures of the new decimal testcases on > > > > x86_64-apple-darwin10 > > > > which you committed into the libstdc++-v3 testsuite... > > > > > > > > FAIL: decimal/binary-arith.cc (test for excess errors) > > > > WARNING: decimal/binary-arith.cc compilation failed to produce > > > > executable > > > > > > > > > > > Are these tests entirely glibc-centric and shouldn't they be disabled > > > > for > > > > darwin? > > > > > > Each test contains > > > > > > // { dg-require-effective-target-dfp } > > > > > > which checks that the compiler supports modes SD, DD, and TD, which > > > in turn are supported if ENABLE_DECIMAL_FLOAT is defined within the > > > compiler. That should not be defined for darwin; I'll take a look. > > > > I built a cross cc1plus for x86_64-apple-darwin10 and got the behavior > > I expected. From $objdir/gcc: > > > > elm3b149% fgrep -l ENABLE_DECIMAL_FLOAT *.h > > auto-host.h:#define ENABLE_DECIMAL_FLOAT 0 > > > > elm3b149% echo "float x __attribute__((mode(DD)));" > x.c > > elm3b149% ./cc1plus -quiet x.c > > x.c:1:33: error: unable to emulate ‘DD’ > > > > Please try that little test with your cc1plus to see if the problem > > is with your compiler not rejecting DD mode, or with the test > > framework not handling it correctly. > > > > Janis > > Janis, >I find that ENABLE_DECIMAL_FLOAT is set to 0 on x86_64-apple-darwin10... > > [MacPro:gcc45-4.4.999-20091005/darwin_objdir/gcc] root# grep > ENABLE_DECIMAL_FLOAT auto-host.h > #define ENABLE_DECIMAL_FLOAT 0 > > and the test code fails to compile... 
> > [MacPro:gcc45-4.4.999-20091005/darwin_objdir/gcc] root# echo "float x > __attribute__((mode(DD)));" > x.c > [MacPro:gcc45-4.4.999-20091005/darwin_objdir/gcc] root# ./cc1plus -quiet x.c > x.c:1:33: error: unable to emulate ‘DD’ > > However, the testsuite failures still occurs as follows... > > Executing on host: > /sw/src/fink.build/gcc45-4.4.999-20091005/darwin_objdir/./gcc/g++ > -shared-libgcc > -B/sw/src/fink.build/gcc45-4.4.999-20091005/darwin_objdir/./gcc -nostdinc++ > -L/sw/src/fink.build/gcc45-4.4.999-20091005/darwin_objdir/x86_64-apple-darwin10.0.0/libstdc++-v3/src > > -L/sw/src/fink.build/gcc45-4.4.999-20091005/darwin_objdir/x86_64-apple-darwin10.0.0/libstdc++-v3/src/.libs > -B/sw/lib/gcc4.5/x86_64-apple-darwin10.0.0/bin/ > -B/sw/lib/gcc4.5/x86_64-apple-darwin10.0.0/lib/ -isystem > /sw/lib/gcc4.5/x86_64-apple-darwin10.0.0/include -isystem > /sw/lib/gcc4.5/x86_64-apple-darwin10.0.0/sys-include -g -O2 -D_GLIBCXX_ASSERT > -fmessage-length=0 -ffunction-sections -fdata-sections -g -O2 -g -O2 > -DLOCALEDIR="." 
-nostdinc++ > -I/sw/src/fink.build/gcc45-4.4.999-20091005/darwin_objdir/x86_64-apple-darwin10.0.0/libstdc++-v3/include/x86_64-apple-darwin10.0.0 > > -I/sw/src/fink.build/gcc45-4.4.999-20091005/darwin_objdir/x86_64-apple-darwin10.0.0/libstdc++-v3/include > > -I/sw/src/fink.build/gcc45-4.4.999-20091005/gcc-4.5-20091005/libstdc++-v3/libsupc++ > > -I/sw/src/fink.build/gcc45-4.4.999-20091005/gcc-4.5-20091005/libstdc++-v3/include/backward > > -I/sw/src/fink.build/gcc45-4.4.999-20091005/gcc-4.5-20091005/libstdc++-v3/testsuite/util > > /sw/src/fink.build/gcc45-4.4.999-20091005/gcc-4.5-20091005/libstdc++-v3/testsuite/decimal/binary-arith.cc > -include bits/stdc++.h ./libtestc++.a -L/sw/lib -liconv -lm -o > ./binary-arith.exe(timeout = 600) > In file included from > /sw/src/fink.build/gcc45-4.4.999-20091005/gcc-4.5-20091005/libstdc++-v3/testsuite/decimal/binary-arith.cc:22:0: > /sw/src/fink.build/gcc45-4.4.999-20091005/darwin_objdir/x86_64-apple-darwin10.0.0/libstdc++-v3/include/decimal/decimal:39:2: > error: #error This file requires compiler and library support for ISO/IEC TR > 24733 that is currently not available. > /sw/src/fink.build/gcc45-4.4.999-20091005/darwin_objdir/x86_64-apple-darwin10.0.0/libstdc++-v3/include/decimal/decimal:228:56: > error: unable to emulate 'SD' > /sw/src/fink.build/gcc45-4.4.999-20091005/darwin_objdir/x86_64-apple-darwin10.0.0/libstdc++-v3/include/decimal/decimal:249:5: > error: > 'std::decimal::decimal32::decimal32(std::decimal::decimal32::__decfloat32)' > cannot be overloaded > /sw/src/fink.build/gcc45-4.4.999-20091005/darwin_objdir/x86_64-apple-darwin10.0.0/libstdc++-v3/include/decimal/decimal:236:14: > error: with 'std::decimal::decimal32::decimal32(float)' > > etc...for about a hundred errors. Doesn't this imply that the dejagnu test > harness isn't properly recognizing the absence of > the decimal support? Oh, maybe the libstdc++ tests don't support dg-require-effective-target. Janis