Re: GCC 4.1: Buildable on GHz machines only?
Richard Henderson writes:
> On Fri, Apr 29, 2005 at 01:30:13PM -0400, Ian Lance Taylor wrote:
> > I don't know of a way to tell libtool to not do duplicate compiles.
> > You can use -prefer-pic, but at least from looking at the script it
> > will still compile twice, albeit with -fPIC both times.
>
> Incidentally, libtool does not compile things twice when you use
> convenience libraries. And the entirety of libgcj is built via
> a convenience library these days.
>
> So we are not, in fact, building everything twice.

You know, I didn't even think to check, and you're absolutely right. OK, so the low-hanging fruit that remains is the libtool script and the linker. In the latter case, it seems that the big link causes severe problems with small-memory systems.

Andrew.
Re: GCC 4.1: Buildable on GHz machines only?
Matt Thomas writes:
> Joe Buck wrote:
> > I think you need to talk to the binutils people. It should be possible
> > to make ar and ld more memory-efficient.
>
> Even though systems may be demand paged, having super large
> libraries that consume lots of address space can be a problem.
>
> I'd like to see libjava split into multiple shared libraries. In C,
> we have libc, libm, libpthread, etc. In X11, there's X11, Xt, etc.
> So why does java have everything in one shared library? Could the
> swing stuff be moved to its own? Are there other logical
> divisions?

It might be nice, certainly. However, there are some surprising dependencies between parts of the Java library, and these would cause separate shared libraries to depend on each other, negating most of the advantage of separation. We are in the process of rewriting the Java ABI so that symbol resolution in libraries is done lazily rather than eagerly. This will help. Even so, I would prefer to divide libjava -- if it is to be divided -- on a logical basis rather than simply in order to make libraries smaller.

Andrew.
Re: GCC 4.0, Fast Math, and Acovea
Uros Bizjak wrote:
> Hello Scott!
>
> > Specifically, the -funsafe-math-optimizations flag doesn't work
> > correctly on AMD64 because the default on that platform is
> > -mfpmath=sse. Without specifying -mfpmath=387,
> > -funsafe-math-optimizations does not generate inline processor
> > instructions for most floating-point functions. Let's put it another
> > way: Manually selecting -mfpmath=387 cuts run-times by 50% for
> > programs dependent on functions like sin() and sqrt(), as compared to
> > -funsafe-math-optimizations by itself.
>
> It was found that moving data from SSE registers to X87 registers (and
> back) only to call an x87 builtin degrades performance. Because of
> this, x87 builtins are disabled for -mfpmath=sse and a normal libcall
> is issued for sin(), etc. functions. If someone wants to use x87
> builtins, then _all_ math operations should be done in x87 registers
> to avoid costly SSE->x87 moves.
>
> BTW: Does adding -D__NO_MATH_INLINES improve performance for
> -mfpmath=sse? That would be PR19602.
>
> Uros.

Well, on every function-intensive (i.e., using lots of sqrt(), sin(), and such) program I've tried, -funsafe-math-optimizations provides no significant benefit on the Opteron *unless* it is combined with -mfpmath=387. I note that Intel and other compilers do not seem to have this problem. Now, I'm more than happy to live with the situation, since it has a simple work-around -- but I think it at least needs to be made clear in the GCC documentation that this situation exists. Otherwise, GCC 4.0 looks *terrible* for many mathematical tasks on AMD64. And right now, AMD64 is a hot property in mathematical circles, especially in clustered supercomputing.

..Scott
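A rough sketch of the kind of function-intensive benchmark being described (hypothetical code, not from the thread; the loop bound and constants are arbitrary). Building it once with -O2 -funsafe-math-optimizations and once with -mfpmath=387 added reproduces the comparison:

  /* bench.c -- hypothetical sin()/sqrt()-heavy micro-benchmark.
     Build: gcc -O2 -funsafe-math-optimizations [-mfpmath=387] bench.c -lm  */
  #include <math.h>
  #include <stdio.h>

  int main (void)
  {
    double acc = 0.0;
    int i;

    for (i = 0; i < 50000000; i++)
      acc += sin (i * 1e-3) + sqrt ((double) i);

    /* Print the result so the loop cannot be optimized away.  */
    printf ("%f\n", acc);
    return 0;
  }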
Re: GCC 4.0, Fast Math, and Acovea
On 4/29/05, Uros Bizjak <[EMAIL PROTECTED]> wrote:
> Hello Scott!

Hello Scott & Uros,

> > Specifically, the -funsafe-math-optimizations flag doesn't work
> > correctly on AMD64 because the default on that platform is
> > -mfpmath=sse. Without specifying -mfpmath=387,
> > -funsafe-math-optimizations does not generate inline processor
> > instructions for most floating-point functions.
[snip]
> It was found that moving data from SSE registers to X87 registers (and
> back) only to call an x87 builtin degrades performance. Because of this,
> x87 builtins are disabled for -mfpmath=sse and a normal libcall is
> issued for sin(), etc. functions. If someone wants to use x87 builtins,
> then _all_ math operations should be done in x87 registers to avoid
> costly SSE->x87 moves.

Shameless plug with my own performance analysis regarding SSE on x86-64. I've ported my coherent raytracer, which mostly uses intrinsics in the hot path (and no transcendentals). While gcc 4.x compiled binaries are ~5% slower than those compiled with icc 8.1 on ia32 (best case), it's the other way around on x86-64 if not more (on my Opteron with icc 8.1 and beta 9.0). Obviously there's much less pressure on the (cough weak cough) register allocator, and in the end the generated code is way leaner. My only gripe with -ffast-math is that it's the only way to enable some optimizations while making NaNs verboten; couple that with the lack of cross-unit IPO and you're stuck with a kind of nasty "global" switch (unless you have room for some function calls). The NaN side of that trade-off is sketched below.
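To make the NaN caveat concrete: -ffast-math implies -ffinite-math-only, under which GCC may assume NaNs never occur, so explicit NaN tests can be folded away. A minimal sketch (not code from the original mail):

  /* Compiled with -ffast-math, GCC may assume x != x is always false
     and fold this NaN check to "return 0".  */
  int has_nan (double x)
  {
    return x != x;
  }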
PPC 64bit library status?
For what are probably misguided reasons I am trying to build Apple-style compilers which include gfortran, libffi and libobjc. This was not a particular problem with the latest apple-ppc-branch sources (apple-ppc-5013) until I got MacOS X 10.4 Tiger yesterday. On Darwin8/MacOS X 10.4 the Apple gcc build includes both 32 and 64 bit versions of libgcc, libiberty and libstdc++. When I alter the build_gcc script and the configure files to build gfortran, libffi and libobjc, the build tries to make 64 bit versions of libgfortran, libgfortranbegin, libffi and libobjc-gnu. There are a number of problems:

1. Since I am using a PPC7455-based computer, 64 bit executables won't run and the 64 bit libraries are effectively cross compilations. So the configure scripts need the same APPLE LOCAL mods used in libstdc++ to avoid testing in the configure script whether the compiler can build executables (with the -m64 flag the executables are built, but they won't run). This would not be an issue on a PPC970 (G5) cpu.

2. libgfortran.h line 63 defines int8_t. This is already defined in /usr/lib/ppc/types.h. So I think the libgfortran.h define needs to be conditional on _INT8_H; see the sketch below.

Even if the libraries build, will libffi or libobjc work on 64 bit PPC? Since I don't have access to a 64 bit PPC machine I cannot test this. There is an even murkier question about what happens with Darwin 8 on x86.

Bill Northcott
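A sketch of the guard suggested in point 2 (the _INT8_H macro name is the one proposed above; the guard actually used by the system header may differ, e.g. _INT8_T):

  #ifndef _INT8_H
  #define _INT8_H
  typedef signed char int8_t;
  #endif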
Re: libjava/3.4.4 problem
Andrew Pinski dixit:
>> Does anyone have an idea where to look?
>
> This is a bug in your config, you forgot to define NO_IMPLICIT_EXTERN_C.

Thanks a lot, I will try that after I update to 20050429.

bye, //mirabile
Re: Backporting to 4_0 the latest friend bits
Joe Buck wrote:
> I don't quite understand your answer. It seems that (a) is the important
> issue; if the programs are valid, they compiled before, and they worked
> before, then it seems there really is a regression, even if we can argue
> that we were "right by accident" in the past.

This is a clarification of the code validity issue, using sample code from PR21235:

  class KopeteAwayDialog
  {
    KopeteAwayDialog();
  };

  namespace Kopete
  {
    class Away
    {
      friend class KopeteAwayDialog;
    };
  }

  using namespace Kopete;

  KopeteAwayDialog::KopeteAwayDialog() {}

Sure, this code compiles with 4.1 and 3.4 but doesn't compile with 4.0. Although the code is valid, I'd bet it doesn't work the way the programmer of the above code (or the other 99% who don't track the standard closely) would expect. In the above code, according to 4.0 and 4.1, there are actually two 'KopeteAwayDialog' classes: 'Kopete::KopeteAwayDialog' (declared by the friend declaration) and '::KopeteAwayDialog'. Only the former is a friend of class 'Away'. The latter, with class and constructor defined by the programmer, still cannot access any private/protected member of 'Away'. The simple fix is changing the friend declaration to:

  friend class ::KopeteAwayDialog;

which is what the programmer intended, and should work on all versions of GCC, new or old.

--Kriang
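Spelling the fix out as a self-contained example (a hypothetical elaboration of the PR21235 sample, not code from the PR): with the qualified friend declaration, the global class really does gain access to the private members of 'Away'.

  // The global class must be declared before the qualified friend
  // declaration can refer to it.
  class KopeteAwayDialog;

  namespace Kopete
  {
    class Away
    {
      friend class ::KopeteAwayDialog;  // explicitly the global class
      int secret;
    };
  }

  class KopeteAwayDialog
  {
  public:
    int peek (Kopete::Away &a) { return a.secret; }  // OK: we are a friend
  };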
Re: PPC 64bit library status?
Bill Northcott <[EMAIL PROTECTED]> writes:

> Even if the libraries build, will libffi or libobjc work on 64 bit
> PPC? Since I don't have access to a 64 bit PPC machine I cannot
> test this.

They appear to work fine on powerpc64-linux.

Andreas.

--
Andreas Schwab, SuSE Labs, [EMAIL PROTECTED]
SuSE Linux Products GmbH, Maxfeldstraße 5, 90409 Nürnberg, Germany
Key fingerprint = 58CA 54C7 6D53 942B 1756 01D3 44D5 214B 8276 4ED5
"And now for something completely different."
Re: PPC 64bit library status?
On Apr 30, 2005, at 9:46 AM, Andreas Schwab wrote:
> Bill Northcott <[EMAIL PROTECTED]> writes:
> > Even if the libraries build, will libffi or libobjc work on 64 bit
> > PPC? Since I don't have access to a 64 bit PPC machine I cannot
> > test this.
>
> They appear to work fine on powerpc64-linux.

He is talking about ppc64-darwin, which has almost no support right now. There was some talk about this earlier this year, and then the support for building the 64 bit libraries on darwin was turned off for both the 4.0.0 release and on the mainline.

Note, why again are you using Apple's branch? It does not get all the fixes which the 4.0 release branch will get.

-- Pinski
Re: GCC 4.0, Fast Math, and Acovea
On Fri, 29 Apr 2005, Scott Robert Ladd wrote:
> I've been down (due to illness) for a couple of months, so I don't know
> if folk here are aware of something I discovered about GCC 4.0 on AMD64:
> -ffast-math is "broken" on AMD64/x86_64.

Hi Scott,

I was wondering if you could do some investigating for me... The change in GCC was made following the observation that, given operands in SSE registers, it was actually faster on x86_64 boxes to call the optimized SSE implementations of intrinsics in libm than to shuffle the SSE registers to x87 registers (via memory), invoke the x87 intrinsic, and then shuffle the result back from the x87 registers to SSE registers (again via memory). See the threads at

http://gcc.gnu.org/ml/gcc-patches/2004-11/msg01877.html
http://gcc.gnu.org/ml/gcc-patches/2004-11/msg02119.html

(Your benchmarking with acovea 4 is even quoted in http://gcc.gnu.org/ml/gcc-patches/2004-11/msg02154.html)

Not only are the recent libm implementations faster than the x87 intrinsics, but they are also more accurate (in terms of ulp). This helps explain why tbp reported that gcc is faster than icc 8.1 on Opteron, but slower than it on ia32 (contradicting your observations).

Of course, the decision to disable x87 intrinsics with (the default) -mfpmath=sse on x86_64 is predicated on a number of requirements. These include that the mathematical intrinsics are implemented in libm using fast SSE implementations, with arguments and results being passed and returned in SSE registers (the TARGET64 ABI). If this isn't the case, then you'll see the slowdowns you're seeing. Could you investigate if this is the case? For example, which OS and version are you using? And what code is being generated for:

  double test(double a, double b)
  {
    return sin(a*b);
  }

One known source of problems is old system headers for <math.h>, where even on x86_64 targets and various -mfpmath=foo options the header files insist on using x87 intrinsics, forcing the compiler to shuffle registers by default. As pointed out previously, -D__NO_MATH_INLINES should cure this.

Thanks in advance,
Roger
--
Roger Sayle, E-mail: [EMAIL PROTECTED]
OpenEye Scientific Software, WWW: http://www.eyesopen.com/
Suite 1107, 3600 Cerrillos Road, Tel: (+1) 505-473-7385
Santa Fe, New Mexico, 87507. Fax: (+1) 505-473-0833
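One way to answer Roger's question is to compile the test function to assembly and look for a libcall versus an inline x87 instruction. A sketch of the check (the exact flags shown are assumptions, not commands from the thread):

  /* sincheck.c -- compile with, e.g.:
       gcc -O2 -ffast-math -S sincheck.c -o sincheck.s
     Under the default -mfpmath=sse on x86_64 the output should contain
     a "call sin" libcall; with -mfpmath=387, or with old <math.h>
     inlines, you would instead expect an inline fsin plus SSE<->x87
     register shuffling through memory.  */
  #include <math.h>

  double test (double a, double b)
  {
    return sin (a * b);
  }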
GCC-3.3.6 release status
Hi,

GCC-3.3.6 appears in pretty good shape. There were 36 PRs open against it (as of this morning, 6am, GMT-05). I went through all of them, and they appeared to be either very minor, or bugs impossible to fix in 3.3.6 without major restructuring (typically, they were bugs fixed in 3.4.x or 4.0.x). Two of them were pretty simple to handle. Others were closed as "fixed in 3.4.x, won't fix in 3.3.6". Except this:

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19579

for which I would appreciate input from both the author and Roger.

The pre-release process is running, and a candidate release will be available, within hours, at the usual location for testing. I do not anticipate other pre-releases.

Thanks

-- Gaby
Re: GCC-3.3.6 release status
On 30 Apr 2005, Gabriel Dos Reis wrote:
> There were 36 PRs open against it (as of this morning, 6am, GMT-05).
> I went through all of them, and they appeared to be either very minor,
> or bugs impossible to fix in 3.3.6 without major restructuring
> (typically, they were bugs fixed in 3.4.x or 4.0.x). Two of them were
> pretty simple to handle. Others were closed as "fixed in 3.4.x, won't
> fix in 3.3.6". Except this:
> http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19579
> for which I would appreciate input from both the author and Roger.

My apologies for the delay. Yes, this should be safe for the 3.3 release branch, as mentioned in the original approval: http://gcc.gnu.org/ml/gcc-patches/2005-01/msg01714.html. The cause of my delay was investigating whether the code on the gcc-3_3-branch was significantly different, or whether there was some other reason why this fix hadn't yet been committed there. If I get a few hours, and nobody beats me to it, I'll try bootstrapping and regression-testing a backport of Jakub's fix myself.

Roger
--
Re: building gcc 4.0.0 on Solaris
> 1) Using Sun's as/ld. I build gcc this way:
> cd /tmp
> gtar xjf ~/gcc-4.0.0.tar.bz2
> mkdir gcc
> cd gcc
> setenv CONFIG_SHELL /bin/ksh
> /tmp/gcc-4.0.0/configure \
>   --prefix=/usr/local/gcc-4.0.0
> gmake bootstrap
>
> The build fails with the following message:
> ld: fatal: relocation error: R_SPARC_DISP32:
> file .libs/libstdc++.lax/libsupc++convenience.a/vterminate.o:
> symbol : offset 0xfccd33ad is non-aligned

Probably a Sun 'as' bug; a similar problem was reported on Solaris 7: http://gcc.gnu.org/install/specific.html

GCC 4.0.0 is known to bootstrap on Solaris 8 with:
as: Sun WorkShop 6 03/08/05 Compiler Common 6.0 Patch 114802-01
ld: Software Generation Utilities - Solaris Link Editors: 5.8-1.285

> Would it be possible to document this requirement in the platform pages?
> http://gcc.gnu.org/install/specific.html#sparc-sun-solaris2

It is already documented in *-*-solaris2* that binutils 2.15 is broken on the platform.

> Speaking of the sparc-sun-solaris2* platform, I read:
> GCC 3.4 changed the default debugging format from STABS
> to DWARF-2 for 32-bit code on Solaris 7 and later. If you
> are using the Sun assembler, this change apparently runs
> afoul of Sun bug 4910101, for which (as of 2004-05-23)
> there is no fix. A symptom of the problem is that you
> cannot compile C++ programs like groff 1.19.1 without
> getting messages similar to the following:
> ld: warning: relocation error: R_SPARC_UA32: ...
> external symbolic relocation against non-allocatable
> section .debug_info cannot be processed at runtime:
> relocation ignored.
> To work around this problem, compile with -gstabs+
> instead of plain -g
>
> I had a look at Sun Online Support and found:
> Bug ID: 4910101
> Synopsis: fbe needs a way to reference section labels
> Category: compiler
> Subcategory: assembler-x86
> Description:
> The bug was first found with the -misalign flag, but the
> real bug is because the code is being passed through the
> assembler.
> Integrated in releases: k2_dev
> Summary: fbe needs a way to reference section labels
>
> Therefore:
> 1) This seems to be x86-specific, so I would suggest moving this
> paragraph from sparc-sun-solaris2* to i?86-*-solaris2*

The problem is present on SPARC, so the paragraph can't be moved. Not sure whether the bug ID is correct, though.

-- Eric Botcazou
Re: GCC 4.1: Buildable on GHz machines only?
Lars Segerlund <[EMAIL PROTECTED]> wrote:
> I have to agree with Richard's assessment, gcc is currently on the
> verge of being unusable in many instances.
> If you have a lot of software to build and have to do complete
> rebuilds it's painful, the binutils guys have a 3x speedup patch
> coming up, but every time there is a speedup it gets eaten up.

This is simply not true. Most of the benchmarks we have seen posted to the gcc mailing lists show that GCC4 can be much faster than GCC3 (especially on C++ code). There are of course also regressions, and we are not trying to hide them. Did you *ever* provide preprocessed source code which shows a compile-time regression? If not, please do NOT hesitate! Post it in Bugzilla. People who did so in the past are mostly satisfied by how GCC developers improved things for them.

Otherwise, I do not want to sound rude, but your posts seem more like trolling to me. I am *ready* to admit that GCC4 is much slower than GCC3 or GCC2, but I would like to do so in front of real, measurable data, not just random complaints and retold legends. Thus, I am really awaiting your preprocessed testcases which prove your points. Please.

Giovanni Bajo
Re: GCC 4.1: Buildable on GHz machines only?
Jason Thorpe <[EMAIL PROTECTED]> wrote:
>> Maybe the older platform should stick to the older compiler then,
>> if it is too slow to support the kind of compiler that modern
>> systems need.
>
> This is an unreasonable request. Consider NetBSD, which runs on new
> and old hardware. The OS continues to evolve, and that often
> requires adopting newer compilers (so e.g. other language features
> can be used in the base OS).
>
> The GCC performance issue is not new. It seems to come up every so
> often... last time I recall a discussion on the topic, it was thought
> that the new memory allocator (needed for pch) was causing cache
> thrashing (what was the resolution of that discussion, anyway?)

There is no outcome, because it is just the Nth legend. Like people saying "I believe GCC is slow because of pointer indirection" or stuff like that. Please, provide preprocessed sources and we *will* analyze them. Just file a bug report in Bugzilla; it takes 10 minutes of your time.

Giovanni Bajo
Re: GCC 4.1: Buildable on GHz machines only?
Richard Earnshaw <[EMAIL PROTECTED]> wrote:
>> The GCC build times are not unreasonable compared to other,
>> commercial compilers with similar functionality. And the GCC
>> developers have plans to address inefficiencies -- GCC 4.0 often is
>> faster than GCC 3.4.
>
> If you are going to make sweeping statements like this you need to
> back them up with hard data.

It is surely faster for C++ code, thanks to the work done by CodeSourcery on the upfront lexing and the name lookup. "Faster" here means 20-30% faster, not 1% or something like that. There are many posts on gcc@ that show this; I can dig them up in the archive for you if you want, but I'm sure you can use google as well as I do. Two of them are very recent (see Karel Gardas' post on this thread, and the recent benchmark posted by Rene Rebe).

Giovanni Bajo
Re: Built gcc 4.0.0, without C++ support
> I configured/made/installed gcc 4.0.0 partially on a Solaris host. I
> could not build with C++ support, because ld (GNU ld, that is) choked
> (dumped core, signal 11, segmentation violation) on abi_check (see
> below).
> When using the Sun-supplied as and ld, ld chokes on alignment errors
> during bootstrap.

Did you read http://gcc.gnu.org/install/specific.html? There are known problems in some versions of the GNU tools and some versions of the Sun tools. Please provide more detailed info.

-- Eric Botcazou
Re: GCC 4.1: Buildable on GHz machines only?
Ian Lance Taylor wrote:
>> Except it's not just bootstrapping GCC. It's everything. When the
>> NetBSD Project switched from 2.95.3 to 3.3, we had a noticeable
>> increase in time to do the "daily" builds because the 3.3 compiler
>> was so much slower at compiling the same OS source code. And we're
>> talking almost entirely C code, here.
>
> Well, there are two different issues. Matt was originally talking
> about bootstrap time, at least that is how I took it. You are talking
> about speed of compilation. The issues are not unrelated, but they
> are not the same.
>
> The gcc developers have done a lot of work on speeding up the compiler
> for 3.4 and 4.0, with some success. On many specific test cases, 4.0
> is faster than 3.3 and even 2.95. The way to help this process along
> is to report bugs at http://gcc.gnu.org/bugzilla.
>
> In particular, if you provide a set of preprocessed .i files, from,
> say, sys, libc, or libcrypto, whichever seems worst, and open a gcc PR
> about them, that would be a great baseline for measuring speed of
> compilation, in a way that particularly matters to NetBSD developers.

I would also like to note that I *myself* requested preprocessed source code from NetBSD developers at least 6 times in the past 2 years. I am sure Andrew Pinski did too, a comparable amount of times. These requests, as far as I can understand, were never answered. This also helped build up a stereotype of the average NetBSD developer as "just a GCC whine troll". I am sure this is *far* from true, but I would love to see NetBSD developers *collaborating* with us, especially since what we are asking (filing bug reports with preprocessed sources) cannot take more than 1-2 hours of their time.

Giovanni Bajo
PATCH: Speed up AR for ELF
We are calling _bfd_elf_get_sec_type_attr on sections from input files. It is not necessary at all. This patch should speed up AR for ELF.

H.J.
---
2005-04-30  H.J. Lu  <[EMAIL PROTECTED]>

	* elf.c (_bfd_elf_new_section_hook): Don't call
	_bfd_elf_get_sec_type_attr on sections from input files.

--- bfd/elf.c.speed	2005-04-29 23:30:28.0 -0700
+++ bfd/elf.c	2005-04-30 10:23:02.0 -0700
@@ -2251,12 +2251,18 @@ _bfd_elf_new_section_hook (bfd *abfd, as
       sec->used_by_bfd = sdata;
     }

-  elf_section_type (sec) = SHT_NULL;
-  ssect = _bfd_elf_get_sec_type_attr (abfd, sec->name);
-  if (ssect != NULL)
-    {
-      elf_section_type (sec) = ssect->type;
-      elf_section_flags (sec) = ssect->attr;
+  /* When we read a file, we don't need the section type and flags.
+     They will be overridden in _bfd_elf_make_section_from_shdr
+     anyway.  */
+  if (abfd->direction != read_direction)
+    {
+      elf_section_type (sec) = SHT_NULL;
+      ssect = _bfd_elf_get_sec_type_attr (abfd, sec->name);
+      if (ssect != NULL)
+	{
+	  elf_section_type (sec) = ssect->type;
+	  elf_section_flags (sec) = ssect->attr;
+	}
     }

   /* Indicate whether or not this section should use RELA relocations.  */
PATCH: Speed up ELF section merge
The default BFD hash table size is too small for ELF section merge. This patch speeds up sec_merge_hash_lookup by 50% on average.

H.J.
---
2005-04-30  H.J. Lu  <[EMAIL PROTECTED]>

	* hash.c (hash_size_primes): Add 65537.

	* merge.c (sec_merge_init): Call bfd_hash_set_default_size to
	set hash table size to 16699.

--- bfd/hash.c.hash	2005-03-22 17:25:46.0 -0800
+++ bfd/hash.c	2005-04-30 12:25:59.0 -0700
@@ -492,7 +492,7 @@ bfd_hash_set_default_size (bfd_size_type
   /* Extend this prime list if you want more granularity of hash
      table size.  */
   static const bfd_size_type hash_size_primes[] =
     {
-      1021, 4051, 8599, 16699
+      1021, 4051, 8599, 16699, 65537
     };
   size_t index;
--- bfd/merge.c.hash	2005-04-14 10:51:43.0 -0700
+++ bfd/merge.c	2005-04-30 12:31:22.0 -0700
@@ -241,6 +241,8 @@ sec_merge_init (unsigned int entsize, bf
   if (table == NULL)
     return NULL;

+  bfd_hash_set_default_size (16699);
+
   if (! bfd_hash_table_init (&table->table, sec_merge_hash_newfunc))
     {
       free (table);
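For context, bfd_hash_set_default_size rounds the requested size up through the prime list, so the call with 16699 selects the 16699-bucket table and larger requests can now reach the new 65537 entry. A simplified sketch of that selection logic (paraphrased from bfd/hash.c, not a verbatim copy):

  #include <stddef.h>

  /* Pick the first prime in the table at least as large as the
     requested size, falling back to the largest one.  */
  static unsigned long
  pick_default_hash_size (unsigned long requested)
  {
    static const unsigned long primes[] = { 1021, 4051, 8599, 16699, 65537 };
    size_t i;

    for (i = 0; i < sizeof (primes) / sizeof (primes[0]) - 1; i++)
      if (requested <= primes[i])
        break;
    return primes[i];
  }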
Slow _bfd_strip_section_from_output
_bfd_strip_section_from_output is known to be very slow: http://sourceware.org/ml/binutils/2005-03/msg00826.html It scans every section in all input files to find a match. We do have link_order_head/link_order_tail. But they aren't available when _bfd_strip_section_from_output is called. Can we build them when we assign the output sections and fix them up before we need them for output? H.J.
GCC-3.3.6 prerelease for testing
Hi,

The prerelease tarballs for GCC-3.3.6 are available at

ftp://gcc.gnu.org/pub/gcc/prerelease-3.3.6-20050430/

for testing. Please download, test, and file bugzilla PRs (putting me in the CC: field). This branch has been stable for a long time. GCC-3.3.6 will be the last of the 3.3.x series.

Thanks,

-- Gabriel Dos Reis
[EMAIL PROTECTED]
Texas A&M University -- Department of Computer Science
301, Bright Building -- College Station, TX 77843-3112
Re: GCC 4.1: Buildable on GHz machines only?
On 30 Apr 2005, Giovanni Bajo uttered the following:
> There is no outcome, because it is just the Nth legend. Like people
> saying "I believe GCC is slow because of pointer indirection" or stuff
> like that.

Don't we have actual *evidence* that it's slow because of cache thrashing?

--
`End users are just test loads for verifying that the system works, kind of like resistors in an electrical circuit.' - Kaz Kylheku in c.o.l.d.s
FAIL: ext/stdio_sync_filebuf/wchar_t/12077.cc
Hi,

it looks like in mainline this test recently started failing at compile time on some machines. I'm puzzled, unfortunately cannot reproduce the problem, and would be grateful if someone could send me (either privately or in public) more information (e.g., an extract from libstdc++.log, at least).

Thanks in advance,
Paolo.
Re: ext/stdio_sync_filebuf/wchar_t/12077.cc
> it looks like in mainline this test recently started failing at compile
> time on some machines. I'm puzzled, unfortunately cannot reproduce the
> problem, and would be grateful if someone could send me (either
> privately or in public) more information (e.g., an extract from
> libstdc++.log, at least).

On FreeBSD 5.3 this testcase failed with gcc version 4.1.0 20050429 (extracted log attached).

Vladimir

[attachment: 12077.log]
Re: ext/stdio_sync_filebuf/wchar_t/12077.cc
Vladimir A. Merzliakov wrote:
> On FreeBSD 5.3 this testcase failed with gcc version 4.1.0 20050429
> (extracted log attached)

Ah! Thanks a lot, Vladimir: now it's also obvious why I'm not seeing it: often, while working hard on the library, I don't update the compiler proper every day. I'm not sure whether we should file a specific PR for this ICE: people working on compiler patches are supposed to regtest the library too... ;)

Thanks again,
Paolo.
gcc-4.0-20050430 is now available
Snapshot gcc-4.0-20050430 is now available on

ftp://gcc.gnu.org/pub/gcc/snapshots/4.0-20050430/

and on various mirrors, see http://gcc.gnu.org/mirrors.html for details. This snapshot has been generated from the GCC 4.0 CVS branch with the following options: -rgcc-ss-4_0-20050430

You'll find:

gcc-4.0-20050430.tar.bz2            Complete GCC (includes all of below)
gcc-core-4.0-20050430.tar.bz2       C front end and core compiler
gcc-ada-4.0-20050430.tar.bz2        Ada front end and runtime
gcc-fortran-4.0-20050430.tar.bz2    Fortran front end and runtime
gcc-g++-4.0-20050430.tar.bz2        C++ front end and runtime
gcc-java-4.0-20050430.tar.bz2       Java front end and runtime
gcc-objc-4.0-20050430.tar.bz2       Objective-C front end and runtime
gcc-testsuite-4.0-20050430.tar.bz2  The GCC testsuite

Diffs from 4.0-20050423 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-4.0 link is updated and a message is sent to the gcc list. Please do not use a snapshot before it has been announced that way.
Re: GCC 4.1: Buildable on GHz machines only?
On Apr 30, 2005, at 12:33 PM, Giovanni Bajo wrote:
> I would also like to note that I *myself* requested preprocessed
> source code from NetBSD developers at least 6 times in the past 2
> years. I am sure Andrew Pinski did too, a comparable amount of times.
> These requests, as far as I can understand, were never answered. This
> also helped build up a stereotype of the average NetBSD developer as
> "just a GCC whine troll".

While I have not had much time for quite a while to work on GCC myself, I am listed as the NetBSD maintainer... you can always drop me a note directly when this sort of thing happens.

-- thorpej
Re: PPC 64bit library status?
On 01/05/2005, at 1:11 AM, Andrew Pinski wrote:
>> Even if the libraries build, will libffi or libobjc work on 64 bit
>> PPC? Since I don't have access to a 64 bit PPC machine I cannot
>> test this.
>
> There was some talk about this earlier this year, and then the support
> for building the 64 bit libraries on darwin was turned off for both
> the 4.0.0 release and on the mainline.

That much was sort of obvious. However, if they are enabled in the build, libobjc and libgfortran do build. Are they likely to be functional? Libffi does not build using the -m64 flag on the compiler.

> Note, why again are you using Apple's branch? It does not get all the
> fixes which the 4.0 release branch will get.

Basically my logic is fairly simple. If I use Apple's OS, I feel happier using their compiler. While that creates some problems, it avoids others, like the restFP/saveFP issue. The code I am using was synced to the release gcc-4.0.0 on 20 April, so it is not that out of date.

Bill Northcott
Re: PPC 64bit library status?
On Apr 30, 2005, at 10:51 PM, Bill Northcott wrote:
> Basically my logic is fairly simple. If I use Apple's OS, I feel
> happier using their compiler. While that creates some problems, it
> avoids others, like the restFP/saveFP issue.

The restFP/saveFP issue has been resolved since:

2004-03-31  Andrew Pinski  <[EMAIL PROTECTED]>

	* config/rs6000/t-darwin (LIB2FUNCS_STATIC_EXTRA):
	Add darwin-fpsave.asm, darwin-vecsave.asm,
	and darwin-world.asm.
	(TARGET_LIBGCC2_CFLAGS): Add -Wa,-force_cpusubtype_ALL
	as the asm files contain altivec instructions.
	* config/rs6000/darwin-fpsave.asm: New file.
	* config/rs6000/darwin-vecsave.asm: New file.
	* config/rs6000/darwin-world.asm: New file.

-- Pinski
libjava build times
On Sat, Apr 30, 2005 at 10:33:43AM +0100, Andrew Haley wrote:
> OK, so the low-hanging fruit that remains is the libtool script and
> the linker. In the latter case, it seems that the big link causes
> severe problems with small-memory systems.

I did some experiments today just to see what kind of time it actually takes to compile the actual objects, and thus how much time is on the table to be retrieved from libtool.

The following was performed on a 2.3GHz G5 with 2G of ram. So I'm not swapping, and in fact everything can reside in cache, i.e. just about the ideal setup. There are in fact two cpus, but I'm not using the -j option to make at all.

I began by building the whole of libjava, and then using find to delete all of *.o *.lo *.a *.la. I then timed rebuilding the library:

2248.43user 661.42system 47:46.01elapsed 101%CPU (0major+47501310minor)

Next, I cannibalized the makefile in order to bypass libtool and invoke gcc directly. My solution does assume GNU ld and ar, but this is just a test after all.

-O2 -fPIC compile:
824.80user 86.88system 15:11.69elapsed 99%CPU (0major+7102491minor)

.so link + .a create:
10.45user 9.59system 0:19.97elapsed 100%CPU (0major+851815minor)

Now, unless I've done something drastically wrong, it appears as if we are spending 2/3 of our time in the libtool script. Test makefiles attached for the record.

r~

[attachment: test-make.tar.gz]