is this a good time to commit a patch on builtin_function?
I have an approved patch that factors code that is common to all builtin_function implementations (http://gcc.gnu.org/ml/gcc-patches/2006-03/msg00195.html, http://gcc.gnu.org/ml/gcc-patches/2006-06/msg01499.html). I have just updated and tested it. Is this a good time to commit? Best Regards, Rafael
bootstrap failure on arm
Do we support bootstrapping on ARM? I am trying to bootstrap inside a scratchbox and am currently getting the following error on trunk and the 4.2 branch:

../branch-4.2/configure --enable-checking --disable-threads --disable-shared --enable-languages=c --disable-libmudflap
arm-none-linux-gnueabi

/home/rafael/llvm/gcc/branch-4.2/gcc/config/arm/unwind-arm.c: In function 'unwind_phase2_forced':
/home/rafael/llvm/gcc/branch-4.2/gcc/config/arm/unwind-arm.c:587: error: could not split insn
(insn/f 212 211 213 /home/rafael/llvm/gcc/branch-4.2/gcc/config/arm/unwind-arm.c:509
    (set (reg/f:SI 13 sp)
        (plus:SI (reg/f:SI 13 sp)
            (const_int -616 [0xfd98]))) 4 {*arm_addsi3} (nil)
    (nil))
/home/rafael/llvm/gcc/branch-4.2/gcc/config/arm/unwind-arm.c:587: internal compiler error: in final_scan_insn, at final.c:2449

Best regards, Rafael
Re: bootstrap failure on arm
reduced test case:
--
typedef void (*personality_routine) (void *);

typedef struct
{
  unsigned vfp[63];
} phase1_vrs;

void __gnu_Unwind_RaiseException (unsigned *ucbp);

void
__gnu_Unwind_RaiseException (unsigned *ucbp)
{
  phase1_vrs saved_vrs;
  ((personality_routine) (ucbp[0])) ((void *) &saved_vrs);
}
--
Re: bootstrap failure on arm
Compiling with --disable-bootstrap and using the resulting compiler to bootstrap gcc solved the problem. Rafael
backporting arm eabi to 4.0
I am working on an ARM backend for LLVM. The problem is that llvm-gcc is currently based on gcc 4.0 and I would like to use the new EABI. I have started to back-port the ABI from 4.1 to 4.0. The first attempt was to just copy the gcc/config/arm directory and try to fix the build errors. This proved hard because the new files depend on new infrastructure. What I am trying right now is to build a list of patches that are in the 4.1 branch but not in 4.0, and then selectively move them to 4.0. Does anyone have a better idea or some suggestions on how to do this? Thanks, Rafael
Re: backporting arm eabi to 4.0
> I thought Chris was working on updating LLVM to gcc head.

It will be done, but it is not a priority and I need the ARM bits sooner. Anyway, he will have 5 or 6 fewer patches to port :-)

> Paul

Best Regards, Rafael
Re: About Gcc tree tutorials
Also, I referred to some tutorials and articles on the net about writing a gcc front end. Here they are:

1. http://en.wikibooks.org/wiki/GNU_C_Compiler_Internals/Print_version
2. http://www.faqs.org/docs/Linux-HOWTO/GCC-Frontend-HOWTO.html (old)
3. http://www.linuxjournal.com/article/7884 (overview)
4. http://www.eskimo.com/~johnnyb/computers/toy/cobol_14.html

A bit out of date, but may be useful: http://svn.gna.org/viewcvs/gsc/branches/hello-world/

Best Regards, Rafael
Re: Seeking advice on front ends
> Did I miss anything? What are the relative advantages of each solution?
> Do you think that I overlooked other options? Would using an existing
> virtual machine be a good option? Except for Nice, this option doesn't
> seem to be popular; there must be a catch.

You might want to have a look at the LLVM project. It has an intermediate representation that is a bit simpler than the one used by GCC and can be dumped to disk.

Rafael
Re: The tree API
On 6/16/05, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> Hello~ every one :)
>
> I'm a new guy in the gcc mailing list.
> I've been studying gcc for 2 months.
> I read "GNU compiler collection internals" (for GCC 3.5.0?),
> and I also trace the source code for target-mips.
> My problem is there are so many symbols/functions/APIs in gcc.
> Some are documented in the book but not all of them.
> All I can do is modify it, rebuild and see what happens.
> This approach is not effective.
> Is there a way to learn the gcc internal APIs "systematically"?

I think that I can help since I am having the same problem :)

The best reference that I can give you is the GCC Frontend HOWTO at http://www.tldp.org/HOWTO/GCC-Frontend-HOWTO.html. It is based on an old gcc, but most of the tree structure still applies.

Two colleagues and I are working on a minimal "hello world" front end for gcc. You can find an alpha version at http://ltc08.ic.unicamp.br/svn/scheme/branches/hello-world

I also recommend that you take a look at treelang. It is quite a bit simpler than the other frontends in gcc.

I hope this helps,
Rafael Ávila de Espíndola
Re: add sourcefiles to gcc
On 6/22/05, nico <[EMAIL PROTECTED]> wrote:
> Hi,
>
> if I want to add some source to the gcc, what do I have to do?

It depends on where you want to link the file. If it is going into a front end for a language other than C then it must go in gcc/

I am not very familiar with the global structure of gcc, but I am sure that someone can help you if you describe what will be in the file.

> Best regards, NM

Rafael
Re: add sourcefiles to gcc
> It should be part of the middle end. A kind of dump-option.

Much better :) Take a look at tree-dump.c:dump_function.

> Nico

Rafael
coding style: type* variable or type *variable
I have seen both in gcc. I have found that "type* variable" is preferred in C++ code, but I haven't found any guidelines for C code. Thanks, Rafael
Re: coding style: type* variable or type *variable
On 9/13/05, Mike Stump <[EMAIL PROTECTED]> wrote:
> If you ask gcc, you find:
>
> mrs $ grep 'int\* ' *.c | wc -l
> 4
> mrs $ grep 'int \*' *.c | wc -l
> 369
>
> pretty clear to me.

In treelang/parse.y all variables named "tok" (and some others) are declared with

struct prod_token_parm_item* tok;

Thanks, Rafael
Re: @true in makefiles
> The cases you have to look at are the ones with move-if-change. You
> have to make sure that if the file does not change, none of the
> dependencies are considered to have changed.
>
> For example in
>
> multilib.h: s-mlib; @true
> s-mlib: $(srcdir)/genmultilib Makefile
> 	...
> 	$(SHELL) $(srcdir)/../move-if-change tmp-mlib.h multilib.h
> 	$(STAMP) s-mlib
>
> we need the property that if multilib.h changes, everything that
> depends upon it gets rebuilt. If genmultilib or Makefile changes, we
> must try rebuilding multilib.h. However, if multilib.h does not then
> change, then things that depend upon multilib.h should not get rebuilt
> for that reason.
>
> The @true is necessary so that make checks whether the timestamp of
> multilib.h changed before deciding that it will rebuild things that
> depend upon multilib.h.
>
> Can you restate your plan based on that, showing examples which use
> move-if-change? Thanks.

Unfortunately no :( but I think that I can document that :) I will make a small example and add a comment to Makefile.in describing the use of stamps with mkconfig.sh and move-if-change.

One small improvement would be to create the dependencies on s-gtype automatically. At least the Make-lang.in files wouldn't need to know about the stamps. Maybe I can implement this.

> Ian

Thank you very much, Rafael
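The timestamp behavior Ian describes can be demonstrated in a few lines of shell. The `move-if-change` here is a simplified stand-in for GCC's real script, reduced to its core: the target file is only replaced when its contents actually change, so a no-op regeneration leaves its timestamp alone and nothing that depends on it gets rebuilt.

```shell
dir=$(mktemp -d)
cd "$dir"

# simplified stand-in for GCC's move-if-change script
cat > move-if-change <<'EOF'
#!/bin/sh
# install $1 as $2 only if the contents differ
if cmp -s "$1" "$2"; then rm -f "$1"; else mv -f "$1" "$2"; fi
EOF
chmod +x move-if-change

echo v1 > multilib.h
touch -t 200001010000 multilib.h   # pretend the target is from an old build

echo v1 > tmp-mlib.h               # regenerate: same contents
./move-if-change tmp-mlib.h multilib.h
if [ multilib.h -ot move-if-change ]; then
  echo "unchanged: timestamp preserved"
fi

echo v2 > tmp-mlib.h               # regenerate: contents differ
./move-if-change tmp-mlib.h multilib.h
if [ multilib.h -ot move-if-change ]; then :; else
  echo "changed: timestamp updated"
fi
cat multilib.h
```

In the first run make would see an unchanged multilib.h timestamp and skip everything downstream of `multilib.h: s-mlib; @true`; in the second run the timestamp moves forward and the rebuild cascade happens.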
[Treelang] flag_signed_char
Why does treelang define the signedness of char with flag_signed_char? IMHO it would be better if it had a fixed definition. I have tried using

build_common_tree_nodes (true, false);

It bootstrapped and tested cleanly (make check-treelang). Thanks, Rafael

2005-10-25  Rafael Ávila de Espíndola  <[EMAIL PROTECTED]>

	* gcc/treelang/treetree.c: char is always signed.

treelang-char-sign.patch
Description: Binary data
Re: Thoughts on LLVM and LTO
> The initial impression I get is that LLVM involves starting from scratch.
> I don't quite agree that this is necessary. One of the engineering
> challenges we need to tackle is the requirement of keeping a fully
> functional compiler *while* we improve its architecture.

I don't think that it involves starting from scratch. If we write an LLVM -> GIMPLE converter, the compilation process can look like

GENERIC -> GIMPLE -> LLVM -> GIMPLE -> RTL

In a first stage nothing will be done with the LLVM representation except convert it back to GIMPLE. This will make sure that all necessary information (including debug) can pass through LLVM. The conversion will also receive very good testing with this.

Later the optimizations can be moved one by one, and in a last stage the backend can also be replaced to work directly with LLVM. This has the advantage that only the last stage of the port is architecture dependent.

Rafael
Re: The actual LLVM integration patch
On 11/22/05, Chris Lattner <[EMAIL PROTECTED]> wrote:
> This is a patch vs the Apple branch as of a few weeks ago. The diff is in
> gcc.patch.txt, the new files are included in the tarball.

apple-local-200502-branch rev 104970, I think.

Rafael
Re: should _GNU_SOURCE be used in libiberty if glibc is present?
> I am currently bootstrapping the trunk with the patch applied.

Bootstrapped and tested...

> Thanks,
> Rafael
Re: Thoughts on LLVM and LTO
On 11/22/05, Scott Robert Ladd <[EMAIL PROTECTED]> wrote:
> I've been quietly watching the conversation, largely as an interested
> user as opposed to a GCC developer. One of my concerns lies with:

I have worked on some toy front ends, so I think that I am a kind of user also :)

> GENERIC -> GIMPLE -> LLVM -> GIMPLE -> RTL
>
> That design adds two phases (GIMPLE -> LLVM, LLVM -> GIMPLE) here --
> perhaps simple ones, perhaps not. The line is very straight, but adding
> two more segments makes me wonder if we're complicating the plumbing.
>
> How will this affect compiler speed?

It is hoped that optimizing in LLVM will be faster than optimizing in GIMPLE, so optimized builds are likely to be faster.

> How will debugging information flow accurately through the process?

I think that this is an open issue. The major technical one.

> And will we be making it even more difficult to isolate problems?

Not in the LLVM part. If the conversion is turned on all the time it will receive a lot of testing. And LLVM is simpler and has some nice tools to help in bug hunting.

> Already, we have people who understand frontends, and others who know
> GIMPLE intimately, and still others who focus on RTL generation. Is
> adding two additional passes going to further fragment expertise?

The algorithms are going to be more or less the same; the data structures are going to be different. There will be a need to learn a new API, but this is true for any proposal that involves changing the internal representation.

> I understand Rafael's comment, as quoted here:
>
> > In a first stage nothing will be done with the LLVM representation
> > except convert it back to GIMPLE. This will make sure that all
> > necessary information (including debug) can pass through LLVM. The
> > conversion will also receive very good testing with this.
> Does this mean that the "LLVM pass" will initially be invoked only via an
> option, and that a normal compile will continue the current path until
> LLVM is fully tested and accepted?

I was hoping that this would be done only to have a fast track for adding LLVM. After that, the current path would be ported as fast as possible.

Others have expressed concerns about needing a C++ compiler to bootstrap. It may be possible then to maintain an option of short-circuiting the GIMPLE -> LLVM -> GIMPLE conversion so that stage1 can be built with a C compiler.

> Just questions; if they are stupid, please be gentle. ;)
>
> --
> Scott Robert Ladd <[EMAIL PROTECTED]>
> Coyote Gulch Productions
> http://www.coyotegulch.com

Rafael
updated llvm patch to the apple branch
The new apple branch appears to work on gnu/linux/x86. An update of Chris' patch to the new version of the branch is available at gcc-llvm-apple-local-200502-branch-107672.patch.bz2. It compiles xgcc, but this in turn fails to compile crtbegin.o:

plus_expr 0xb7b89aa0 type sizes-gimplified public unsigned SI size unit size align 32 symtab 0 alias set -1 pointer_to_this > sizes-gimplified unsigned SI size unit size align 32 symtab 144732168 alias set -1> constant invariant arg 0 constant invariant arg 0 addressable asm_written used static asm-frame-size 0 SI file ../../apple-local-200502-branch/gcc/crtstuff.c line 195 size unit size user align 32 attributes initial LLVM: [1 x void ()*]* %__DTOR_LIST__>> arg 1 constant invariant 4>>
cc1: ../../apple-local-200502-branch/gcc/llvm-convert.cpp:1989: static llvm::Constant* TreeConstantToLLVM::Convert(tree_node*): Assertion `0 && "Unknown constant to convert!"' failed.

Rafael
LLVM patch X function lowering
I found that patch 99839:99840 introduces some function lowering and in doing so breaks the LLVM patch. A patch for trunk revision 99839 is at http://www.las.ic.unicamp.br/~espindola/gcc-llvm-trunk-99839.patch.bz2. Chris, are you working on this or should I give it a try? Rafael
Re: LTO and version scripts
> What version linker? In particular, do you have the fix for PR12975? It seems to work with gold and the LLVM plugin. I have added a test to make sure it stays that way: http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20140804/229493.html Cheers, Rafael
Re: Machine Dependance
> Hey Paolo, thanx a lot. I got the info I required.
> Can u mention any links that I can use as a reference to understand the
> dump output of -fdump-tree-original-raw as an AST?

Read "GENERIC and GIMPLE: A new tree representation for entire functions" at http://zenii.linux.org.uk/~ajh/gcc/gccsummit-2003-proceedings.pdf

> Thanx a lot.
>
> Darthrader

Rafael
the builtin_function hook
There is a lot of duplicated code in the hook implementations in the various front ends. I am trying to factor it a little bit. I started with the builtin_function hook. Doing mostly mechanical work I came up with the attached patch. The patch doesn't touch the C++ front end because it is organised a little differently and would require more work.

I don't want to apply this patch right now because I think that it can be made a little better. To do this I need some clarifications:

*) Why does a front end need to know when a new builtin is added?
*) If the internal representation used is language independent, can we change the signature of builtin_function to receive just the decl node?

The next thing I will try to do is wrap all calls to lang_hooks.builtin_function in an add_builtin_function (in which file should it be?) and move the common code into it.

Any comments?

Thanks, Rafael

builtin.patch
Description: Binary data
-fmudflap and -fmudflapth
The gcc documentation says:

Use `-fmudflapth' instead of `-fmudflap' to compile and to link if your program is multi-threaded.

but the mudflap gate is

static bool
gate_mudflap (void)
{
  return flag_mudflap != 0;
}

so one must use -fmudflapth in addition to -fmudflap. Right? Should I fix the documentation?

Thanks, Rafael
using libmudflap with non-instrumented shared libraries
Using libmudflap to test a program that uses libxml2, I found that if a program accesses a constant pointer in a non-instrumented library, mudflap thinks that a read violation has occurred. A simple test that illustrates this is:

a.c:
--
char *p = "abc";
--

b.c:
--
#include <stdio.h>

extern char *p;

int main()
{
  char a = p[0];
  printf("%c\n", a);
  return 0;
}
--

compile and link with

gcc -shared -fPIC a.c -o liba.so
gcc -fmudflap -lmudflap b.c -la -L. -o b

When b is run, mudflap prints:

*** mudflap violation 1 (check/read): time=1142875338.034838 ptr=0xb7e2a521 size=1
pc=0xb7e34317 location=`b.c:5 (main)'
/usr/lib/libmudflap.so.0(__mf_check+0x37) [0xb7e34317]
./b(main+0x7a) [0x80487f2]
/usr/lib/libmudflap.so.0(__wrap_main+0x176) [0xb7e34ed6]
number of nearby objects: 0

Given how mudflap works, it would be very hard to avoid this false positive. It would be nice if this limitation were documented.

Thanks, Rafael
Re: using libmudflap with non-instrumented shared libraries
> Did the compiler give you a warning about inability to track the
> lifetime of "p"? It should have.

No. Not even with -Wall -O2.

gcc -v: gcc (GCC) 4.0.2 20050808 (prerelease) (Ubuntu 4.0.1-4ubuntu9)

> - FChE

Thanks, Rafael
Re: Front end best practice in GCC 4.1.0?
On 3/29/06, Dustin Laurence <[EMAIL PROTECTED]> wrote:
> I'm fiddling around with a GCC 4 front-end tutorial that would be more
> detailed and hands-on than anything I've found so far on the web. It's
> a bit like the blind leading the blind, but it makes me learn better and
> while I'm learning it I don't mind writing it up, but after I learn it
> I've got better things to do.

Two friends and I have started to write a toy scheme front end. As a by-product, we have created a hello world front end and a small tutorial. You may find them at http://svn.gna.org/viewcvs/gsc/branches/hello-world/. Depending on what you want, maybe you can write some patches instead of a full tutorial :-)

> The question is, which front ends are regarded as being good exemplars
> of style in GCC 4 and which are burdened with legacy code that shouldn't
> be duplicated in a new project? I gather that at one time the obvious
> choice, treelang, wasn't all that pretty and the fortran front end was
> suggested as better, but that was somewhat dated news.

I think that fortran is a better option. Treelang has the parser mixed up with the rest of the front end.

> All suggestions welcome.
>
> Dustin

Best Regards, Rafael
Re: Front end best practice in GCC 4.1.0?
> You know, I've always wondered why there wasn't a lisp-family front end
> for GCC, the roots of GNU and RMS being where they are (and didn't RMS
> promise way back when to make lisp suitable for unix systems
> programming?). I'm just not connected enough to the lisp world to know
> the answer I guess.

I also don't know. We chose scheme because it was the only language we knew of whose specification was small enough to read in a semester and still have some time left to code :-)

> That's pretty much *exactly* what I had in mind--minimal front end that
> produces a useless, do-nothing, but "correct" front end. Even the
> motivation is the same, since as you might expect this was a preface to
> a language project. :-)

Nice. If you look, the documentation consists of a series of steps. In the first one a front end that does nothing is created. In the second step an empty main is generated. Unfortunately, even a hello world front end is a bit hard to explain :-(

> I'd be happy to cooperate as much as possible. The goal was to write
> the document I needed myself and didn't find, and in so doing learn the
> knowledge I'd have gotten from it. It wasn't to write for writing's
> sake. If you've done everything I wanted, so much the better. :-)
>
> That said, I haven't actually looked at your docs in detail. It took me
> a while to install svn, figure out how to do a checkout and what URL to
> use, find the system docbook stylesheets, and put their path in the
> makefile (not too bad for someone who has never seen docbook or used XML
> before, I thought), and learn enough about docbook to find an
> alternative to fop (I am not eager to install it because I'm trying to
> keep this machine not-too-dependent on a jvm. :-) )

I use the html output mostly. Fop generates a very bad pdf IMHO, but I was able to run it (without support for images) with kaffe.

> A quick look suggests that what you wrote is very like what I wanted to
> end up with. The main difference is that I suspect I was going for a
> bit more exhaustive detail--whether that's a good or bad thing is a
> separate issue.

Having more detail is a good thing, I think, but remember that the details are likely to be the first thing to become out of date.

> Were you thinking of hosting the html version somewhere eventually?

We wrote it in docbook to be able to eventually send it to TLDP. Currently the two co-authors have requested the copyright paperwork to submit the front end itself to the gcc trunk. After that we might try to update the tutorial a little bit and send it to TLDP.

> Thanks, that's the sort of comments I was hoping for.
>
> Dustin

My pleasure. Once we discovered how complicated the front end interface was, we were sure that someone else could benefit from our experience :-)

Rafael
[patch] duplicate convert_and_check in the c++ front end
The attached patch duplicates convert_and_check into the C++ front end. I would like to include it to be able to remove the convert callback later on. Duplicating convert_and_check makes it easy to use cxx_convert in the C++ front end and c_convert in the C front end.

Thanks, Rafael

2006-04-03  Rafael Ávila de Espíndola  <[EMAIL PROTECTED]>

	* gcc/cp/call.c (constant_fits_type_p): New function.
	(cxx_convert_and_check): New function.
	(convert_like_real): Use cxx_convert_and_check instead of
	convert_and_check.

Index: gcc/cp/call.c
===================================================================
--- gcc/cp/call.c	(revision 112530)
+++ gcc/cp/call.c	(working copy)
@@ -4141,7 +4141,57 @@
   return expr;
 }
 
+/* Nonzero if constant C has a value that is permissible
+   for type TYPE (an INTEGER_TYPE).  */
+
+static int
+constant_fits_type_p (tree c, tree type)
+{
+  if (TREE_CODE (c) == INTEGER_CST)
+    return int_fits_type_p (c, type);
+
+  c = convert (type, c);
+  return !TREE_OVERFLOW (c);
+}
+
+/* Convert EXPR to TYPE, warning about conversion problems with constants.
+   Invoke this function on every expression that is converted implicitly,
+   i.e. because of language rules and not because of an explicit cast.  */
+
+static tree
+cxx_convert_and_check (tree type, tree expr)
+{
+  tree t = convert (type, expr);
+  if (TREE_CODE (t) == INTEGER_CST)
+    {
+      if (TREE_OVERFLOW (t))
+	{
+	  TREE_OVERFLOW (t) = 0;
+
+	  /* Do not diagnose overflow in a constant expression merely
+	     because a conversion overflowed.  */
+	  TREE_CONSTANT_OVERFLOW (t) = TREE_CONSTANT_OVERFLOW (expr);
+
+	  /* No warning for converting 0x8000 to int.  */
+	  if (!(TYPE_UNSIGNED (type) < TYPE_UNSIGNED (TREE_TYPE (expr))
+		&& TREE_CODE (TREE_TYPE (expr)) == INTEGER_TYPE
+		&& TYPE_PRECISION (type) == TYPE_PRECISION (TREE_TYPE (expr)))
+	      /* If EXPR fits in the unsigned version of TYPE,
+		 don't warn unless pedantic.  */
+	      && ((pedantic
+		   || TYPE_UNSIGNED (type)
+		   || !constant_fits_type_p (expr,
+					     c_common_unsigned_type (type)))
+		  && skip_evaluation == 0))
+	    warning (0, "overflow in implicit constant conversion");
+	}
+      else
+	unsigned_conversion_warning (t, expr);
+    }
+  return t;
+}
+
 /* Perform the conversions in CONVS on the expression EXPR.  FN and
    ARGNUM are used for diagnostics.  ARGNUM is zero based, -1
    indicates the `this' argument of a method.  INNER is nonzero when
@@ -4413,7 +4463,7 @@
     }
 
   if (issue_conversion_warnings)
-    expr = convert_and_check (totype, expr);
+    expr = cxx_convert_and_check (totype, expr);
   else
     expr = convert (totype, expr);
[PATCH] Install drivers from gcc/Makefile.in
The attached patch moves the basic installation of the compiler drivers from gcc/*/Make-lang.in to gcc/Makefile.in. Each Make-lang.in only has to state the driver's name. Additional setup, like setting up the c++ -> g++ links, remains in the Make-lang.in.

Ok for trunk when stage1 starts?

:ADDPATCH build:

Best Regards, Rafael

2006-07-13  Rafael Ávila de Espíndola  <[EMAIL PROTECTED]>

	* gcc/java/Make-lang.in (DRIVERS): New.
	(java.install-common): Don't install the driver.
	* gcc/cp/Make-lang.in (DRIVERS): New.
	(c++.install-common): Don't install the driver.
	* gcc/fortran/Make-lang.in (DRIVERS): New.
	(fortran.install-common): Don't install the driver.
	* gcc/treelang/Make-lang.in (DRIVERS): New.
	(treelang.install.common.done): Don't install the driver.
	* gcc/Makefile.in (DRIVERS): New.
	(LANG_INSTALL_COMMONS): New.
	(install-drivers): New.

Index: gcc/java/Make-lang.in
===================================================================
--- gcc/java/Make-lang.in	(revision 115296)
+++ gcc/java/Make-lang.in	(working copy)
@@ -40,6 +40,8 @@
 # - the compiler proper (eg: jc1)
 # - define the names for selecting the language in LANGUAGES.
 
+DRIVERS += gcj
+
 # Actual names to use when installing a native compiler.
 JAVA_INSTALL_NAME := $(shell echo gcj|sed '$(program_transform_name)')
 JAVA_TARGET_INSTALL_NAME := $(target_noncanonical)-$(shell echo gcj|sed '$(program_transform_name)')
@@ -199,19 +201,7 @@
 check-java :
 
 # portable makefiles for both cross builds (where gcjh *must*
 # be explicitly prefixed) and native builds.
 java.install-common: installdirs
-	-if [ -f $(GCJ)$(exeext) ]; then \
-	  rm -f $(DESTDIR)$(bindir)/$(JAVA_INSTALL_NAME)$(exeext); \
-	  $(INSTALL_PROGRAM) $(GCJ)$(exeext) $(DESTDIR)$(bindir)/$(JAVA_INSTALL_NAME)$(exeext); \
-	  chmod a+x $(DESTDIR)$(bindir)/$(JAVA_INSTALL_NAME)$(exeext); \
-	  if [ -f $(GCJ)-cross$(exeext) ]; then \
-	    true; \
-	  else \
-	    rm -f $(DESTDIR)$(bindir)/$(JAVA_TARGET_INSTALL_NAME)$(exeext); \
-	    ( cd $(DESTDIR)$(bindir) && \
-	      $(LN) $(JAVA_INSTALL_NAME)$(exeext) $(JAVA_TARGET_INSTALL_NAME)$(exeext) ); \
-	  fi ; \
-	fi ; \
-	for tool in $(JAVA_TARGET_INDEPENDENT_BIN_TOOLS); do \
+	for tool in $(JAVA_TARGET_INDEPENDENT_BIN_TOOLS); do \
 	  tool_transformed_name=`echo $$tool|sed '$(program_transform_name)'`; \
 	  if [ -f $$tool$(exeext) ]; then \
 	    rm -f $(DESTDIR)$(bindir)/$$tool_transformed_name$(exeext); \

Index: gcc/cp/Make-lang.in
===================================================================
--- gcc/cp/Make-lang.in	(revision 115296)
+++ gcc/cp/Make-lang.in	(working copy)
@@ -37,6 +37,8 @@
 # - the compiler proper (eg: cc1plus)
 # - define the names for selecting the language in LANGUAGES.
 
+DRIVERS += g++
+
 # Actual names to use when installing a native compiler.
 CXX_INSTALL_NAME := $(shell echo c++|sed '$(program_transform_name)')
 GXX_INSTALL_NAME := $(shell echo g++|sed '$(program_transform_name)')
@@ -146,25 +148,17 @@
 lang_checks += check-g++
 
 # Install the driver program as $(target)-g++
 # and also as either g++ (if native) or $(tooldir)/bin/g++.
 c++.install-common: installdirs
-	-rm -f $(DESTDIR)$(bindir)/$(GXX_INSTALL_NAME)$(exeext)
-	-$(INSTALL_PROGRAM) g++$(exeext) $(DESTDIR)$(bindir)/$(GXX_INSTALL_NAME)$(exeext)
-	-chmod a+x $(DESTDIR)$(bindir)/$(GXX_INSTALL_NAME)$(exeext)
 	-rm -f $(DESTDIR)$(bindir)/$(CXX_INSTALL_NAME)$(exeext)
 	-( cd $(DESTDIR)$(bindir) && \
 	   $(LN) $(GXX_INSTALL_NAME)$(exeext) $(CXX_INSTALL_NAME)$(exeext) )
 	-if [ -f cc1plus$(exeext) ] ; then \
 	  if [ -f g++-cross$(exeext) ] ; then \
 	    if [ -d $(DESTDIR)$(gcc_tooldir)/bin/. ] ; then \
-	      rm -f $(DESTDIR)$(gcc_tooldir)/bin/g++$(exeext); \
-	      $(INSTALL_PROGRAM) g++-cross$(exeext) $(DESTDIR)$(gcc_tooldir)/bin/g++$(exeext); \
 	      rm -f $(DESTDIR)$(gcc_tooldir)/bin/c++$(exeext); \
 	      ( cd $(DESTDIR)$(gcc_tooldir)/bin && \
 		$(LN) g++$(exeext) c++$(exeext) ); \
 	    else true; fi; \
 	  else \
-	    rm -f $(DESTDIR)$(bindir)/$(GXX_TARGET_INSTALL_NAME)$(exeext); \
-	    ( cd $(DESTDIR)$(bindir) && \
-	      $(LN) $(GXX_INSTALL_NAME)$(exeext) $(GXX_TARGET_INSTALL_NAME)$(exeext) ); \
 	    rm -f $(DESTDIR)$(bindir)/$(CXX_TARGET_INSTALL_NAME)$(exeext); \
 	    ( cd $(DESTDIR)$(bindir) && \
 	      $(LN) $(CXX_INSTALL_NAME)$(exeext) $(CXX_TARGET_INSTALL_NAME)$(exeext) ); \

Index: gcc/fortran/Make-lang.in
===================================================================
--- gcc/fortran/Make-lang.in	(revision 115296)
+++ gcc/fortran/Make-lang.in	(working copy)
@@ -40,6 +40,8 @@
 # - define the names for selecting the language in LANGUAGES.
 # $(srcdir) must be set to the gcc/ source directory (*not* gcc/fortran/).
 
+DRIVERS += gfortran
+
 # Actual name to use when installing a native compiler.
 GFORTRAN_INSTALL_NAME := $(shell echo gfortran|sed '$(program_transform_name)')
 GFORTRAN_TARGET_INSTALL_NAME := $(target_noncanonical)-$(shell echo gfortran
[lto] factor code common to all builtin_function
I have a patch that factors code common to all builtin_function implementations. It is approved for trunk when we get to stage1. Are the developers involved in the lto branch interested in this patch? If so, I can port it. Best Regards, Rafael
Re: LTO and Code Compaction \ Reverse Inlining \ Procedure Abstraction?
> Some people call this "uninlining". I've also heard the term "procedural
> abstraction". The generalization is to identify common code fragments
> that can be turned into functions. Then, replace the users of the common
> code with function calls.

Is this the same as Code Factoring? http://gcc.gnu.org/projects/cfo.html

Best Regards, Rafael
Re: [PATCH] Install drivers from gcc/Makefile.in
> What about Ada? Will things still work after your change? It would seem
> cleaner (if not mandatory) to take all languages into account in your
> change.

Only now do I realize that I answered in private :-(

The install-common target works as before, so the Ada front end is not affected. The proposed patch factors code that installs programs that have both a short name (g++) and a name with an architecture prefix (i686-pc-linux-gnu-g++). I couldn't find any such program in the Ada front end, so it is not affected by this patch.

> Thanks in advance.

Best Regards, Rafael
why the difference of two global pointers is not a constant?
I am trying to build a table with offsets of global pointers from a given pointer:

void *fs[] = {f1 - f1, f2 - f1};

where f1 and f2 are functions. GCC is able to figure out that (f1 - f1) is 0, but says "initializer element is not constant" when trying to compute (f2 - f1). It is possible to work around the problem by declaring fs with an inline asm:

asm("fs:\n"
    ".quad f1 - f1\n"
    ".quad f2 - f1\n");

Is there a way to do it in C? It works in Visual Studio. Sorry if this isn't the appropriate mailing list, but it looks like this is an intentional design decision.

Thanks, Rafael
Re: why the difference of two global pointers is not a constant?
> because that is what the language standard says. In general, the
> difference between two global pointers is something known only to the
> linker -- too late to evaluate as a constant expression.

In the particular case of two static functions or two static global pointers, it is possible for the compiler to compute it, isn't it? I think that the linker will reorder the sections, but not the functions inside a section.

The reason why I want to do this is to avoid relocations when a dso is loaded. Using the difference of two pointers I could build a constant table and add one of the pointers back at runtime.

Do you think that a GCC extension to allow the computation of the difference of two pointers in the same section is a good thing? Do you think that it is hard to implement?

This extension would probably be broken by the LTO work, since then only the linker would know the sizes and addresses of the functions. Maybe the best solution is to add two GCC builtins. Something like

intptr_t __builtin_obj_offset(intptr_t)
intptr_t __builtin_obj_addr(intptr_t)

The first one would (at compile time) return the offset of a given address from an arbitrary point in the object file. This would need to be supported in the assembler and linker. The second one would (at run time) add the arbitrary point back to produce once more a valid pointer. This extension could also be implemented with LTO.

> -- Gaby

Thanks for the comments, Rafael
Re: why the difference of two global pointers is not a constant?
> This isn't possible with global symbols in a DSO because some other DSO
> (or indeed the exe) might also define one of the symbols.

Not with hidden symbols. Sorry, I forgot to mention that.

> Andrew.

Rafael
Re: why the difference of two global pointers is not a constant?
> Use prelinking. That works for all relocation types, and doesn't
> require additional coding.

It helps, but I still have 10639 relocations during an "import gtk" in python. The relative table also has the advantage that it is constant and can be shared.

> Ian

Rafael
Re: algol 60 for gcc
On 8/8/06, Petr Machata <[EMAIL PROTECTED]> wrote:
> Hi list!
>
> I picked a diploma thesis assignment to implement a gcc frontend, and
> document the process thoroughly. I chose Algol 60 as the language to
> implement. There has already been one attempt to do an Algol 60
> frontend, but it probably died:
> http://gcc.gnu.org/ml/gcc/2002-05/msg00432.html
>
> From my investigation so far, it seems that there is not that much to
> document, apart from "read this bunch of links and you're basically
> done; the rest changes so rapidly it doesn't make sense to document".
> But the assignment is in place, so I will do my best to create something
> worth reading. In particular, I was looking at GCC documentation
> assignments: http://gcc.gnu.org/projects/documentation.html and I think
> my work could be useful for "fully document the interface of front ends
> to GCC".

For the very first steps you may find the GCC hello world howto useful (http://svn.gna.org/viewcvs/gsc/branches/hello-world/).

> I'm trying to make the university GPL the code and documentation, and
> give up their copyright, so that it could be used without restriction,
> but won't know the outcome until later this year.

I believe that the code must be GPL.

> gcc-algol project site is here, for those interested:
> http://projects.almad.net/gcc-algol
>
> Thanks, PM

Thanks for choosing GCC :-)

Rafael
Re: TTCN-3 frontend
On 8/21/06, Cosmin Rentea <[EMAIL PROTECTED]> wrote: Hi, I would like to ask whose approval is needed to start developing a GCC front-end for the abstract test language TTCN-3. You don't need approval for that. See the COPYING file distributed with GCC for the details. More information about the language can be found here: http://www.ttcn-3.org/ Thanks, Cosmin Best Regards, Rafael
Re: Rebuild C code from GCC intermediate format
I wish to know if there exists any plugin that translates this intermediate format into C sources. I intend to modify these intermediate formats and retranslate them into C source to analyse the transformations. For reading a C-like representation you can use -fdump-tree-*. It might have the information that you need. For dumping real C you can use LLVM, but you will be able to dump only pre-optimized Gimple or the result of using LLVM optimizations on top of that. Rafael
Re: Fwd: LLVM collaboration?
On 11 February 2014 12:28, Renato Golin wrote: > Now copying Rafael, who can give us some more insight on the LLVM LTO side. Thanks. > On 11 February 2014 09:55, Renato Golin wrote: >> Hi Jan, >> >> I think this is a very good example where we could all collaborate >> (including binutils). It is. Both LTO models (LLVM and GCC) were considered from the start of the API design and I think we got a better plugin model as a result. >> If I got it right, LTO today: >> >> - needs the drivers to explicitly declare the plugin >> - needs the library available somewhere True. >> - may have to change the library loading semantics (via LD_PRELOAD) That depends on the library being loaded. RPATH works just fine too. >> Since both toolchains do the magic, binutils has no incentive to >> create any automatic detection of objects. It is mostly a historical decision. At the time the design was for the plugin to be matched to the compiler, and so the compiler could pass that information down to the linker. > The trouble however is that one needs to pass an explicit --plugin argument > specifying the particular plugin to load and so GCC ships with its own > wrappers > (gcc-nm/gcc-ld/gcc-ar and the gcc driver itself) while LLVM does a similar > thing. These wrappers should not be necessary. While the linker currently requires a command line option, bfd has support for searching for a plugin. It will search /lib/bfd-plugin. See for example the instructions at http://llvm.org/docs/GoldPlugin.html. This was done because ar and nm are not normally bound to any compiler. Had we realized this issue earlier we would probably have supported searching for plugins in the linker too. So it seems that what you want could be done by * having bfd-ld and gold search bfd-plugins (maybe rename the directory?) * support loading multiple plugins, and asking each to see if it supports a given file. That way we could LTO builds that are part GCC and part LLVM. 
* maybe be smart about versions and load new ones first? (libLLVM-3.4 before libLLVM-3.3, for example). Probably the first one should always be the one given on the command line. For OS X the situation is a bit different. There, instead of a plugin, the linker loads a library: libLTO.dylib. When doing LTO with a newer llvm, one needs to set DYLD_LIBRARY_PATH. I think I proposed setting that from clang some time ago, but I don't remember the outcome. In theory GCC could implement a libLTO.dylib and set DYLD_LIBRARY_PATH. The gold/bfd plugin that LLVM uses is basically an API mapping the other way, so the job would be inverting it. The LTO model of ld64 is a bit more strict about knowing all symbol definitions and uses (including inline asm), so there would be work to be done to cover that, but the simple cases shouldn't be too hard. Cheers, Rafael
Re: Fwd: LLVM collaboration?
> My reading of bfd/plugin.c is that it basically walks the directory and looks > for the first plugin that returns OK for onload. (That is always the case for > GCC/LLVM plugins.) So if I install GCC and the llvm plugin there it will > depend on who ends up being first, and only that plugin will be used. > > We need multiple plugin support as suggested by the directory name ;) > > Also it seems that currently the plugin is not used if the file is ELF for ar/nm/ranlib > (as mentioned by Markus) and also GNU-ld seems to choke on LLVM object files > even if it has the plugin. > > This probably needs to be sanitized. CCing Hal Finkel. He got this to work some time ago. Not sure if he ever ported the patches to bfd trunk. >> For OS X the situation is a bit different. There instead of a plugin >> the linker loads a library: libLTO.dylib. When doing LTO with a newer >> llvm, one needs to set DYLD_LIBRARY_PATH. I think I proposed setting >> that from clang some time ago, but I don't remember the outcome. >> >> In theory GCC could implement a libLTO.dylib and set >> DYLD_LIBRARY_PATH. The gold/bfd plugin that LLVM uses is basically an >> API mapping the other way, so the job would be inverting it. The LTO >> model of ld64 is a bit more strict about knowing all symbol definitions >> and uses (including inline asm), so there would be work to be done to >> cover that, but the simple cases shouldn't be too hard. > I would not care that much about symbols in asm definitions to start with. > Even if we will force users to non-LTO those object files, it would be an > improvement over what we have now. > > One problem is that we need a volunteer to implement the reverse glue > (libLTO->plugin API), since I do not have an OS X box (well, I have an old G5, > but even that is quite far from me right now) > > Why are complete symbol tables required? Can't ld64 be changed to ignore > unresolved symbols in the first stage just like gold/gnu-ld does? I am not sure about this. 
My *guess* is that it does dead stripping computation before asking libLTO for the object file. I noticed the issue while trying to LTO firefox some time ago. Cheers, Rafael
Re: Fwd: LLVM collaboration?
> What about instead of our current odd way of identifying LTO objects > simply add a special ELF note telling the linker the plugin to use? > > .note._linker_plugin '/./libltoplugin.so' > > that way the linker should try 1) loading that plugin, 2) register the > specific object with that plugin. > > If a full path is undesired (depends on install setup) then specifying > the plugin SONAME might also work (we'd of course need to bump > our plugins SONAME for each release to allow parallel install > of multiple versions or make the plugin contain all the > dispatch-to-different-GCC-version-lto-wrapper code). Might be an interesting addition to what we have, but keep in mind that LLVM uses thin non-ELF files. It is also able to load IR from previous versions, so for LLVM at least, using the newest plugin is probably the best default. > Richard. Cheers, Rafael
Re: [patch][rfc] How to handle static constructors on linux
ccing the gcc list and Cary Coutant. The issue comes from gcc pr46770. Cary, have you tried implementing the --reverse-init-array option? Does it solve the problems you were seeing? Can libstdc++ be fixed to work with the iostream static constructors being in .ctors or .init_array? Would you be interested in such a fix? From llvm's point of view the main concern is whether this ABI change should be considered a hiccup to be worked around in systems like fedora 17 or something more permanent. If I understand it correctly, gcc's configure checks if the target supports .init_array and uses it unconditionally if it does, is that correct? On 17 June 2012 23:15, Rafael Espíndola wrote: > I recently upgraded to fedora 17 and now found out that libstdc++ 4.7 > requires c++ constructors to be handled with .init_array instead of > .ctors. I have not produced a self-contained testcase, but when > building the google protocol buffer compiler (part of chromium) if the > static constructor for > > > namespace std __attribute__ ((__visibility__ ("default"))) > { > ... > static ios_base::Init __ioinit; > > } > . > > ends up in .ctors, the compiler crashes during startup. Patching the > clang-produced assembly to use .init_array fixes the problem. > > This is unfortunate, as the semantics of both are not exactly the > same, since the runtime processes them in reverse order. My first > intention was to ask for approval for the attached patch, but I was > told by Chandler that this ABI change caused enough problems for them > that they reverted it internally. It looks like there are now two ABIs > and we will have to support both. > > On the clang side, we can reuse the logic for detecting the gcc > installation to know which ABI to use, but it is not clear what the > best way to forward that to llvm is. Traditionally this is handled in > the triple, the eabi suffix for example, but fedora 16 and 17 share > the same triple (x86_64-redhat-linux) and have different ABIs. 
This is > also a problem for llc: It doesn't know anything about which gcc > installation is being used, if any, and the IL represents static > constructors at a higher level and it is codegen's job to select a > section. > > The best I could think of so far is > > * Add a command line option to codegen for using .init_array (defaults > to false). > * Have clang produce that option when it detects a gcc 4.7 based distro. > > This would mean that without any options llc would produce the wrong > result on fedora 17 (and other gcc 4.7 distros?), but hopefully cases > that depend on cross TU construction order are not common enough for > this to be a big problem for users. Clang would still produce the > correct results. > > Diego, do you know if it is the intention that the new ABI will be > maintained? If so, do you think all users will be able to upgrade to > it or will we (gcc and clang) have to keep both for the foreseeable > future? > > Cheers, > Rafael
Re: [patch][rfc] How to handle static constructors on linux
> The GNU linker has support to merge .ctor's into init_array. Does the > gold linker have the same feature? This seems more like the real fix > rather than just hacking around the issue. Recent versions have it. I found the bug when using gold 2.21, which doesn't. What seems to happen is: * In an old Linux system, the linker leaves ctors in .ctors, crtbeginS.o has a call to __do_global_dtors_aux. The net result is that _init calls the constructors. * In an all-new Linux system, the compiler uses .init_array (or the linker moves it there) and crtbeginS.o has nothing to do with constructors. * If we have a compiler that doesn't use .init_array and gold 2.21 in a new Linux system, we hit the bug. So this is not as bad as I was expecting (old programs still work), but it is still a somewhat annoying ABI change to handle. I think we can add support for this in clang in 3 ways: 1) Require new linkers when using gcc 4.7 libraries. 2) Ship our own versions of crtbeginS.o (and similar files). 3) Use .init_array when using gcc 4.7 libraries. I have most of option 3 implemented. Chandler, do you still think that this is a big enough ABI breakage that it should not be supported? > Thanks, > Andrew Pinski Cheers, Rafael
Re: [patch][rfc] How to handle static constructors on linux
> This has a long and complicated history. I tried to explain some of that here: > > http://gcc.gnu.org/ml/gcc-bugs/2010-12/msg01493.html > > I wasn't part of the GCC community at the time, but I think that > .ctors was originally used instead of .init or .init_array precisely > because the order of execution of .init/.init_array was backwards from > the desired order of execution for constructors (leaving aside the > fact that it was backwards from the desired order of execution for > *any* kind of initializer). Now that GCC has finally moved from .init > to .init_array, they're simply trying to consolidate on the One True > Initializer Mechanism. In doing so, it would be desirable to correct > that mistake we made so long ago in the gABI, but that's where we ran > up against the concerns of the Chrome and Firefox developers who care > more about startup performance than about constructor ordering (but, > apparently, not enough to use linker options to reorder the code in > order to get both good performance *and* proper execution order). Just a bit of context: I tried to build chrome in order to test a completely unrelated change (it is an awesome compiler test case) and got a protocol compiler crash, which is where all this started. I don't work with chrome or know why they are using gold 2.21. I do work on firefox and we are using centos 5's linker and gcc 4.5 :-( But I am still missing something: why is the performance so different? Is it code layout putting the constructors' bodies in the reverse of the order in which they are called? > -cary Cheers, Rafael
GCC and Clang produce undefined references to functions with vague linkage
Note1: I am not subscribed to the gcc list, please use reply-all. Note2: I think the clang list is moderated for the first post, but it is usually really fast. Sorry about that. I recently implemented an optimization in LLVM to hide symbols that we "know" are available in every DSO that uses them. This is very similar to a more generic optimization that is possible during LTO when using recent versions of the gold plugin. Unfortunately, this found a bug in both gcc and clang (or in the Itanium ABI, it is not very clear). The testcase is $ cat test.h struct foo { virtual ~foo(); }; struct bar : public foo { virtual void zed(); }; $ cat def.cpp #include "test.h" void bar::zed() { } $ cat undef.cpp #include "test.h" void f() { foo *x(new bar); delete x; } Both gcc (>= 4.6) and clang will produce an undefined reference to _ZN3barD0Ev when compiling undef.cpp with optimizations: $ ~/gcc/build/gcc/xgcc -B ~/gcc/build/gcc/ -c undef.cpp -o undef.o -O3 -fPIC $ nm undef.o | grep D0 U _ZN3barD0Ev And, when using LTO, a shared library built from def.cpp has a visible vtable, but the destructor itself is not visible: $ ~/gcc/build/gcc/xgcc -B ~/gcc/build/gcc/ -c def.cpp -o def.o -O3 -flto -fPIC $ ~/gcc/build/gcc/xgcc -B ~/gcc/build/gcc/ def.o -o def.so -shared -fuse-linker-plugin $ readelf -sDW def.so | grep bar 12 7: 0931 5 OBJECT WEAK DEFAULT 13 _ZTS3bar 11 10: 1ca040 OBJECT WEAK DEFAULT 24 _ZTV3bar 13 14: 1cd024 OBJECT WEAK DEFAULT 24 _ZTI3bar 8 15: 08b010 FUNC GLOBAL DEFAULT 11 _ZN3bar3zedEv And the final result is that we get an undefined reference when linking: $ g++ -shared -o foo.so undef.o def.so -Wl,-z,defs undef.o:undef.cpp:function f(): error: undefined reference to 'bar::~bar()' I can see two ways of solving this and would like both clang and gcc to implement the same one: * Make sure the destructor is emitted everywhere. That is, clang and gcc have a bug in producing an undefined reference to _ZN3barD0Ev. 
* Make it clear that the file exporting the vtable has to export the symbols used in it. That is, the Itanium C++ ABI needs clarification and so does gcc's lto plugin (and the llvm patch I am working on). In an earlier discussion on the llvm list, it was pointed out that * The second solution would probably be the most backward-compatible one, as gcc (even without lto) has been producing the undefined symbol since 4.6. * If implementing the second solution, the destructor still has to be a weak symbol, on OS X at least. * The first solution has the advantage that it is semantically simpler, as it avoids a special case about when a function can be assumed to be available externally. * The first solution has the advantage that more symbols can be marked hidden, as the gcc lto plugin does currently. Cheers, Rafael
Re: GCC and Clang produce undefined references to functions with vague linkage
Richi asked me to also report a gcc bug: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=53808 > But that's pervasively true in C++ — the linker has to eliminate duplicates > all the time. Idiomatic C++ code ends up plunking down hundreds, if > not thousands, of inline functions in every single translation unit. This is > already a massive burden for linking C++ programs, particularly in debug > builds. Adding a few extra symbols when the optimizer determines that > it can devirtualize, but declines to inline, is essentially irrelevant. > > In fact, it is particularly unimportant because it's very likely that this > duplicate > will be in the same DSO as the vtable. That means that the first solution > imposes some extra work on the static linker alone (but again, only when > devirtualizing a call to a function we don't want to inline!) while preserving > our ability to reduce work for the dynamic linker (since calls do not rely > on address equality of the function across translation units). The second > solution is an irrevocable guarantee that every symbol mentioned in > a strong vtable *must* be exported to the whole world. > > Also recall that these symbols can already be emitted in arbitrary > other translation units — we cannot change the ABI to say that these > symbols are *only* emitted in the file defining the v-table. I would just like to point out that the ABI could say that they only *need* to be emitted in the file with the vtable, but yes, for a long time it would have to support the symbols showing up in other translation units produced by older compilers. > Finally, both the language standard and the ABI are clearly designed > around an assumption that every translation unit that needs an inline > function will emit it. > > John. Cheers, Rafael
Re: GCC and Clang produce undefined references to functions with vague linkage
> There's no "for a long time" here. The ABI does not allow us to emit these > symbols with non-coalescing linkage. We're not going to break ABI > just because people didn't consider a particular code pattern when they > hacked in devirtualization through external v-tables. If we take "the ABI" to mean compatibility with released versions of the compilers, then it *is* broken, as released compilers assume behavior that is not guaranteed by the ABI (the document). It is not possible to avoid all incompatibilities. To avoid most we would have to * Say that the symbol must be exported by the file that exports the vtable. Not doing so causes incompatibilities with files compiled by gcc versions 4.6 and 4.7, open-source clang versions 2.9, 3.0, 3.1 and Apple's releases 318.0.58 and 421-10.48 (those are the ones I have on my laptop). * Say that the symbol must remain mergeable so that current files still work, which on OS X requires it to be weak. In other words, we have to take the worst of both options. The only thing that would still be broken (in released versions) are libraries built with gcc's LTO, but hopefully changing that is the option with the least impact on the users. > John. Cheers, Rafael
Re: GCC and Clang produce undefined references to functions with vague linkage
> not well-formed C++, for it violates the one-definition rule in that it > *lacks* a definition for the virtual member function foo::~foo(). Does > it make any difference if you add a definition? Unfortunately no. Replacing the declaration with an inline definition produces a copy of it in undef.o, but we still get an undefined reference to ~bar: nm undef.o | grep D0 U _ZN3barD0Ev W _ZN3fooD0Ev Cheers, Rafael
Re: GCC and Clang produce undefined references to functions with vague linkage
> Yes, this indeed looks like (most probably my) bug in the constant folding > code that now uses extern vtables. I will fix it. So we can not take > comdat linkage decl from external vtable when we no longer have its body > around, right? Sounds like the fix John was describing, yes. You can produce a copy of the comdat linkage decl (the destructor in this case) or avoid using the extern vtable and just produce an undefined reference to the vtable itself. > Honza Thanks! Rafael