Re: building gcc
Bob Rossi <[EMAIL PROTECTED]> writes: > Hopefully I'll be able to debug gcc nicely after this is built. Two > more questions that could save me a lot of time. Do you know where the > abstract syntax tree is stored in GCC after a file is parsed? I'm not sure what kind of answer you are looking for. One place to look is in the cgraph code. > Does GCC > still create an AST for C/C++, or does it go directly to GIMPLE? gcc generates an AST, of sorts, for C++. For C it goes directly to GIMPLE (really GENERIC, but they are pretty similar). Ian
Re: Implicit conversions between vectors
Mark Shinwell <[EMAIL PROTECTED]> writes: > Here, the compiler is not complaining about an attempt to implicitly > convert a vector of four 16-bit quantities to a vector of eight > 8-bit quantities. > > This lack of type safety is unsettling, and I wonder if it should be fixed > with a patch along the lines of the (not yet fully tested) one below. Does > that sound reasonable? It seems right to try to fix the generic code here, > even though the testcase in hand is target-specific. If this approach > is unreasonable, I guess some target-specific hooks will be needed. I personally think it is a good idea to require an explicit cast. This comes up with C++ overloaded functions as well. In the backend I wrote for a vector processor, I added a command line option. Paolo Bonzini <[EMAIL PROTECTED]> writes: > I'm almost sure this is a regression, and older versions of GCC > implemented exactly what you propose. It's been this way at least since 4.0. I believe that the problem with changing this unconditionally is that the Altivec programming guidelines specify the rules which gcc currently follows: you are permitted to assign one vector variable to another, without an explicit cast, when the vectors are the same size. So please check the Altivec programming manual. Ian
Re: Tree node questions
Brendon Costa <[EMAIL PROTECTED]> writes: > If I have a FUNCTION_DECL node that returns non-null for > DECL_TEMPLATE_INFO() then it is a template or template instantiation. > How can I tell if it is a full instantiation (I.e. not general > template or partial specialisation)? DECL_TEMPLATE_INSTANTIATION, I believe. > Does this also apply to nodes that represent template types > (structs/classes)? For them I believe you want CLASSTYPE_TEMPLATE_INSTANTIATION. > Also up to now I have been using my own "tree walker" as I seem to be > unable to get walk_tree to work as I need. I have come across a few > problems in my own tree walker and need to change it, but before I do > I thought I would see if there already exists something that works > similarly. > > Basically I would like to know the current "tree depth" and context > for the parent nodes in the tree as it is walked down. > > Is there any way of getting the current depth and node context using > walk_tree or a similar function? I don't know of any straightforward way, no. > One final question. Are there any functions and if so what types that > are skipped by gimplify_function_tree defined in gimplify.c? Every function that is going to be expanded into assembly code should go through gimplify_function_tree. gcc won't ordinarily bother to expand static functions which are not used. Ian
Re: Additional tree node questions.
Brendon Costa <[EMAIL PROTECTED]> writes: > For each FUNCTION_DECL node I find, I want to determine what its > exception specification list is. I.e. the throws() statement in its > prototype. Look at TYPE_RAISES_EXCEPTIONS (FNDECL). Ian
Re: Abt SIMD Emulation
Mohamed Shafi <[EMAIL PROTECTED]> writes: This question is more appropriate for the gcc-help mailing list than for the gcc mailing list. > For targets which don't have SIMD hardware support, like fr30, is SIMD stuff emulated? Yes, if you use __attribute__ ((vector_size (NN))) for a target which does not support vector registers of that size, gcc will emulate the vector handling. > Is there some flags/macros in gcc to indicate that? To indicate what? > How is it done in other targets which don't have the hardware support? In the obvious tedious way: as a loop over the elements. Ian
Re: Wiki, documenting Tree SSA passes
Basile STARYNKEVITCH <[EMAIL PROTECTED]> writes: > On the Wiki, http://gcc.gnu.org/wiki/MiddleEnd I am trying to document very > briefly the passes on Gimple SSA trees Thanks! > I cannot yet submit patches to it (file gcc/doc/passes.texi), because my > copyright assignment form takes some time to be signed at CEA-LIST where I > am working on GCC (but I think it will be signed soon). Do I understand > correctly that no patch of more than 10 lines can be sent to gcc-patches@ > without the FSF having received the copyright transfer (or assignment) > paper? Yes. > Is documenting the passes on the Wiki worthwhile? Yes. Ian
Re: Abt SIMD Emulation
Mohamed Shafi <[EMAIL PROTECTED]> writes: > I want to know what can be done in the back end of a target to indicate that > SIMD stuff should be emulated all the way. That should happen by default. > Are there any target macros or hooks available for that? > Will the target hook TARGET_VECTOR_MODE_SUPPORTED_P help me to indicate that? You should only define that if the port is prepared to handle vector operations itself. You don't need to use it for emulation. Ian
Re: Dead include file: dwarf.h ?
Steven Bosscher <[EMAIL PROTECTED]> writes: > As far as I can tell, dwarf.h is not included anywhere in gcc/ > or any of its subdirectories. Is there any reason not to remove > this file? I think dwarf.h was orphaned when dwarfout.c was removed in 2003. I think we should remove it. Ian
Re: Issue with hard regs
[EMAIL PROTECTED] writes: > I'm in the process of coding a GCC backend for a certain RISC-like > architecture. > Its register architecture consists of an integer register file (32 regs) and two > additional hard regs that should be programmer visible. Accesses to these hard > regs are also emitted related to certain RTL patterns (divmoddi4 and udivmoddi4, > for which these two hard regs should be written). Split 64-bit moves may or may > not use these regs as well. Sounds like MIPS. > I thought I had added the necessary information (new register classes, character > to match for the reg class), however I get the following error message: > > "unable to find a register to spill in class" Note that that error message is very general and can have a number of different underlying causes. > Can I disable filling/spilling for this register class? Sure: make the registers fixed. Or look at how the MIPS port handles HI and LO, with particular reference to mips_secondary_reload_class. Ian
Re: [PATCH] Relocated compiler should not look in $prefix.
"Carlos O'Donell" <[EMAIL PROTECTED]> writes: > A relocated compiler should not look in $prefix. I agree. I can't approve your patches, though. Ian
Re: Issue with hard regs
[EMAIL PROTECTED] writes: > I'll have a look at mips_secondary_reload_class; however, now I don't get > spilling/filling errors. I have a question: libgcc2.c is pretty stable, right? > I mean I shouldn't be looking for something going wrong with it. Right, libgcc2.c is quite stable. In a normal build it is the first complicated code to be compiled with the new compiler, so it is a normal place for reload problems to arise. Ian
Re: Why does GCC allow '$' character in variable and function names?
Sajish V <[EMAIL PROTECTED]> writes: > I think that C does not allow special characters ( '~' '!' '@' '#' '$' '%' > '^' '&' '*' ) in variable and function names. My knowledge is purely based on > the books that I have been reading to learn C. > > To verify this when I tried to compile a C program using GCC with '$' in > variable names and function names I found that GCC does not complain. No > errors or warnings. > > I used the basic command with no option to compile. > > bash$ gcc > > Can someone please let me know why GCC allows '$' character in variable and > function names? The C99 standard permits identifiers to contain implementation defined characters. See section 6.4.2.1. The standard does this to permit implementations to support identifiers written in languages which require characters which do not appear in English, such as 'ñ' or 'ö'. gcc permits '$' as an implementation defined character. It does this mainly for historical reasons. Ian
Re: C++ name mangling for local entities
[EMAIL PROTECTED] (Geoffrey Keating) writes: > For GCC, I've found it necessary to have a way to name local (that is, > namespace-scope 'static') variables and functions which allows more > than one such symbol to be present and have distinct mangled names. Out of curiosity: why start the name with "_Z" at all? If you don't start it with "_Z", then you don't have to worry about the standard name mangling. Ian
Re: Raw socket...
Basavaraj Hiremath <[EMAIL PROTECTED]> writes: > I am writing an example program to create a raw > socket on Fedora Core Linux. > > When I create the socket, my Linux system crashes. I need > to power down the system. I am running this program as > super user to create the raw socket. > > The following line crashes: > rawSocket = socket(AF_INET, SOCK_RAW, IPPROTO_RAW); Wrong mailing list. This mailing list is for gcc development. Please try a Linux programming list. Ian
Re: Question about LTO dwarf reader vs. artificial variables and formal arguments
Steven Bosscher <[EMAIL PROTECTED]> writes: > I haven't really been following the whole LTO thing much, but if I understand > correctly, the goal is to reconstruct information about declarations from > DWARF information that we write out for those declarations. If that's the > case, I wonder how LTO will handle artificial "variables" and formal argument > lists. I think it is a mistake to focus on DWARF details too much. We simply need some mechanism to write trees into an object file and to read them back in. That mechanism can be anything. We are using DWARF on the theory that it will be simpler because DWARF readers and writers already exist (I don't buy that argument myself, but, whatever). But it is clearly impossible to represent everything we need to represent in DWARF. So we need to extend DWARF as necessary to represent all the tree details. That is, we are not going to write out DWARF. We can't, because DWARF is not designed to represent all the details which the compiler needs to represent. What we are going to write out is a superset of DWARF. And in fact, if it helps, I think that we shouldn't hesitate to write out something which is similar to but incompatible with DWARF. In general reading and writing trees is far from the hardest part of the LTO effort. I think it is a mistake for us to get too tied up in the details of how to represent things in DWARF. (I also think that we could probably do better by defining our own bytecode language, one optimized for our purposes, but it's not an issue worth fighting over.) Ian
Re: Abt -fpic, -fPIC Option
"Rohit Arul Raj" <[EMAIL PROTECTED]> writes: > I have built a cross-compiler for m68k-elf with GCC 4.1.1. > I need to know the difference in implementations of -fpic and -fPIC > for this particular target. -fpic uses a 16-bit offset when accessing the GOT. -fPIC uses a 32-bit offset. Thus -fpic may fail if you have more than 16K global variables, and -fPIC will fail on the 68000 (are there still 68000s out there?). On the 68000 only, -fPIC will use an lea/jmp sequence when calling a function through the PLT. -fpic will use a simple bra, which may fail if the program text segment is more than 64K. The 68020 and up always use bra.l, which of course is not available on the 68000. Ian
Re: fdump-tree explanation
"Dino Puller" <[EMAIL PROTECTED]> writes: > I want to make a statistic (I haven't found one) over the Linux source > code, and I want to know how many times expressions are simplified by > gcc. I don't think any of us know what you mean by "how many times expressions are simplified." Can you be more specific? Can you provide an example of what you want to look for, and an example of what you do not want to look for? Ian
Re: Abt RTL expression - Optimization
"Rohit Arul Raj" <[EMAIL PROTECTED]> writes: > This small bit of code worked fine with all optimization levels except -Os.
>
>   unsigned int n = 30;
>   void x ()
>   {
>     unsigned int h;
>     h = n <= 30;  // Line 1
>     if (h)
>       p = 1;
>     else
>       p = 0;
>   }
>
> When we tried to debug the emitted RTL instructions for -Os, it was found that the RTL instructions for Line #1 (compare and gtu) were not emitted at all, i.e. there is no reference to "h" or "n". For the same optimization -Os, if we declare the identifier "h" as global, it generates the instructions properly. That small bit of code won't compile, since 'p' is undeclared. If 'p' is a global variable, then this certainly seems to be a bug. But you should be aware that -Os changes inlining behaviour, and different inlining may cause different behaviour here depending on how 'p' is used. > While checking the dumps of RTL, the above mentioned code for "h, n" > was present in the file 20020611-1.c.25.cse2 but not in > 20020611-1.c.26.life1. > > 1. Is the file 20020611-1.c.25.cse2 input to the life1 optimization pass? It is a dump of what the intermediate representation looks like after the cse2 pass. > 2. What does the .life1 life analysis pass do? It does flow analysis: it records which pseudo-registers are live at all parts of the program. It also does a number of minor random cleanups. > 3. What are the probable causes for the elimination of the RTL code (compare & gtu) between the above mentioned passes? The probable cause is that 'p' appears to be unused, and the assignment to 'p' is dead. Ian
Re: Abt RTL expression - Optimization
"Rohit Arul Raj" <[EMAIL PROTECTED]> writes:
> > > This small bit of code worked fine with all optimization levels except -Os.
> > >
> > >   unsigned int n = 30;
> > >   void x ()
> > >   {
> > >     unsigned int h;
> > >     h = n <= 30;  // Line 1
> > >     if (h)
> > >       p = 1;
> > >     else
> > >       p = 0;
> > >   }
> > >
> > > When we tried to debug the emitted RTL instructions for -Os, it was found that the RTL instructions for Line #1 (compare and gtu) were not emitted at all, i.e. there is no reference to "h" or "n". For the same optimization -Os, if we declare the identifier "h" as global, it generates the instructions properly.
> >
> > That small bit of code won't compile, since 'p' is undeclared. If 'p' is a global variable, then this certainly seems to be a bug. But you should be aware that -Os changes inlining behaviour, and different inlining may cause different behaviour here depending on how 'p' is used.
>
> p is a global variable. But the problem is not with p, but with code generation for h, n. Sure. But the most likely reason that the test of 'h' is being removed is that the compiler thinks that there is no need to store to 'p'. And once the test of 'h' is removed, there is no need to test 'n'. > In the RTL dump after the cse2 pass, I do have the code for h, n. But in > the RTL dump after life analysis, that part of code is missing. > Is this output of the cse2 pass used for life analysis? If I understand the question correctly, then the answer is yes. The compiler works by creating an intermediate representation (IR) and then manipulating it. The cse2 pass runs on the IR, then the life1 pass runs on the IR. The .cse2 file contains a dump of the IR after the cse2 pass is complete. > > > 3. What are the probable causes for the elimination of the RTL code (compare & gtu) between the above mentioned passes? > > The probable cause is that 'p' appears to be unused, and the assignment to 'p' is dead. 
> Since p is a global variable, it can be used in other functions. Any > other causes? I can't think of any. > 4. If I disable the gcse optimization pass (i.e. -Os -fno-gcse), the code > is generated properly for h, n and the program works fine (it failed > for -Os). How does gcse affect code generation at the .life1 pass? The gcse pass can change the IR significantly, and thus affects the life1 pass in many different ways which are difficult to characterize. You really have to look at what exactly is going on. Ian
Re: submitting patches before copyright assignment fully done?
Basile STARYNKEVITCH <[EMAIL PROTECTED]> writes: > My copyright assignment is not yet signed, but I am pretty confident that it > will be signed (hopefully soon, and surely in 2006, i.e. before Christmas). > > Can I submit patches (to gcc-patches@) which are not trivial (i.e. more than > 10 lines of code), or should I wait till the copyright assignment has been > received by the FSF (signed by my employing organisation and by the FSF)? The safest approach for all is to wait until the copyright assignment has been received. Thanks, and sorry for the inconvenience. Ian
Re: Abt RTL expression - Optimization
"Rohit Arul Raj" <[EMAIL PROTECTED]> writes: > The problem due to which the below mentioned program was not working > is because of CODE HOISTING pass. I just masked the code hoisting step > and the program worked fine. At this point, if you want us to be able to give you useful suggestions, you need to show us a complete, standalone, test case. And remind us what target you are using. Without complete information we are just guessing. > 2. Any documentation on Code Hoisting Algorithm used by GCC 4.1.1? Only in the form of comments in the source code. Ian
Re: Register Usage - RTL Expression
"Rohit Arul Raj" <[EMAIL PROTECTED]> writes: > 1. How does the life1 pass get the register usage information from > the gcse pass? The GCSE pass does not generate any register usage information. The life1 pass computes register usage by looking at the RTL. > 2. From which other passes, and how, can the information about registers > used be determined by looking at the RTL dump of the corresponding > pass? You can always determine register usage by simply looking at the RTL and seeing which registers it uses. There are a couple of special cases relating to exception handling. > 3. Any documentation regarding the above? Not in any one place. Ian
Re: Abt gcsec-1.c testcase
"Mohamed Shafi" <[EMAIL PROTECTED]> writes: > Can anybody tell me the purpose of the testcase > testsuite/gcc.dg/special/gcsec-1.c in the gcc testsuite? > Is it something related to garbage collection? > > What exactly does this testcase test? It's intended to test linker garbage collection of unused functions and variables. The idea is that the linker will remove the function "unused", since it is not used. See the linker documentation for the --gc-sections option. This test doesn't actually really test anything except that the options don't cause an error. A complete test would check that the function "unused" does not in fact appear in the output. As far as I can tell that is not actually checked. Ian
Re: How to deliver my source code???
"Marcelo Marchi" <[EMAIL PROTECTED]> writes: > I need some help from gcc gurus regarding delivering source code. > I have an application that needs to be delivered to be compiled on my > customer's side, BUT they cannot make changes to, or have any understanding > of, the code. > This delivery is strictly necessary for many reasons, particularly because of > changes that will be made in any case with a tool. So I need to > know if there is some ANSI C compiler that compiles encrypted C source files, > or any other solution for this. gcc has no such facility. Moreover, if gcc did have such a facility, it would be useless, because gcc is free software distributed under the GPL, which means that anybody is permitted to make changes to the compiler. So if gcc were able to read encrypted source code, it would be easy to modify gcc to simply copy that code to some other file, thus defeating the encryption. In the fully general case, the kind of protection you are looking for is impossible to achieve. That said, you can probably achieve it with careful implementation in a proprietary compiler. But you can never do it with gcc. Ian
Re: build failure, GMP not available
"Kaveh R. GHAZI" <[EMAIL PROTECTED]> writes: > On Mon, 30 Oct 2006, Geoffrey Keating wrote: > > > 5. Are you aware that the GMP home page says > > > > Note that we chose not to work around all new GCC bugs in this > > release. Never forget to do make check after building the library > > to make likely it was not miscompiled! > > > > and therefore this library needs to be part of the bootstrap, not > > built separately? > > One more thing, I initially went down the road of including the GMP/MPFR > sources in the gcc tree and building them as part of the bootstrap > process. But the consensus was not to do that: > > http://gcc.gnu.org/ml/gcc/2006-10/msg00167.html I'm not sure I entirely agree with Mark's reasoning. It's true that we've always required a big set of tools to do development with gcc. And it's true that we require GNU make to be installed and working in order to build gcc. But this is the first time that we've ever required a non-standard library to be installed before J. Random User can build gcc. And plenty of people do try to build gcc themselves, as can be seen on gcc-help. If we are going to require these libraries, and we are not going to include them directly in the gcc sources, then I think it is incumbent on us to make the build procedure much much smarter. For example, if sufficiently new versions of the libraries can not be found, then perhaps the build procedure should actually download the versions from a known good site (e.g., gcc.gnu.org), unpack them, and build them. I don't care whether this is done at configure time or make time. If the code can not be downloaded, then I think the build procedure needs to give an extensive and clear error message which includes a description of where the sources can be found, how to build them, and how to run configure correctly. I think that if we stick with our current approach, we will have a lot of bug reports and dissatisfied users when gcc 4.3 is released. Ian
Re: build failure, GMP not available
Mark Mitchell <[EMAIL PROTECTED]> writes: > I would strongly oppose downloading stuff during the build > process. We're not in the apt-get business; we can leave that to the > GNU/Linux distributions, the Cygwin distributors, etc. If you want to > build a KDE application, you have to first build/download the KDE > libraries; why should GCC be different? Because gcc is the first step in bringing up a new system. Having complex sets of dependencies makes people's lives harder. I'm sure we've all had the unpleasant experience of trying to build something from the net only to discover that we had to build five other things first. It sucks. If we were requiring libraries which were normally installed, I wouldn't mind so much. But I had to download and build my own copy of MPFR for Fedora Core 5, which is a recent and popular distribution (GMP, at least, was already installed). > > I think that if we stick with our current approach, we will have a lot > > of bug reports and dissatisfied users when gcc 4.3 is released. > > I'd argue that the minority of people who are building from source > should not be our primary concern. Obviously, all other things being > equal, we should try to make that easy -- but if we can deliver a > better compiler (as Kaveh has already shown we can with his patch > series), then we should prioritize that. For those that want to build > from source, we should provide good documentation, and clear > instructions as to where to find what they need, but we should assume > they can follow complicated instructions -- since the process is > already complicated. I disagree: the process of building gcc from a release (as opposed to building the development version of gcc) really isn't complicated. The only remotely non-standard thing that is required is GNU make. Given that, all you need to do is "SRCDIR/configure; make". Admittedly, people often get even that simple instruction wrong. But it is easy to explain what to do. 
I'm certainly not saying that we should pull out GMP and MPFR. But I am saying that we need to do much much better about making it easy for people to build gcc. Sure, the people who build gcc aren't our primary concern. But they are a major concern, if only because those are the people who are our future developers. Ian
Re: [ANNOUNCE] GlobalGCC [GGCC] project (within ITEA programme)
Basile STARYNKEVITCH <[EMAIL PROTECTED]> writes: > The GGCC (ITEA) project aims to extend the free GNU Compiler Collection > by globally processing several compilation units (e.g. work on a whole > program or on a library) in order to customize and configure GCC to > European software industry needs: for performance or for better > diagnosis. > > The GGCC (ITEA) contribution will be GPL-licensed free software, and will > be proposed to the FSF. The GGCC (ITEA) consortium is determined to work > in close cooperation with the GCC community and the FSF. I'm glad to hear about this effort. I want to make sure that you are aware of the ongoing LTO effort to implement whole program optimization. There is some more information available here: http://gcc.gnu.org/wiki/LinkTimeOptimization If you want to get your changes into mainline gcc, then it would be best if you were able to work within this framework. Thanks! Ian
Re: Even stricter implicit conversions between vectors
Mark Shinwell <[EMAIL PROTECTED]> writes: > I would now like to propose that the check in that function be made > even stronger such that it disallows conversions between vectors > whose element types differ -- even if an implicit conversion exists > between those element types. As far as I can see, what that amounts to is saying that there should not be an implicit conversion between vector of int and vector of unsigned int. It seems to me that a cast would then be required to call, e.g., vec_add with arguments of type __vector unsigned int. I don't see how that is useful. But perhaps I am mistaken; can you give an example of the type of thing which is currently permitted which you think should be prohibited? Ian
Re: build failure, GMP not available
"Kaveh R. GHAZI" <[EMAIL PROTECTED]> writes: > Should that message refer to this: > ftp://gcc.gnu.org/pub/gcc/infrastructure/ > > or this: > ftp://ftp.gnu.org/gnu/gmp/ > http://www.mpfr.org/mpfr-current/ > > or this (perhaps with more details): > http://gcc.gnu.org/install/prerequisites.html The first, I think. > I prefer the latter, to avoid duplicating the info in more than one > place. If prerequisites needs more info, I'll fill it in there. I think the primary goal should be making it as simple and obvious as possible for people to build gcc. If that can be done without duplicating information, that is good. But the primary goal should be making it very very easy to build gcc. If we encounter the problem whose solution is "download mpfr from gcc.gnu.org and untar it," then I don't think it helps to point people to the long list at http://gcc.gnu.org/install/prerequisites.html, which is irrelevant for most people. Ian
Handling of extern inline in c99 mode
[ Moved from gcc-patches@ to [EMAIL PROTECTED] ] Andrew Pinski <[EMAIL PROTECTED]> writes: > On Tue, 2006-10-31 at 21:34 -0800, Geoffrey Keating wrote: > > Here's the list of log (and therefore ChangeLog) entries. There is > > one change that I haven't merged yet, Caroline's pubtypes changes; > > that seems to need some work, I'll leave it for Caroline. > > Well, the following program on any glibc target is now broken with > -std=c99:
>
> #include <stdio.h>
>
> int main(void)
> {
>   printf("%d\n", 1);
>   return 0;
> }

We discussed this offline. The problem, as many people know, is that the traditional gcc definition of "extern inline" is incompatible with the C99 definition. The meanings are approximately reversed. Here is a review followed by a proposal. Review: In traditional gcc, "extern inline" means that the function should be inlined wherever it is used, and no function definition should be emitted. Moreover, it is permitted to provide both an "extern inline" and a normal definition, in which case the normal definition takes precedence. In traditional gcc, "inline" without "extern" or "static" means that the function should be compiled inline where the inline definition is seen, and the compiler should also emit a copy of the function body with an externally visible symbol, as though the declaration appeared without "inline". In C99, "extern inline" means that the function should be compiled inline where the inline definition is seen, and that the compiler should also emit a copy of the function body with an externally visible symbol. That is, C99 "extern inline" is equivalent to traditional gcc "inline". In C99, "inline" without "extern" or "static" means that the function should be compiled inline and that no externally visible function body should be emitted. That is, C99 "inline" is similar to traditional gcc "extern inline", although there is no equivalent ability to override the inline definition with a non-inline definition. 
glibc uses "extern inline", and exploits the traditional gcc ability to override the "extern inline" function with a regular function. For example, the definition of tolower in <ctype.h> is:

extern __inline int
__NTH (tolower (int __c))
{
  return __c >= -128 && __c < 256 ? (*__ctype_tolower_loc ())[__c] : __c;
}

but the definition in tolower.c is:

#define __ctype_tolower \
  ((int32_t *) _NL_CURRENT (LC_CTYPE, _NL_CTYPE_TOLOWER) + 128)

int
tolower (int c)
{
  return c >= -128 && c < 256 ? __ctype_tolower[c] : c;
}

Note that these definitions are differently optimized for the different uses. End of review. The proposal is due to Zack, except for where I've butchered it. 1) Implement a function attribute which is the equivalent of "extern inline". Backport support for this attribute to all open branches. Try to persuade the glibc developers to use this feature instead of "extern inline" when it is available, by adjusting the handling of __USE_EXTERN_INLINES in features.h and elsewhere. 2) Add a new warning, turned on by default, for all uses of "extern inline". The warning should say that the meaning of "extern inline" is different in c99 and gnu99, and that the default behaviour will be changing. Backport this warning to all open branches. Note that warnings are not issued for system header files, so this warning will not trigger on glibc in ordinary usage. 3) In c99 and gnu99 mode, treat bare "inline" as specified by the C99 standard (including the specified behaviour if "inline" and "extern" appear in separate declarations). In gnu89 mode, issue a warning for bare "inline", saying that the behaviour will change in gnu99 mode. The rationale here is that uses of bare "inline" in the gnu89 sense are less common than uses of "extern inline". 
codesearch.google.com finds about 6000 uses of "extern inline" in code written in C, but the search

  inline -static -extern -# lang:c file:\.c$

finds only 100 occurrences (I excluded .h files here because any normal inline declaration in a .h file in traditional gcc should be "extern inline" or "static inline", so most of those .h files are probably actually C++).

4) In gcc 4.4, change c99 and gnu99 to use the C99 mandated definition of "extern inline". Keep the warnings in gnu89 mode.

This leaves open the question of when we want to change the default to gnu99, but I think it gives us a workable transition plan to get there, at least on this troublesome issue.

Comments?

Ian
Re: Handling of extern inline in c99 mode
Mark Mitchell <[EMAIL PROTECTED]> writes: > Ian Lance Taylor wrote: > > > Here is a review followed by a proposal. > > How does this proposal handle the current problematic situation that > -std=c99 is broken on Linux? According to the proposal, we will restore the GNU handling for "extern inline" even when using -std=c99, which will fix the problem when using glibc. (Then we will break it again in gcc 4.4, but by then we can hope that glibc will be fixed, and we can force people who want to use -std=c99 or -std=gnu99 on GNU/Linux to upgrade to a newer glibc.) Ian
Re: Handling of extern inline in c99 mode
"Steven Bosscher" <[EMAIL PROTECTED]> writes: > On 11/1/06, Paolo Bonzini <[EMAIL PROTECTED]> wrote: > > > > > According to the proposal, we will restore the GNU handling for > > > "extern inline" even when using -std=c99, which will fix the problem > > > when using glibc. > > > > I am probably overlooking something, but if the only problematic system > > is glibc, maybe this can be fixed with a fixincludes hack? > > That would be a massive hack. Indeed. Moreover, glibc is not the only problematic system. gcc historically supported "extern inline" long before c99 existed, and there is plenty of existing code which uses gcc's definition. I believe that we need to give that code a decent chance to change before we switch over to c99. That is why I recommended adding warnings to all active branches, and postponing the changed behaviour of "extern inline" to gcc 4.4. Ian
Re: Handling of extern inline in c99 mode
Andrew Pinski <[EMAIL PROTECTED]> writes: > In the 4.3 timeframe, can we also add a flag to enable the correct behavior? > Yes the problem with this is that we have to support this flag for a long time > but the benifit is that we can change the default to the new way with just > flipping > a switch. Sure, that makes sense. > Also it would be nice to have an attribute or a new keyword to get > the old "extern inline" behavior, something like > __extern_but_inline? Or is there a real equavilant with C99 style > inling (I have not followed this part close enough to figure that > out). Yes, that was part of Zack's proposal--item number 1. There is no equivalent to gcc's "extern inline" in C99, nor is there any equivalent attribute. Ian
Re: Question about asm on functions/variables
Andrew Pinski <[EMAIL PROTECTED]> writes: > I noticed this with the recent C99 inline changes but it is unrelated to > the changes as 4.1 also has the same issue. With the following TU: > extern void f(void) __asm__("g"); > extern void g(void); > extern void f(void) {} > extern void g(void) {} > --- > We don't reject this TU during compiling but the assembler does. Is > this correct or should we actually reject this during compiling? Personally, I think it would be OK to reject this in the compiler but it doesn't seem all that important to me. If it is easy to detect this case in the compiler, then we could go ahead and do it there, but I suspect that it is not all that easy. Ian
Re: Even stricter implicit conversions between vectors
Mark Shinwell <[EMAIL PROTECTED]> writes: > Ian Ollmann wrote: > > stronger type checking seems like a good idea to me in general. > > I agree, but I don't really want to break lots of code all at once, > even if that code is being slightly more slack than it perhaps ought > to be :-) > > Given that no-one has really objected to stronger type-checking here > _per se_, then I see two ways forward: > > 1. Treat this as a regression: fix it and cause errors upon bad > conversions, but risk breaking code. > > 2. Emit a warning in cases of converting "vector signed int" to > "vector unsigned int", etc., and state that the behaviour will change > to an error in a later version. > > Thoughts? I would vote for: break the code, but provide an option to restore the old behaviour, and mention the option in the error message. Note that these sorts of conversions affect C++ overloaded functions, so in some cases people will not see the new error--they will see that some function call can not be made. My guess is that this is sufficiently unusual that we can get away with breaking it without a useful error message. Ian
Re: Abt RTL expression - Optimization
"Rohit Arul Raj" <[EMAIL PROTECTED]> writes: > The relevant part of RTL dump of fgcse pass is given below: > > (insn 13 12 50 0 (set (reg:CC 21 cc) > (compare:CC (reg:SI 29 [ n ]) > (const_int 30 [0x1e]))) 68 {*cmpsi_internal} (nil) > (nil)) > > (insn 50 13 53 0 (parallel [ > (set (reg/f:SI 38) > (symbol_ref:SI ("p") )) > (clobber (reg:CC 21 cc)) > ]) 22 {movsi_long_const} (nil) > (nil)) > > (insn 53 50 14 0 (parallel [ > (set (reg/f:SI 39) > (symbol_ref:SI ("k") )) > (clobber (reg:CC 21 cc)) > ]) 22 {movsi_long_const} (nil) > (nil)) > > (jump_insn 14 53 16 0 (set (pc) > (if_then_else (gtu:CC (reg:CC 21 cc) > (const_int 0 [0x0])) > (label_ref 27) > (pc))) 41 {*branch_true} (nil) > (expr_list:REG_BR_PROB (const_int 5000 [0x1388]) > (nil))) > > 1. In Insn 13, the compiler uses the CC Reg instead of "h". Then the > compiler inserted to movsi_long_const inbetween the compare and the > if_then_else. BUT the movsi_long_const destroys the condition code > register CC .As the optimizer considers CC dead due to the > clobber(reg:CC 21 cc)) it removes the first pattern actually setting > CC. It gets deleted later. Thanks for finally giving the complete program and the RTL. I think you said earlier that this is a private target, not a standard gcc target. On that basis, I would say that the problem appears to be that you have a cc0 machine of a type which gcc does not handle naturally. If your comparison instructions set a fixed hard register, and simple register to register moves clobber that hard register, then you must handle comparison instructions specially before reload. You must emit a combined compare_and_branch instruction, which does the comparison and the branch in a single insn. Then you may write a define_split which runs after reload and splits the instruction back into its components for the second scheduling pass. You've encountered a somewhat subtle way in which gcc fails if you don't do this. 
There is a much more obvious way in which it will fail: reload will wind up clobbering the condition register every time it has to load or store a register.

Writing the patterns to avoid using the fixed condition register before reload is tedious but typically not difficult. Writing the define_splits which run after reload is typically more complicated. Note that in general the define_splits are only useful if scheduling helps your processor.

The C4X is an example of a standard target which has some of these issues.

(In fairness I should say that there is an alternative, which is to use (cc0) for the condition register. But I do not recommend this, as support for (cc0) may be removed from the compiler at some point in the future.)

Ian
Re: Volatile / TREE_THIS_VOLATILE question
Tobias Burnus <[EMAIL PROTECTED]> writes: > I use in my patch: > + if (sym->attr.volatile_) > + TREE_THIS_VOLATILE (decl) = 1; I think you will also want to give DECL a type which is volatile-qualified: build_qualified_type (original_type, TYPE_QUAL_VOLATILE) Ian
Re: Where is the splitting of MIPS %hi and %lo relocations handled?
David Daney <[EMAIL PROTECTED]> writes:

> I am going to try to fix:
>
> http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29721
>
> Which is a problem where a %lo relocation gets separated from its
> corresponding %hi.
>
> What is the mechanism that tries to prevent this from happening? And
> where is it implemented?

This is implemented by having the assembler sort the relocations so that each %lo relocation follows the appropriate set of %hi relocations. It is implemented in gas/config/tc-mips.c in append_insn. Look for reloc_needs_lo_p and mips_frob_file.

At first glance the assembler does appear to handle %got correctly, so I'm not sure why it is failing for you.

Ian
Re: Abt long long support
"Mohamed Shafi" <[EMAIL PROTECTED]> writes: > Looking at a .md file of a backend it there a way to know whether a > target supports long long gcc always supports "long long" for all targets. Can you ask a more precise question? Ian
Re: Where is the splitting of MIPS %hi and %lo relocations handled?
David Daney <[EMAIL PROTECTED]> writes: > Ian Lance Taylor wrote: > > David Daney <[EMAIL PROTECTED]> writes: > > > >>I am going to try to fix: > >> > >>http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29721 > >> > >>Which is a problem where a %lo relocation gets separated from its > >>corresponding %hi. > >> > >>What is the mechanism that tries to prevent this from happening? And > >>where is it implemented? > > This implemented by having the assembler sort the relocations so that > > each %lo relocations follows the appropriate set of %hi relocations. > > It is implemented in gas/config/tc-mips.c in append_insn. Look for > > reloc_needs_lo_p and mips_frob_file. > > At first glance the assembler does appear to handle %got correctly, > > so > > I'm not sure why it is failing for you. > > Did you look at the assembly fragment in the PR? > > Is it correct in that there is a pair of %got/%lo in the middle of > another %got/%lo pair? Sure, why not? They can be disambiguated by looking at which symbol the %got/%lo applies to. That is just what the assembler reloc sorting implements in mips_frob_file. (Or, at least, is supposed to implement, though apparently something is going wrong in your case.) The assembler sorts the relocations so that the linker always see the %lo reloc immediately after the corresponding %got reloc(s). Ian
Re: Should GMP 4.1+ and MPFR 2.2+ be needed when we're not building gfortran?
Doug Gregor <[EMAIL PROTECTED]> writes: > The configure changes on the trunk require GMP 4.1+ and MPFR 2.2+. If > I understand things correctly, these libraries are only needed for > gfortran. Would it be possible to disable the checks for GMP and MPFR > when building with --enable-languages=[something not including > fortran] ? They are now required to build the generic middle-end. They are no longer only required for Fortran. Ian
Re: Encouraging indirect addressing mode
[EMAIL PROTECTED] writes: > Here I've used a macro to keep track of the farthest place reached in the > code. As you can see, I've even tried to set it up in such a way that it > will use a register to access the value. However, I don't get that result, > as I guess that is optimized out. Instead each comparison uses the full > address of the array, creating two more words for the read and for the > write. I'd prefer a sequence to read something like: > > movel #main_line, %a0 /* only once, at the start of the function */ > moveq #(LINE-1), %d0 > cmpl %a0@, %d0 > blt skip > moveb #LINE, %d0 > movel %d0,%a0@ > skip: ... > > I haven't seen any options that encourage more use of indirect addressing. > Are there any that I have missed? If not, I assume I will need to work > with the machine description. I've downloaded the gcc internals book, but > it's a lot of material and it's hard to figure out where to start. Can > anybody point me in the right direction? The first thing to try is to use TARGET_ADDRESS_COST to make the cost of register indirect smaller than the cost of an absolute address reference, at least when optimize_space is true. m68k.c doesn't seem to have a definition of TARGET_ADDRESS_COST, so you will have to add one. You should also take a look at TARGET_RTX_COSTS. Ian
Re: Abt long long support
"Mohamed Shafi" <[EMAIL PROTECTED]> writes: > So when i looked into the .md file i saw no patterns with DI machine > mode ,used for long long(am i right?), execpt > > define_insn "adddi3" and define_insn "subdi3" > > The .md file says that this is to prevent gcc from synthesising it, > though i didnt understand what that means. If there is no adddi3 pattern, then gcc will handle 64-bit addition by doing 32-bit additions, and similarly for 64-bit subtraction. Putting an adddi3 pattern in the MD file lets the backend specify how to do 64-bit addition. The backend can do it more efficiently than the generic approach if the target provides an add-with-carry instruction. I don't see how we can say anything else useful since we don't know anything about how your examples are failing to compile. Ian
Re: Abt RTL expression - combining instruction
"Rohit Arul Raj" <[EMAIL PROTECTED]> writes: > I am trying to combine the compare and branch instruction. But my > instructions are not getting generated as my operands are not matched > properly. > > Previously for individual compare instructions, i had > operand 0 - Register operand > operand 1 - Non memory operand. > > For branch instruction, > operator 0 - compare operator > operand 1 - label. > > So when i combined compare and branch, i just superimposed both > statements with same conditions with > operand 0 - Register operand > operand 1 - Non memory operand > operator 2 - comparison operator > operand 3 - label. > > 1. Is it the right way to match operands and operators while combining > instruction? > 2. How to check where my instruction matching goes wrong? When writing an MD file you have to think about what you generate (insns/expanders with standard names) and what you recognize (insns). You have to always generate something which you can recognize. Anyhow, the easier way to generate a combined compare and branch instruction is to use an insn or expander with the cbranchMM4 standard name. See the documentation. Ian
Re: [m32c-elf] losing track of register lifetime in combine?
DJ Delorie <[EMAIL PROTECTED]> writes:

> The r8c/m16c family cannot shift by more than 16 bits at a time ever,
> or 8 bits at a time with constant shifts. So, to do a variable number
> of shift on a 32 bit value, it needs to emit a conditional, turning
> the attached example into this:
>
> i = 0xf;
> if (j >= 16)
>   {
>     i >>= 8;
>     i >>= 8;
>     j -= 16;
>   }
> ...
>
> Combine (rightfully) knows that i becomes the constant 0xf and
> replaces the two constant shifts with it. However, it doesn't update
> the life information. So, we have a basic block (#3 below) which has
> register 28 live, but being assigned (i.e. it's really dead). GCC
> notices this later, and dies.
>
> Ideas?

The problem is presumably arising from the REG_EQUAL notes. Where are those being generated? Why does this case not arise for other targets?

The only fully correct fix that I see in combine is for combine to notice that it is replacing the first use of a register which is live at the start of the block. In that case, it should set the appropriate bit in refresh_blocks. The place to do this would probably be somewhere around the calls to distribute_notes in try_combine.

But I'm not sure that's really the right fix.

Ian
Re: copy_from_user() crash...
Basavaraj Hiremath <[EMAIL PROTECTED]> writes: > My driver is crashing when I call copy_from_user() > call. Does any one have idea about this ? Wrong mailing list. Try a kernel programming list. This list is for compiler development. Ian
Re: Abt RTL expression - combining instruction
"Rohit Arul Raj" <[EMAIL PROTECTED]> writes: > I have used cbranchmode4 instruction to generate combined compare and > branch instruction. > > (define_insn "cbranchmode4" > (set (pc) (if_then_else > (match_operator:CC 0 "comparison_operator" > [ (match_operand:SI 1 "register_operand" "r,r") > (match_operand:SI 2 "nonmemory_operand" "O,r")]) > (label_ref (match_operand 3 "" "")) > (pc)))] > This pattern matches if the code is of the form > > if ( h == 1) > p = 0; > > if the code is of the form > if (h), if (h >= 0) > p = 0; > > Then it matches the seperate compare and branch instructions and not > cbranch instruction. > > Can anyone point out where i am going wrong? If you have a cbranch insn, and you want that one to always be recognized, then why do you also have separate compare and branch insns? Ian
Re: Canonical type nodes, or, comptypes considered harmful
[EMAIL PROTECTED] (Richard Kenner) writes: > > My conclusion at the end was, the best speed up possible, isn't to > > mess with the callers, but rather, to get types canonicalized, then > > all the work that comptypes would normally do, hopefully, just all > > goes away. Though, in the long run those quadratic algorithms have to > > one day die, even if comptypes is fixed. > > My confusion here is how can you "canonicalize" types that are different > (meaning have different names) without messing up debug information. > If you have: > > Foo xyz; > > and you display xyz in the debugger, it needs to say it's type "Foo", not > some similar-but-differently-named type. > > Or maybe this is C++-specific and isn't relevant in C, so I'm not going to > understand it. The way to canonicalize them is to have all equivalent types point to a single canonical type for the equivalence set. The comparison is one memory dereference and one pointer comparison, not the current procedure of checking for structural equivalence. This assumes, of course, that we can build an equivalence set for types. I think that we need to make that work in the middle-end, and force the front-ends to conform. As someone else mentioned, there are horrific cases in C like a[] being compatible with both a[5] and a[10] but a[5] and a[10] not being compatible with each other, and similarly f() is compatible with f(int) and f(float) but the latter two are not compatible with each other. Fortunately C99 6.2.7 paragraph 2 says "all declarations that refer to the same object or function shall have compatible type; otherwise, the behavior is undefined." I think that is as close as C gets to the ODR, and I think it may give us the leeway we need to build an equivalence set for these awful cases. Ian
Re: Obtaining builtin function list.
Brendon Costa <[EMAIL PROTECTED]> writes: > How can I get a full list of all GCC C++ built-in functions that may be > used on a given platform or GCC build? > > For example, __cxa_begin_catch(), __cxa_end_catch(), __builtin_memset ... > > I am currently working with GCC 4.0.1 source base. Well, there is the documentation, e.g.: http://gcc.gnu.org/onlinedocs/gcc-4.0.3/gcc/Other-Builtins.html Or if you want to look at the source code, look at builtins.def and look for calls to add_builtin_function or lang_hooks.builtin_function in config/CPU/*.c. Well, that will tell you about functions like __builtin_memset: functions which are specially recognized and handled by the compiler. __cxa_begin_catch is a different sort of function, it is a support function which is called by the compiler. For the complete set of those functions, look at libgcc*.* and libsupc++.a in the installed tree (generally under lib/gcc/TARGET/VERSION). Those functions generally have no user-level documentation. Ian
Re: Volatile operations and PRE
Andrew Haley <[EMAIL PROTECTED]> writes: > > 2006-11-07 Paolo Bonzini <[EMAIL PROTECTED]> > > > > * gimplify.c (fold_indirect_ref_rhs): Use > > STRIP_USELESS_TYPE_CONVERSION rather than STRIP_NOPS. > > Regtested x86-64-gnu-linux. The only interesting failure was > mayalias-2.c, but that also fails before the patch. This is OK for active branches. Thanks. Ian
Re: Canonical type nodes, or, comptypes considered harmful
"Bernhard R. Link" <[EMAIL PROTECTED]> writes:

> * Ian Lance Taylor <[EMAIL PROTECTED]> [061108 16:15]:
> > This assumes, of course, that we can build an equivalence set for
> > types. I think that we need to make that work in the middle-end, and
> > force the front-ends to conform. As someone else mentioned, there are
> > horrific cases in C like a[] being compatible with both a[5] and a[10]
> > but a[5] and a[10] not being compatible with each other, and similarly
> > f() is compatible with f(int) and f(float) but the latter two are not
> > compatible with each other.
>
> Isn't void* and anyothertype* the same case?

No. What we are asking about here is which types are compatible in the sense that they may be applied to the same object. That is, these two declarations are permitted:

  int a[];
  int a[10];

These two declarations are not permitted:

  void* p;
  int* p;

This is because int[] and int[10] are compatible types, but void* and int* are not compatible types. The relationship between int* and void* is that code is permitted to rely on an implicit conversion between the two types (in C; in C++ there is only an implicit conversion to void*, not away from it).

> And how are classes and parent classes made compatible in C++? Is the
> front end always making a implicit type conversion or are they 'equivalent'
> in one direction?

Again this is a case of type conversion, not type compatibility or equivalence.

Ian
Re: Obtaining builtin function list.
Brendon Costa <[EMAIL PROTECTED]> writes: > Are there also frontend specific > builtins that I will need to handle in addition to those builtins > defined in builtins.def? No. > As for the libsupc++.a and libgcc*.* libraries. Are they compiled with > the newly generated gcc/g++ compilers or are they compiled with the > compiler used to build gcc and g++? They are compiled with the newly generated gcc/g++ compilers. Ian
Re: Planned LTO driver work
Mark Mitchell <[EMAIL PROTECTED]> writes: > 1. Add a --lto option to collect2. When collect2 sees this option, > treat all .o files as if they were .rpo files and recompile them. We > will do this after all C++ template instantiation has been done, since > we want to optimize the .o files after the program can actually link. > > 2. Modify the driver so that --lto passes -flto to the C front-end and > --lto to collect2. Sounds workable in general. I note that in your example of gcc --lto foo.c bar.o this presumably means that bar.o will be recompiled using the compiler options specified on that command line, rather than, say, the compiler options specified when bar.o was first compiled. This is probably the correct way to handle -march= options. I assume that in the long run, the gcc driver with --lto will invoke the LTO frontend rather than collect2. And that the LTO frontend will then open all the .o files which it is passed. Ian
Re: Planned LTO driver work
Mark Mitchell <[EMAIL PROTECTED]> writes:

> > I assume that in the long run, the gcc driver with --lto will invoke
> > the LTO frontend rather than collect2. And that the LTO frontend will
> > then open all the .o files which it is passed.
>
> Either that, or, at least, collect2 will invoke LTO once with all of the
> .o files. I'm not sure if it matters whether it's the driver or
> collect2 that does the invocation. What do you think?

I think in the long run the driver should invoke the LTO frontend directly. The LTO frontend will then presumably emit a single .s file. Then the driver should invoke the assembler as usual, and then the linker. That will save a process--if collect2 does the invocation, we have to run the driver twice.

Bad way:
  gcc
    collect2
      gcc      <-- there is the extra process
        lto1
        as
      ld

Good way:
  gcc
    lto1
    as
    collect2
      ld

(or else we have to teach collect2 how to invoke as directly, which just sounds painful).

> In any case, for now, I'm just trying to move forward, and the collect2
> route looks a bit easier. If you're concerned about that, then I'll
> take note to revisit and discuss before anything goes to mainline.

No worries on my part.

Ian
Re: Abt long long support
"Mohamed Shafi" <[EMAIL PROTECTED]> writes: > and one more thing. In the dumps i noticed that before using a > register in DI mode they are all clobbred first, like > > (insn 30 54 28 6 (clobber (reg:DI 34)) -1 (nil) > (nil)) > > What is the use of this insns ... Why do we need to clobber these > registers befor the use? After some pass they are not seen in the > dump. It's a hack to tell the flow pass that the register is not used before it is set. Otherwise when the code initializes half of the register, flow will think that the other half is live, perhaps having been initialized before the function started. The clobber tells flow that the register is completely dead, and is initialized one half at a time. Ian
Re: Canonical type nodes, or, comptypes considered harmful
Mike Stump <[EMAIL PROTECTED]> writes:

> On Nov 8, 2006, at 7:14 AM, Ian Lance Taylor wrote:
> > The way to canonicalize them is to have all equivalent types point to
> > a single canonical type for the equivalence set. The comparison is
> > one memory dereference and one pointer comparison, not the current
> > procedure of checking for structural equivalence.
>
> Once not equal addresses might mean equal types, you have to do a
> structure walk to compare types, and you're right back were we
> started. The only way to save yourself, is to be able to say,
> different addresses, _must_ be different types.

I have no idea what you mean by this. I meant something very simple: for every type, there is a TYPE_CANONICAL field. This is how you tell whether two types are equivalent:

  TYPE_CANONICAL (a) == TYPE_CANONICAL (b)

That is what I mean when I say one memory dereference and one pointer comparison.

> An example, are these two types the same:
>
>   *A
>   *B
>
> given that A and B are the same type. Your way, you need to walk two
> trees, hitting memory 40 times.

No. When you create *A, you also create * (TYPE_CANONICAL (A)) (this may be the same as *A, of course). You set TYPE_CANONICAL (*A) to that type. And the same for *B. Since TYPE_CANONICAL (A) == TYPE_CANONICAL (B) by assumption, you make sure that TYPE_CANONICAL (*A) == TYPE_CANONICAL (*B).

Ian
Re: [m32c-elf] losing track of register lifetime in combine?
DJ Delorie <[EMAIL PROTECTED]> writes: > I compared the generated code with an equivalent explicit test, > and discovered that gcc uses a separate rtx for the intermediate: > > i = 0xf; > if (j >= 16) > { > int i2; > i2 = i >> 8; > i = i2 >> 8; > j -= 16; > } > > This seems to avoid the combiner problem, becuase you don't have the > same register being set and being used in one insn. Does this explain > why combine was having a problem, or was this a legitimate thing to do > and the combiner is still wrong? Using a temp in the expander works > around the problem. Interesting. Using a temporary is the natural way to implement this code. But not using a temporary should be valid. So I think there is a bug in combine. But since using a temporary will give more CSE opportunities, I think you should use a temporary. And you shouldn't worry about fixing combine, since all that code is going to have to change on dataflow-branch anyhow. (Actually it probably just works on dataflow-branch.) Ian
Re: [m32c-elf] losing track of register lifetime in combine?
"Dave Korn" <[EMAIL PROTECTED]> writes: > On 10 November 2006 07:13, Ian Lance Taylor wrote: > > > DJ Delorie <[EMAIL PROTECTED]> writes: > > > >> I compared the generated code with an equivalent explicit test, > >> and discovered that gcc uses a separate rtx for the intermediate: > >> > >> i = 0xf; > >> if (j >= 16) > >> { > >> int i2; > >> i2 = i >> 8; > >> i = i2 >> 8; > >> j -= 16; > >> } > >> > >> This seems to avoid the combiner problem, becuase you don't have the > >> same register being set and being used in one insn. Does this explain > >> why combine was having a problem, or was this a legitimate thing to do > >> and the combiner is still wrong? Using a temp in the expander works > >> around the problem. > > > > Interesting. Using a temporary is the natural way to implement this > > code. But not using a temporary should be valid. So I think there is > > a bug in combine. > > > Doesn't this just suggest that there's a '+' constraint modifier missing > from an operand in a pattern in the md file somewhere, such as the one that > expands the conditional in the first place? Not necessarily. I would guess that it's a define_expand which generates a pseudo-register and uses it as (set (reg) (ashiftrt (reg) (const_int 8))) That is OK. In any case a '+' constraint doesn't make any difference this early in the RTL passes. combine doesn't look at constraints. Ian
Re: Abt RTL expression - combining instruction
"Rohit Arul Raj" <[EMAIL PROTECTED]> writes: > 1. Does attribute length affect the calculation of offset? It does if you tell it to. The "length" attribute must be managed entirely by your backend. Most backends with variable size branches use the length attribute to select which branch insn to generate. The usual pattern is to call get_attr_length and use that to pick the assembler instruction. For example, jump_compact in sh.md. Who wrote the backend that you are modifying? Why can't you ask them? Ian
Re: Abt long long support
"Mohamed Shafi" <[EMAIL PROTECTED]> writes: > (insn 94 91 95 6 (set (reg:SI 12 a4) > (mem/c:SI (reg:SI 12 a4) [0 D.1863+0 S4 A32])) 15 {movsi_load} (nil) > (nil)) > > (insn 95 94 31 6 (set (reg:SI 13 a5 [orig:12+4 ] [12]) > (mem/c:SI (plus:SI (reg:SI 12 a4) > (const_int 4 [0x4])) [0 D.1863+4 S4 A32])) 15 {movsi_load} > (nil) > (nil)) > I am not sure whether this is because of reload pass or global > register allocation. If those two instructions appear for the first time in the .greg dump file, then they have been created by reload. > 1. What could be the reason for this behavior? I'm really shooting in the dark here, but my guess is that you have a define_expand for movdi that is not reload safe. You can do this operation correctly, you just have to reverse the instructions: load a5 from (a4 + 4) before you load a4 from (a4). See, e.g., mips_split_64bit_move in mips.c and note the use of reg_overlap_mentioned_p. Ian
Re: Abt RTL expression
"Rohit Arul Raj" <[EMAIL PROTECTED]> writes:

> (insn 106 36 107 6 (set (reg:SI 13 a5)
>         (const_int -20 [0xffec])) 17 {movsi_short_const} (nil)
>     (nil))
>
> (insn 107 106 108 6 (parallel [
>             (set (reg:SI 13 a5)
>                 (plus:SI (reg:SI 13 a5)
>                     (reg/f:SI 14 a6)))
>             (clobber (reg:CC 21 cc))
>         ]) 29 {addsi3} (nil)
>     (expr_list:REG_EQUIV (plus:SI (reg/f:SI 14 a6)
>             (const_int -20 [0xffec]))
>         (nil)))
>
> (insn 108 107 38 6 (set (reg:SI 13 a5)
>         (mem/c:SI (reg:SI 13 a5) [0 S4 A32])) 15 {movsi_load} (nil)
>     (nil))
>
> My Deductions:
> 1. In insn 106, we are storing -16 in to the register 13 (a5).

Yes.

> 2. In insn 107, we are taking the value from register 14 (a6) which is
> a pointer and subtracting 16 from it and storing in a5.

Yes.

> Now a6 contains the stack pointer. Therefore a5 now contains SP-16.
>
> 3. In insn 108, we are storing the value pointed by the register a5 in to a5.

I would describe it as a load from memory, but, yes.

> Is my deduction for insn 108 right?
> If it is right, shouldn't the expression be like this:
> (mem/c:SI (reg/f:SI 13 a5) [0 S4 A32])) 15 {movsi_load} (nil)

Yes, probably it should. You neglected to say which dump you are looking at. REG_POINTER, which is the flag that generates the /f, is not reliable after reload.

Does it matter? In a memory load, the register has to hold a pointer value anyhow, so I don't see how it could matter for code generation. REG_POINTER exists because on the PA, addresses which use two registers need to know which one is the pointer and which is the offset, for some hideous reason which I hope I never learn. In a memory address with only one register, REG_POINTER doesn't seem like an interesting flag.

Ian
Re: Question on tree-nested.c:convert_nl_goto_reference
[EMAIL PROTECTED] (Richard Kenner) writes:

> I have a test case (involving lots of new code that's not checked in yet)
> that's blowing up with a nonlocal goto and I'm wondering how it ever worked
> because it certainly appears to me that DECL_CONTEXT has to be copied
> from label to new_label. But it isn't. So how are nonlocal gotos
> working?

I think they mostly work because the DECL_CONTEXT of a label isn't very important. As far as I know we only use it to make sure the label is emitted.

But I do get a failure in verify_flow_info with the appended test case. verify_flow_info is only used when checking is enabled, so maybe that is why people aren't seeing it? Maybe we just need to add this test case to the testsuite?

Ian

extern void abort (void);
extern void exit (int);

int
main ()
{
  int f1 ()
  {
    __label__ lab;
    int f2 () { goto lab; }
    return f2 () + f2 ();
   lab:
    return 2;
  }
  if (f1 () != 2)
    abort ();
  exit (0);
}
Re: Planned LTO driver work
Mark Mitchell <[EMAIL PROTECTED]> writes:

> Though, if we *are* doing the template-repository dance, we'll have to
> do that for a while, declare victory, then invoke the LTO front end,
> and, finally, the actual linker, which will be a bit complicated. It
> might be that we should move the invocation of the real linker back into
> gcc.c, so that collect2's job just becomes generating the right pile of
> object files via template instantiation and static
> constructor/destructor generation?

For most targets we don't need to invoke collect2 at all anyhow, unless the user is using -frepo. It's somewhat wasteful that we always run it. Moving the invocation of the linker into the gcc driver makes sense to me, especially if we can skip invoking collect2 entirely.

Note that on some targets, ones which do not use GNU ld, collect2 does provide the feature of demangling the ld error output. That facility would have to be moved into the gcc driver as well.

Ian
Re: expanding __attribute__((format,..))
"Nuno Lopes" <[EMAIL PROTECTED]> writes: > I've been thinking that it would be a good idea to extend the current > __attribute__((format,..)) to use an arbitrary user callback. > I searched the mailing list archives and I found some references to > similar ideas. So do you think this is feasible? I think it would be nice. We usually founder on trying to provide a facility which can replace the builtin printf support, since printf is very complicated. I kind of liked this idea: http://gcc.gnu.org/ml/gcc-patches/2005-07/msg00797.html but of course it was insane. And then there was this idea, which I think was almost workable: http://gcc.gnu.org/ml/gcc/2005-08/msg00469.html But nobody really liked it. So you need to find something which is on the one hand very simple and on the other hand able to support the complexity which people need in practice. Ian
Re: Question on tree-nested.c:convert_nl_goto_reference
[EMAIL PROTECTED] (Richard Kenner) writes: > > But I do get a failure in verify_flow_info with the appended test case. > > Indeed that's where I get the ICE. > > > verify_flow_info is only used when checking is enabled, so > > maybe that is why people aren't seeing it? > > But isn't that the default on the trunk? Yes. But it's not on releases, so non-developers are going to see it. And I can't find any C test cases which detect the problem. As far as I can tell, in C the problem will only arise when a nested function itself contains a nested function, and the inner nested function does a non-local goto to the outer nested function. That is, the test case I posted earlier is about as simple as it gets. I don't know whether there are any functions nested inside nested functions which do non-local gotos in the Ada testsuite. Ian
Re: strict aliasing question
Howard Chu <[EMAIL PROTECTED]> writes:

> Daniel Berlin wrote:
> > We ask the TBAA analyzer "can a store to a short * touch i?"
> > In this case, it says "no", because it's not legal.
>
> If you know the code is not legal, why don't you abort the compilation
> with an error code?

It's not actually that easy to detect the undefined cases. Sometimes it's easy, sure. But most times it is not. The compiler does not normally do the sort of analysis which is required.

That said, one of my co-workers has developed a patch which detects most aliasing violations, based on the compiler's existing alias analysis. It is able to give warnings for a wide range of cases which the compiler does not currently detect, for a relatively small increase in compilation time. If everything works out right, we'll propose it for gcc 4.3.

Ian
Re: strict aliasing question
Howard Chu <[EMAIL PROTECTED]> writes:

> Here's a different example, which produces the weaker warning
>
>     warning: type-punning to incomplete type might break strict-aliasing rules
>
> struct foo;
>
> int blah(int fd) {
>     int buf[BIG_ENOUGH];
>     void *v = buf;
>     struct foo *f;
>
>     f = v;
>     f = (struct foo *)buf;
>
>     init(f, fd);
>     munge(f);
>     flush(f);
> }
>
> "foo" is an opaque structure. We have no idea what's inside, we just
> know that it's relatively small. There are allocators available that
> will malloc them for us, but we don't want to use malloc here because
> it's too slow, so we want to reserve space for it on the stack, do a
> few things with it, then forget it.
>
> If we go through the temporary variable v, there's no warning. If we
> don't use the temporary variable, we get the "might break" message. In
> this case, nothing in our code will ever dereference the pointer. Why
> is there any problem here, considering that using the temporary
> variable accomplishes exactly the same thing, but requires two extra
> statements?

Since you don't do any loads or stores via buf, this code is going to be OK. The warning you get is not all that good, since it gives both false positives and (many) false negatives.

Your code will be safe on all counts if you change buf from int[] to char[]. The language standard grants a special exemption to char* pointers. Without that exemption, it would be impossible to write malloc in C.

Ian
Re: Threading the compiler
"Dave Korn" <[EMAIL PROTECTED]> writes: > > The main place where threading may make sense, especially > > with LTO, is the linker. This is a longer lived task, and > > is the last step of compilation, where no other parellel > > processes are active. Moreover, linking tends to be I/O > > intensive, so a number of threads will likely be blocked > > for I/O. > > I'm not really sure how this would play with SMP (as opposed to threading). > I don't see why you think threading could be particularly useful in the > linker? It's the pipeline of compiler optimisation passes that looks like an > obvious candidate for threading to me. It's irrelevant to the main discussion here, but in fact there is a fair amount of possible threading in the linker proper, quite apart from LTO. The linker spends a lot of time reading large files, and the I/O wait can be parallelized. And the linker spends a reasonable amount of time computing relocations, which can be parallelized such that the relocations for each input file are computed independently. Ian
Re: Threading the compiler
"Dave Korn" <[EMAIL PROTECTED]> writes: > > It's irrelevant to the main discussion here, but in fact there is a > > fair amount of possible threading in the linker proper, quite apart > > from LTO. The linker spends a lot of time reading large files, and > > the I/O wait can be parallelized. > > That's not even thread-worthy, that just required bog-standard AIO > techniques. Um, /are/ there suitably portable AIO techniques? I guess the > answer is going to be "Yeah, spawn a posix thread and make an ordinary > synchronous f* io call in there"! Heh. There is a POSIX standard: aio_read and friends. However, on GNU/Linux, aio_read just does the read in a different thread anyhow. So aio_read and friends are just a slightly simpler threading interface. Ian
Re: [M32C-ELF] Correct way of setting reset and interrupt vectors
Florian Pose <[EMAIL PROTECTED]> writes: > 1) What is the correct way to set the reset vector? Wrong mailing list. This mailing list is for development of the gcc compiler itself. You could try the [EMAIL PROTECTED] mailing list, but I think you'll have better luck if you can find some forum for M32C programming. Ian
Re: GCC_4.2: libstdc++-v3/config/ missing files (linker-map.gnu)
"Hector Oron" <[EMAIL PROTECTED]> writes: > When cross compiling GCC version 4.2 (Debian way). I'm missing, > libstdc++-v3/config/linker-map.gnu > > Are those moved somewhere else? I can not find any changelog or > something telling about it. > > There is a bug thread at Debian bug tracking system[1]. > > Should it be filed a bug against GCC-4.2 ? In what sense is the file missing? That file did exist in 4.1, but after the 4.1 release it moved to libstd++-v3/config/abi/pre/gnu.ver See http://gcc.gnu.org/ml/gcc-patches/2005-12/msg01377.html But all the references were updated, so why does it matter? Ian
Re: gpl version 3 and gcc
"Ed S. Peschko" <[EMAIL PROTECTED]> writes: > And in any case, why should it be off-topic? I would think that > the possibility of your project being divided in two would be of > great concern to you guys, and that you'd have every single motivation to > convey any sort of apprehension that you might have about such a split > to the group that could prevent it. After all - lots of you are putting > a great effort into GNU software basically gratis... (I'll post on this once, even though it is off-topic. I apologize if this seems excessively inappropriate.) None of us think that our project is going to be divided in two. 1) The license of gcc does not carry over to the license of code compiled with gcc. gcc has been used for many years to compile proprietary code which runs on proprietary systems. It follows that there is absolutely nothing wrong with using a GPLv3 gcc to compile GPLv2 code on GPLv2 systems. 2) Every person who has contributed a patch of any significance at all to gcc has signed a paper granting the FSF the rights to the code, including the right to release the code under any free software license. It follows that people who are vitally concerned about the possibility of a license change to gcc within the bounds of free software are not, in general, contributors to gcc. I appreciate your need to raise the alarm about GPLv3. But I don't think that gcc is a useful area. gcc is and always been owned by the Free Software Foundation. That fact comes with certain implications, including the prospect of future changes to the GPL (gcc in fact already went through the GPLv1 to GPLv2 change, not that that was a big deal). Contributors to gcc already faced these issues long ago. Ian
Re: gcc3.4.6: std::min and std::max
"BG / Galaxy" <[EMAIL PROTECTED]> writes: > I recently upgraded my system from Solaris2.5.1/gcc3.3.2 to > Solaris7/gcc3.4.6. > When recompiling my project, I often get errors in gcc files complaining > about lines using std::min or std::max, > ie: ... > /usr/local/lib/gcc/sparc-sun-solaris2.7/3.4.6/../../../../include/c++/3.4.6/ > bits/stl_bvector.h:522: error: expected unqualified-id before '(' token Wrong mailing list. Use [EMAIL PROTECTED] for these sorts of questions. The most likely explanation is that some header file you include before is doing #define max ... and thus breaking the use of std::max. Please follow up to gcc-help. Thanks. Ian
Re: Bug in multiple register reload inheritance?
Rask Ingemann Lambertsen <[EMAIL PROTECTED]> writes:

> Here something has gone wrong, and the parameters to
> subreg_regno_offset() are invalid:
>
> (gdb) frame 1
> #1  0x08504786 in subreg_regno_offset (xregno=9, xmode=HImode, offset=2,
>     ymode=HImode) at rtlanal.c:3017
>
> If I take out the lines 5643 and 5644
>
>     if (regno < FIRST_PSEUDO_REGISTER)
>       regno = subreg_regno (rld[r].in_reg);
>
> it will instead get regno = 10, mode = SImode and last_reg = (reg:SI 8 si)
> and call subreg_regno_offset (xregno=8, xmode=SImode, offset=2, ymode=HImode),
> which is fine and returns 1.

I have to agree that this looks rather dubious. It seems to me that when we increment regno in the lines above, we need to reset byte to 0. This bug, if it is a bug, has been there since this code was introduced:

Fri Oct 16 20:40:50 1998  J"orn Rennecke  <[EMAIL PROTECTED]>

It seems possible that it could be a bug, since a SUBREG of a hard register is unusual.

Ian
Re: Objects in c++ and objc
Come Lonfils <[EMAIL PROTECTED]> writes:

> I need precise information about how GCC stores objects for C++
> and Objective-C. I'm trying to find out what the differences between
> the two are, and I need a very precise description of both. Do you know
> where I can find this information (other than in gccint), in books or
> elsewhere? Maybe this is not the best place to ask, but I can find this
> information nowhere.

For C++, you can find the precise description here: http://codesourcery.com/cxx-abi/ It says it is for the Itanium, but gcc uses it for everything.

For Objective C, I don't know.

Ian
Re: PPC440, GCC-4.1.1 supposes cr{2,3,4} saved but the hard real time kernel doesn't...
Etienne Lorrain <[EMAIL PROTECTED]> writes: > My problem is quite simple, the PPC has few conditions registers and some are > assumed to be saved over function calls (in my test case NU_Sleep()), but the > hard real time kernel do not save those (partial flags) registers. > This behaviour is perfectly documented in gcc-4.1.1/gcc/config/rs6000 line > 709, but is there a simple solution (maybe involving recompiling the > compiler) to force re-testing values after function calls? > I do remember the i386 option -mreg-alloc="dacbSDB", but it does not seem to > be supported for PPC; the best would be a compiler option to say "full CC > register clobber by function call". You want the -fcall-used-REG option. See the documentation. Ian
Re: Queries regarding calls to divmod assembly functions
Daniel Towner <[EMAIL PROTECTED]> writes: > This email and any files transmitted with it are confidential and intended > solely for the use of the individuals to whom they are addressed. If you have > received this email in error please notify the sender and delete the message > from your system immediately. I started to reply, then I noticed this disclaimer. Please do not send e-mail with these disclaimers to any mailing lists hosted at gcc.gnu.org. We are unable to comply with these terms. If you are unable to change your mailer, then please obtain a free hosted e-mail account, for example at gmail.com. Thanks. This is noted at http://gcc.gnu.org/lists.html. Ian
Re: Queries regarding calls to divmod assembly functions (sans disclaimer)
Daniel Towner <[EMAIL PROTECTED]> writes:

> Initially, I tried to do as the manual suggested, and omit any patterns
> for div/mod, to force gcc to use divmod instead, and set up the divmod
> optab to call a named assembly function. However, div would still call a
> div library, instead of the divmod library. So I set the div optab
> library function to null. Unfortunately this didn't appear to work;
> while the absence of any instruction pattern or library function works
> for mod (it generates a call to the appropriate library), the div
> expansion (in expand_divmod) seems to fail if no library or no
> instruction pattern is available, and leaves the function without trying
> a divmod library call. This results in the last part of expand_divmod
> trying to call gen_lowpart on a NULL_RTX. Is that correct? It seems easy
> to add a call to a divmod library as a last resort, but will that
> adversely affect other ports?

The missing call to a divmod libfunc does seem like a bug. I don't see how it could be a bad idea for other ports, since, as you say, the compiler simply fails in that case.

> Next I tried to make the divmodM4 patterns generate a call directly to
> my divmod function by having a define_expand which invokes
> expand_twoval_binop_libfunc. This means that when the expand_divmod
> function is run, it finds the divmod instruction, which it then
> indirectly uses to generate the divmod library call. This appears to
> work as I expect. Are there any gotchas I should be aware of in doing this?

I don't think you can always safely call expand_twoval_binop_libfunc from the expander. That probably won't do the right thing if some later part of the compiler decides to generate a divmod insn. I think the only safe thing you can do is to set up the function arguments and do the call yourself.
That should be easy in a define_expand if the parameters can be passed in registers--and that might be a good idea for the divmod function even if it is not the normal calling convention.

> Now I have two final queries with this scheme. Firstly, if the source
> code contains a div and a mod in quick succession, then I end up with
> two calls to my divmod routine, although the instruction sequences to
> set up the two calls are identical. Are there any tweaks I can apply to
> allow the result of the first call to be reused, rather than calling the
> function again? Secondly, the SI mode version of the function returns
> the two data values by passing in an address to which to write the
> values. I can make the functions smaller and faster by changing the
> calling convention for just that function (i.e., return in registers
> instead), but how should I go about fixing the gcc end to call the
> functions differently? Should I explicitly emit the instructions to set
> up the call in the divmodsi4 define_expand, or should I change the ABI
> to be different for that one function, if possible?

Change the define_expand. This would be a good reason to use a define_expand rather than a libfunc.

I would like to think that gcc would be able to eliminate a pair of div/mod calls. The way you are doing it may require emitting some explicit REG_EQUAL notes for both results of the function call.

Ian
Re: Re : PPC440, GCC-4.1.1 supposes cr{2,3,4} saved but the hard real time kernel doesn't...
Etienne Lorrain <[EMAIL PROTECTED]> writes:

> powerpc-eabi-gcc -Wall -W -O2 -g -fno-strict-aliasing -ffunction-sections
> -fdata-sections -fno-schedule-insns -std=gnu99 -fcall-used-cr2
> -fcall-used-cr3 -fcall-used-cr4 -Xassembler -mregnames ... *.c
>
> ../net/src/net_dbg.c:668: error: Attempt to delete prologue/epilogue insn:
> (insn 238 237 239 22 ../net/src/net_dbg.c:668 (set (reg:SI 12 12)
>         (mem:SI (plus:SI (reg/f:SI 1 1)
>                 (const_int 12 [0xc])) [0 S4 A8])) -1 (nil)
>     (nil))
> ../net/src/net_dbg.c:668: internal compiler error: in propagate_one_insn, at
> flow.c:1699
> Please submit a full bug report,
> with preprocessed source if appropriate.
> See <http://gcc.gnu.org/bugs.html> for instructions.

Looks like the code in rs6000_stack_info in config/rs6000/rs6000.c does not consider the possibility of the -fcall-used option. It checks regs_ever_live without checking call_used_regs. I don't have a solution to suggest, other than patching the compiler.

Ian
Re: [PATCH] Canonical types (1/3)
"Steven Bosscher" <[EMAIL PROTECTED]> writes: > On 11/28/06, Doug Gregor <[EMAIL PROTECTED]> wrote: > > * tree.h (TYPE_CANONICAL): New. > > (TYPE_STRUCTURAL_EQUALITY): New. > > (struct tree_type): Added structural_equality, unused_bits, > > canonical fields. > > If I understand your patches correctly, this stuff is only needed for > the C-family languages. So why steal two pointers on the generic > struct tree_type? Are you planning to make all front ends use these > fields, or is it just additional bloat for e.g. Ada, Fortran, Java? > ;-) It seems to me that all the frontends should use these fields. Hopefully we can use it as a lever to eliminate the types_compatible_p langhook. Ian
Re: Aliasing: reliable code or not?
Andrew Pinski <[EMAIL PROTECTED]> writes:

> Here is how I would write it so you can get the best results:
>
> char *foo(char *buf)
> {
>     short temp;
>     int temp1;
>
>     *buf++ = 42;
>     temp = 0xfeed;
>     memcpy(buf, &temp, sizeof(temp));
>     buf += sizeof(temp);
>     temp1 = 0x12345678;
>     memcpy(buf, &temp1, sizeof(temp1));
>     buf += sizeof(temp1);
>     temp1 = 0x12345678;
>     memcpy(buf, &temp1, sizeof(temp1));
>     buf += sizeof(temp1);
>     return buf;
> }

Or you can use constructor expressions to make this slightly more elegant, though I retain the assumptions about type sizes:

char *foo1(char *buf)
{
    memcpy(buf, (char[]) { 42 }, 1);
    buf += 1;
    memcpy(buf, (short[]) { 0xfeed }, 2);
    buf += 2;
    memcpy(buf, (int[]) { 0x12345678 }, 4);
    buf += 4;
    memcpy(buf, (int[]) { 0x12345678 }, 4);
    buf += 4;
    return buf;
}

Ian
Re: Finding canonical names of systems
"Ulf Magnusson" <[EMAIL PROTECTED]> writes: > I'd be happy to contribute some documentation on this. I just hope I > have a firm enough grip on the issue. Where should I send drafts for > review? Is there some other resource I should be aware of besides > http://gcc.gnu.org/contribute.html? Thanks for offering. Patches should be sent to [EMAIL PROTECTED] There is nothing special to be aware of for documentation patches. You just need to make sure that they pass texinfo. Ian
Re: Announce: MPFR 2.2.1 is released
Basile STARYNKEVITCH <[EMAIL PROTECTED]> writes:

> I'm not sure I follow Diego, and I am a bit concerned about other
> potential external libraries. Suppose for example that some GCC code
> uses an external library like the Parma Polyhedra Library,
> http://www.cs.unipr.it/ppl/ (which is very useful for sophisticated
> static analysis), or the Libtool Dynamic Loader, which might be useful
> also, for example if the compiler needs to generate some specialized
> (w.r.t. the compiled source code) code. Maybe my position is
> unusual, but I believe that more and more external libraries or
> software will become required or useful to GCC.

If we can link against fixed, stable, distributed versions of the supporting libraries, then I think I agree with you. Or, alternatively, if the supporting libraries are optional.

The problem we're seeing with MPFR is that we seem to require a changing, undistributed version. I don't have a problem with requiring changing versions of tools which are only needed by developers, such as autoconf. I think it is problematic to require shifting versions of tools which are needed by everybody who builds gcc, though of course I agree that is a much smaller set of people than the set of people who use gcc.

This all may just be a shakedown problem with MPFR, and maybe it will stabilize shortly. But it's disturbing that after one undistributed version became a requirement, we then very shortly stepped up to a new undistributed version. I think it should be obvious that if we require an external library which is not in the tree, we should not be at the bleeding edge of that library.

I also want to note that using external libraries is going to give us problems reproducing bugs, when different users/distributors use different versions of those libraries. At the very least we need to report the version number of each external library in gcc --version.

Ian
Re: DBX format support
"RAHUL V R" <[EMAIL PROTECTED]> writes: > I am working on adding a new data type in gcc under C. > > Please tell me, if I don't want to use the debugging info/format in > DBX, but still I want to build gcc in cygwin what changes should be > made on dbxout.c? > Is it compulsory that I have to provide support in dbxout.c? Changing dbxout.c is only compulsory if you want to support -gstabs. If you want to contribute your changes back (from your brief description, that would not be an easy sell), then you need to make sure that dbxout.c doesn't crash, but you don't necessarily need to put full support there. Ian
Re: Announce: MPFR 2.2.1 is released
Paul Brook <[EMAIL PROTECTED]> writes: > > This all may just be a shakedown problem with MPFR, and maybe it will > > stabilize shortly. But it's disturbing that after one undistributed > > version became a requirement, we then very shortly stepped up to a new > > undistributed version. I think it should be obvious that if we > > require an external library which is not in the tree, we should not be > > at the bleeding edge of that library. > > I thought we were going from an unreleased version to the subsequent release. As far as I know both versions are released. What I said was "undistributed," by which I mean: the required version of MPFR is not on my relatively up to date Fedora system. Ian
Re: Implementation of C++ N2053?
Beman Dawes <[EMAIL PROTECTED]> writes:

> I've proposed adding raw string literals to C++. See
> http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n2053.html

Interesting idea. I think there is a misspelling of std::ispunct in there.

> So far, the changes to accommodate raw string literals have only
> touched libcpp/charset.c and libcpp/lex.c, particularly lex.c's
> lex_string() function. My initial thought is to also handle embedded
> newlines within lex_string(), but before attempting that approach I'd
> like advice from GCC maintainers familiar with GCC's lexical
> processing.
>
> Who maintains libcpp/lex.c? What is the best way to proceed?

You can find maintainers of any part of gcc by looking in the file MAINTAINERS (look for "cpplib" in this case). It is also often helpful to look at the appropriate ChangeLog file to see who has changed the code.

From a lexing perspective, these raw string literals are more akin to /**/ comments than they are to strings. Personally, I would recommend writing a function along the lines of _cpp_skip_block_comment, and calling that either from lex_string or _cpp_lex_direct.

Ian
Re: DBX format support
"RAHUL V R" <[EMAIL PROTECTED]> writes: > > On 05 Dec 2006 07:05:33 -0800, Ian Lance Taylor <[EMAIL PROTECTED]> wrote: > > "RAHUL V R" <[EMAIL PROTECTED]> writes: > > > I am working on adding a new data type in gcc under C. > > > Please tell me, if I don't want to use the debugging info/format in > > > DBX, but still I want to build gcc in cygwin what changes should be made > > > on > > > dbxout.c? Is it compulsory that I have to provide support in dbxout.c? > > Changing dbxout.c is only compulsory if you want to support -gstabs. > > Please refer the following test program: > " > int main(){ >int j=10; >return 0;} > " > > When the above program was compiled with 'gstabs' [using gcc 4.1.1], > I got assembly code with debugging info (for DBX) like: Why are you compiling with -gstabs? Why not use DWARF? > .stabs "int:t(0,1)=r(0,1);-2147483648;2147483647;",128,0,0,0 > .stabs "char:t(0,2)=r(0,2);0;255;",128,0,0,0 > .stabs "long int:t(0,3)=r(0,3);-2147483648;2147483647;",128,0,0,0 > : > : > .stabs "float:t(0,12)=r(0,1);4;0;",128,0,0,0 > .stabs "double:t(0,13)=r(0,1);8;0;",128,0,0,0 > .stabs "long double:t(0,14)=r(0,1);8;0;",128,0,0,0 > : > : > " > Why is it' ."float:t(0,12)=r(0,1) ..", > ."double:t(0,13)=r(0,1) .." ' ? http://sources.redhat.com/gdb/current/onlinedocs/stabs_5.html#SEC33 > (Why float(12), double(13), long double(14) is equated to int type(1)?) > Or why is it not ' ."float:t(0,12)=r(0,12) ..", > ..."double:t(0,13)=r(0,13) ." ..' ? It's convention. Builtin types can always be described as subranges of 'int'. > What should be done in case if a new data type is added? > For example what additions/changes will be made in the above function > for Decimal Float types and Fixed Point Types in gcc 4.3/4.4? First you have to define what those types should look like in stabs format. Then you have to change gcc to generate that format, and change gdb to read that format. 
I know this is not helpful, but unfortunately stabs doesn't have any way to define a type like decimal float or fixed point. You are going to have to decide what it should do. One possibility would be to invent a new floating point type, and use something like "R7;8;".

> > If you want to contribute your changes back (from your brief description,
> > that would not be an easy sell), then you need to make sure that dbxout.c
> > doesn't crash, but you don't necessarily need to put full support there.
>
> Please explain what you mean by "not giving full support".

I was assuming that you didn't care about -gstabs. Most people don't use it these days.

Ian
Re: 32 bit jump instruction.
David Daney <[EMAIL PROTECTED]> writes:

> > I am working on a private target where jump instruction patterns are
> > similar to this:
> >
> > jmp <24 bit offset>
> > jmp for 32 bit offsets
> >
> > If my offset is greater than 24 bits, then I have to move the offset
> > to an address register. But inside the branch instruction (in the md
> > file), I am not able to generate a pseudo register because the
> > condition check for "no_new_pseudos" fails.
> >
> > Can any one suggest a way to overcome this?
>
> This is similar to how the MIPS works. Perhaps looking at its
> implementation would be useful.

MIPS simply reserves a register, $1. $1 is by convention reserved for use in assembler code. gcc uses it for a long branch.

If you can't afford to lose a register, then I think your only option is to pick some callee-saved register and have each branch instruction explicitly clobber it. Then it will be available for use in a long branch, and it will be available for use within a basic block. This is far from ideal, but I don't know a better way to handle it within gcc's current framework.

Ian
Re: Understanding some EXPR nodes.
Brendon Costa <[EMAIL PROTECTED]> writes:

> The nodes that have me a little confused are:
>
> TRY_CATCH_EXPR
> TRY_FINALLY_EXPR
> MUST_NOT_THROW_EXPR
> EH_FILTER_EXPR

Yes, those are a bit mysterious.

> TRY_CATCH_EXPR/TRY_FINALLY_EXPR
> When code generated from these nodes encounters an exception while
> processing code from operand 0, is there an implicit rethrow of that
> exception at the end of the block of code given by operand 1, or does it
> "sink" the exception and only rethrow it if the user specifically
> requests it (in C++, anyway)?

If operand 0 throws an exception, there is an implicit rethrow after executing operand 1. (Of course, operand 1 can prevent that rethrow by doing its own throw, or by calling a function which does not return, etc.)

> In what situations are these nodes generated?

TRY_CATCH_EXPR is generated for C++

    try { ... } catch (...) { }

TRY_FINALLY_EXPR is generated for

    class C { ~C(); }; ... { C c; ... }

to ensure that C's destructor is run. And of course they can be generated for other languages as well.

> MUST_NOT_THROW_EXPR
> What sort of code produces one of these nodes? They do not seem to be
> used for the throw() specifiers for a function (at least in C++) as I
> would have expected.

MUST_NOT_THROW_EXPR is a C++ specific code. It is used for a few places which must not throw, like the copy constructor of a catch parameter. It is eliminated during gimplification. throw() is handled via EH_SPEC_BLOCK, which is another C++ specific code eliminated during gimplification.

> EH_FILTER_EXPR
> In what situations are these nodes generated?

EH_FILTER_EXPR is generated when EH_SPEC_BLOCK and MUST_NOT_THROW_EXPR are gimplified.
> I assume that the code that these filters apply to is external to the
> node, and if an exception occurs in this external code that does not
> match any of the types in the EH_FILTER_TYPES list (do they have to be
> exact matches/how is type matching done here?) then it calls the
> EH_FILTER_FAILURE, which could be, for example, a call to terminate()?

EH_FILTER_EXPR will be operand 2 of a TRY_CATCH_EXPR. When the body (operand 1) of the TRY_CATCH_EXPR throws an exception, then if operand 2 is EH_FILTER_EXPR, the exception is matched against EH_FILTER_TYPES. If the exception does not match, then EH_FILTER_FAILURE is called. EH_FILTER_FAILURE is normally a call to terminate() or unexpected().

> How does the EH_FILTER_MUST_NOT_THROW() macro work?

It generates an exception region such that if an exception is seen there, terminate is called. (At least, I think that is how it works.)

> If it returns true then the filter allows NO exceptions, and if false
> then it allows only exceptions of the types that are in this list?

Yes.

> Is it possible for the EH_FILTER_TYPES list to be empty and
> EH_FILTER_MUST_NOT_THROW() to return false?

Perhaps. The two cases are essentially equivalent.

Ian
Re: Understanding some EXPR nodes.
Brendon Costa <[EMAIL PROTECTED]> writes:

> > If operand 0 throws an exception, there is an implicit rethrow after
> > executing operand 1. (Of course, operand 1 can prevent that rethrow
> > by doing its own throw, or by calling a function which does not
> > return, etc.)
> >
> > TRY_CATCH_EXPR is generated for C++
> >     try { ... } catch (...) { }
> >
> > TRY_FINALLY_EXPR is generated for
> >     class C { ~C(); }; ... { C c; ... }
> > to ensure that C's destructor is run.
> >
> > And of course they can be generated for other languages as well.
>
> For the C++ code shown above, try { ... } catch(...) {}, from memory I
> get a TRY_BLOCK node and not a TRY_CATCH_EXPR.

TRY_BLOCK is a C++ language specific code. It is converted to TRY_CATCH_EXPR during gimplification. See genericize_try_block.

> Also, is the implicit rethrow just for the TRY_FINALLY_EXPR and not for
> the TRY_CATCH_EXPR, or is it for both of them?

It's for both of them. If the body of a TRY_FINALLY_EXPR throws an exception, then the finally clause, operand 1, will be executed. If the finally clause completes normally, then the exception will be rethrown.

To be clear, if the body of a TRY_FINALLY_EXPR completes normally, or executes a return statement, then the finally clause won't throw an exception; it will continue with the next statement or do a return. In other words, the finally clause is run, and then whatever else was going to happen happens.

And, to be clear, if the finally clause throws an exception, then that is the exception that will be thrown, and the original exception will not be rethrown and will, in fact, be ignored.

Ian
Re: Adding New Function Attributes
"Rohit Arul Raj" <[EMAIL PROTECTED]> writes:

> 1. Do I have to modify the GCC source base, like adding a new flag in
> tree_function_decl (tree.h) and adding a new handler to set the flag
> in c-common.h, or can I do it from the backend itself?

Do it in the backend.  See TARGET_ATTRIBUTE_TABLE and friends.  There
are examples in several backends, including, e.g., i386.

> 2. Any documentation regarding adding new function attributes?

Yes, some, in the internals manual around TARGET_ATTRIBUTE_TABLE.

> 3. If I have to emit an assembler directive based on the flag status,
> where should I look?

I don't understand the question.

> 4. If I specify the attribute at the time of function declaration and
> I need to update the same flag while defining the function, where
> should I look to change?

I suppose you would store something in the machine_function struct for
that function.

Ian
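For orientation, a hedged sketch of what a backend attribute table
looked like in GCC of that era.  The names "my_attr" and
my_attr_handler are made up for illustration, and the field layout of
struct attribute_spec varies between GCC versions, so check the
internals manual for the version being ported:

```c
/* Sketch only: this fragment belongs inside a backend source file
   (e.g. <target>.c) and is not compilable on its own.  */

static tree
my_attr_handler (tree *node, tree name, tree args,
                 int flags, bool *no_add_attrs)
{
  /* Validate the attribute here; set *no_add_attrs on error.  */
  return NULL_TREE;
}

static const struct attribute_spec my_attribute_table[] =
{
  /* { name, min_len, max_len, decl_req, type_req, fn_type_req, handler } */
  { "my_attr", 0, 0, true,  false, false, my_attr_handler },
  { NULL,      0, 0, false, false, false, NULL }
};

#undef TARGET_ATTRIBUTE_TABLE
#define TARGET_ATTRIBUTE_TABLE my_attribute_table
```

Code elsewhere in the backend can then test for the attribute on a
FUNCTION_DECL with lookup_attribute ("my_attr", DECL_ATTRIBUTES (decl)).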
Re: Unwinding CFI gcc practice of assumed `same value' regs
Andrew Haley <[EMAIL PROTECTED]> writes:

> In practice, %ebp either points to a call frame -- not necessarily
> the most recent one -- or is null.  I don't think that having an
> optional frame pointer means you can use %ebp for anything random at
> all, but we need to make a clarification request of the ABI.

I don't see that as feasible.  If %ebp/%rbp may be used as a general
callee-saved register, then it can hold any value.  And permitting
%ebp/%rbp to hold any value is a very useful optimization in a
function which does not require a frame pointer, since it gives the
compiler an extra register to use.

If you want to require %ebp/%rbp to hold a non-zero value, then you
are effectively saying that this optimization is forbidden.  There is
no meaningful way to tell gcc "this is a general register, but you may
not store zero in it."

It would be a poor tradeoff to forbid that optimization in order to
provide better support for exception handling: exception handling is
supposed to be unusual.

Ian
Re: 32 bit jump instruction.
"Rohit Arul Raj" <[EMAIL PROTECTED]> writes:

> > If you can't afford to lose a register, then I think your only option
> > is to pick some callee-saved register and have each branch instruction
> > explicitly clobber it.  Then it will be available for use in a long
> > branch, and it will be available for use within a basic block.  This
> > is far from ideal, but I don't know a better way to handle it within
> > gcc's current framework.
>
> Can I get more clarity on this part?  Is it implemented in any other
> backends?

Not to my knowledge.

> When you say "pick some callee-saved register", is it to pick them
> randomly from an available set in CALL_USED_REGISTERS, or a specific
> register?

Well, it would be a lot easier if you use a specific register.  Then
you can just add a CLOBBER to the branch pattern in the MD file.
Assuming your callee-saved registers are more or less equivalent,
there wouldn't be any advantage to letting the compiler choose one.

Ian
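A hedged sketch of the CLOBBER approach in machine-description RTL.
The register number and the output routine name are illustrative, not
from any real backend:

```
;; Hypothetical MD pattern: every branch clobbers a fixed callee-saved
;; scratch register (here hard register 10), so that register is
;; always dead at a branch and can be used to synthesize a long jump.
(define_insn "jump"
  [(set (pc) (label_ref (match_operand 0 "" "")))
   (clobber (reg:SI 10))]
  ""
  "* return output_long_branch (insn, operands);")
```

The cost, as noted above, is that register 10 can never stay live
across any branch, i.e. across basic-block boundaries.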
Re: g++ doesn't unroll a loop it should unroll
Benoît Jacob <[EMAIL PROTECTED]> writes:

> I'm developing a Free C++ template library (1) in which it is very
> important that certain loops get unrolled, but at the same time I
> can't unroll them by hand, because they depend on template
> parameters.
>
> My problem is that G++ 4.1.1 (Gentoo) doesn't unroll these loops.
>
> I have written a standalone simple program showing this problem; I
> attach it (toto.cpp) and I also paste it below.  This program does a
> loop if UNROLL is not defined, and does the same thing but with the
> loop unrolled by hand if UNROLL is defined.  So one would expect
> that with g++ -O3, the speed would be the same in both cases.  Alas,
> it's not:

When I try it, gcc does unroll the loops.  It completely unrolls the
inner loop, but only partially unrolls the outer loop.

The reason it doesn't completely unroll the outer loop is simply that
gcc doesn't attempt to completely unroll loops which contain inner
loops.  This could probably be fixed: we could probably completely
unroll a loop if all its inner loops were completely unrolled.  I
encourage you to file a bug report.  See http://gcc.gnu.org/bugs.html.

Ian
Re: configuration options policy (at toplevel or only inside gcc/)?
Basile STARYNKEVITCH <[EMAIL PROTECTED]> writes:

> This makes life much simpler to me, but then I do not understand how
> end-users compiling GCC are expected to configure it.  Does this
> mean that the instructions on
> http://gcc.gnu.org/install/configure.html are no longer valid for
> that case?

Note that configure options beginning with --with and --enable are
passed from the top level configure script to the subdirectory
configure scripts.  So the user just uses all the options at the top
level, and the subdirectories will see them.

I agree that new options should only be added at the appropriate
level, but there is one disadvantage: top level configure --help will
not display them.  But then configure --help is kind of useless anyhow
since it has so much boilerplate, so this is not a significant
problem.

> At last, I do not understand why the MPFR & GMP stuff which has been
> discussed a lot is not already under the above scheme.  Why is it
> checked at toplevel and not only in gcc/?  AFAIK the #include
> appears only in gcc/real.h.

It's at the top level because the original implementation envisioned
support for putting MPFR and GMP in the tree, alongside of the
directories gcc, libcpp, etc.  That may still happen.

Ian
Re: configuration options policy (at toplevel or only inside gcc/)?
Basile STARYNKEVITCH <[EMAIL PROTECTED]> writes:

> Le Thu, Dec 14, 2006 at 07:29:19AM -0800, Ian Lance Taylor wrote:
>
> > Note that configure options beginning with --with and --enable are
> > passed from the top level configure script to the subdirectory
> > configure scripts.  So the user just uses all the options at the top
> > level, and the subdirectories will see them.
>
> I did notice this, but it seems to me (maybe I am wrong) that there
> is no generic machinery which passes the --with & --enable of the
> top-level configure to the configure in the gcc subdirectory.  There
> is some code in Makefile.tpl for this, but each such option has to
> be explicitly & individually processed.

I just ran the top level configure script with --with-foobar, and then
did "make configure-gcc".  When I look in gcc/config.status I do see
--with-foobar in there.  So I think it does work as I expected.  I
haven't bothered to dig into exactly how it works.

> > I agree that new options should only be added at the appropriate
> > level, but there is one disadvantage: top level configure --help will
> > not display them.  But then configure --help is kind of useless anyhow
> > since it has so much boilerplate, so this is not a significant
> > problem.
>
> Still, what is the build procedure then?  Do we expect users to type
> two configure commands?

No, definitely not.  That would not be acceptable.

> > > At last, I do not understand why the MPFR & GMP stuff which has
> > > been discussed a lot is not already under the above scheme.  Why
> > > is it checked at toplevel and not only in gcc/?  AFAIK the
> > > #include appears only in gcc/real.h.
> >
> > It's at the top level because the original implementation envisioned
> > support for putting MPFR and GMP in the tree, alongside of the
> > directories gcc, libcpp, etc.  That may still happen.
>
> Thanks for the explanation.  But I thought it had been firmly decided
> to keep GMP & MPFR outside!
I don't think it has been firmly decided not to permit the user to
unpack MPFR and GMP next to gcc, and have that work -- the traditional
one-tree build.

Ian
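The experiment Ian describes can be reproduced from a gcc source tree
roughly as follows (--with-foobar is the dummy option name from the
thread, not a real gcc option; paths are illustrative):

```shell
# Options given at the top level propagate to subdirectory configures.
mkdir objdir && cd objdir
../gcc/configure --with-foobar
make configure-gcc

# The dummy option should appear in the subdirectory's config.status:
grep -- '--with-foobar' gcc/config.status
```

So a single top-level configure invocation is all the user ever types;
the subdirectories pick up the --with/--enable options automatically.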
Re: Defining cmd line symbolic literals
"Mohamed Shafi" <[EMAIL PROTECTED]> writes:

> I am building a GCC compiler.  I have some ifdef checks in the
> compiler source code.  If I define a symbolic literal on the command
> line while compiling a sample program, I want that set of statements
> to be invoked after the ifdef checks.
>
> e.g.
>
> GCC source:
>
>   #ifdef SHAFI_DEBUG
>   printf("\n Shafi Debugging!!\n");
>   #endif
>
> Compiling 1.c:
>
>   gcc -DSHAFI_DEBUG 1.c
>
> Is there any way to do this?

You need to add a new option to the compiler.  If you think about how
the preprocessor works, you will see that your suggestion of using -D
can not possibly work.

Ian
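To spell out why -D cannot work here: the macro is resolved when each
translation unit is preprocessed, so `gcc -DSHAFI_DEBUG 1.c` defines
SHAFI_DEBUG only for 1.c's own preprocessing; it cannot reach the
#ifdef in gcc's sources, which were compiled long before.  A sketch of
the two distinct situations (the make invocation is illustrative):

```shell
# 1. Enables the macro in the *user program* 1.c only:
gcc -DSHAFI_DEBUG 1.c

# 2. To enable the #ifdef block inside gcc's own sources, gcc itself
#    must be rebuilt with the macro defined, e.g.:
make CFLAGS='-g -O2 -DSHAFI_DEBUG'
```

Hence Ian's answer: to toggle behavior of an already-built compiler at
run time, you must add a real command-line option to gcc and test it
in the code, rather than using an #ifdef.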
Re: GCC optimizes integer overflow: bug or feature?
Paul Eggert <[EMAIL PROTECTED]> writes:

> What worries me is code like this (taken from GNU expr; the vars are
> long long int):
>
>   val = l->u.i * r->u.i;
>   if (! (l->u.i == 0 || r->u.i == 0
>          || ((val < 0) == ((l->u.i < 0) ^ (r->u.i < 0))
>              && val / l->u.i == r->u.i)))
>     integer_overflow ('*');
>
> This breaks if signed integer overflow has undefined behavior.

For what it's worth, this code works as you expect with current
mainline: gcc does not remove the overflow test.  (As Joseph
mentioned, this can be written fully safely by casting to an unsigned
type.)

We can identify all the places where gcc relies on the undefinedness
of signed overflow by looking for all tests of flag_wrapv (barring
bugs, of course).  Interestingly, a number of tests of flag_wrapv
exist merely to avoid optimizations which may cause signed overflow,
since the resulting code may then be optimized such that it breaks the
original intent.

I think it might be interesting to add -Wundefined-signed-overflow to
warn about every case where gcc assumes that signed overflow is
undefined.  This would not be difficult.  It would be interesting to
see how many warnings it generates for real code.  This warning should
probably not be part of -Wall.

Here is a quick list of optimizations that mainline gcc performs which
rely on the idea that signed overflow is undefined.  All the types
are, of course, signed.  I may have made some mistakes.  I think this
gives a good feel for the sorts of optimizations we can make with this
assumption.

* Fold (- (A / B)) to (A / - B) or (- A / B).

* Fold ((- A) / B) to (A / - B), or (A / - B) to ((- A) / B) (where
  it seems profitable).

* Fold ((A + C1) cmp C2) to (A cmp (C1 + C2)) where C1 and C2 are
  constants, and likewise for -.

* Fold ((A + C1) cmp (B + C2)) to (A cmp (B + C1 + C2)) where C1 and
  C2 are constants, and likewise for -.

* Fold simple comparisons like (X - C > X) to false.
* Assume that abs (X) >= 0 when simplifying comparisons and divisions
  (we can't assume this when -fwrapv is used, because
  abs (INT_MIN) == INT_MIN).

* Simplify abs (X) < 0 to false, and simplify abs (X) >= 0 to true.

* Assume that if X >= 0 and Y >= 0, then X + Y != 0 exactly when
  X != 0 or Y != 0.  This is used primarily when simplifying
  comparisons against NULL.

* Assume that X * Y != 0 exactly when X != 0 and Y != 0.

* In range tests, assume that (low < A + C < high) is equivalent to
  (low - C < A < high - C).

* Transform ((A + C1) * C2) to (A + C1 * C2) even if (C1 * C2)
  overflows.  Likewise for ((A + C1) / C2) if C1 is a multiple of C2.
  Likewise for - instead of +.  (This seems like it might be an
  inappropriate optimization (extract_muldiv_1 in fold-const.c).)

* Try to reduce the magnitude of the constant in comparisons:
  + C <= A ==> C - 1 < A.
  + - CST < A ==> - CST - 1 <= A.
  + C > A ==> C - 1 >= A.
  + - C >= A ==> - C - 1 > A.
  + A - C < B ==> A - C - 1 <= B.
  + A + C > B ==> A + C - 1 >= B.
  + A + C <= B ==> A + C - 1 < B.
  + A - C >= B ==> A - C - 1 > B.

* -fwrapv affects handling of pointer types in some cases; I'm not
  really sure why.  See, e.g., maybe_canonicalize_comparison in
  fold-const.c: "In principle pointers also have undefined overflow
  behavior, but that causes problems elsewhere."

* Assume that signed loop induction variables do not overflow.

* Compute the number of iterations in a loop of the form
  X = INIT; X cmp END; X OP= Y
  by assuming that X will not wrap around.

* VRP optimizations:
  + Assume A < A + C1.
  + Assume A > A - C1 and A - C1 < A.
  + Assume A + C1 > A + C2 <==> C1 > C2, likewise for <.
  + Assume A + C1 > A - C2 and A - C1 < A + C2.
  + Treat signed integer overflow as infinity.
  + Assume that - (range with low value type_MIN) becomes (range with
    high value type_MAX).
  + Assume that abs (type_MIN) != type_MIN.

Ian
Re: question from imobilien
Jan Eissfeld <[EMAIL PROTECTED]> writes:

> PR19978 reports that some overflow warnings are emitted multiple
> times.  For example:
>
>   test.c:6: warning: integer overflow in expression
>   test.c:6: warning: integer overflow in expression
>   test.c:6: warning: integer overflow in expression
>
> The current testsuite will match any number of those to a single
> { dg-warning }.  I don't know whether this is a known limitation, a
> bug in the testsuite, or it just needs some magic.

This is a known limitation in the test harness.

Ian
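For readers unfamiliar with the harness: a testsuite file in gcc's
DejaGnu framework attaches one dg-warning directive to a source line,
and that single directive matches no matter how many times the
diagnostic fires on that line, which is the limitation being confirmed
above.  A hypothetical testcase sketch (not from PR19978):

```
/* The directive below passes whether gcc emits the warning once or
   three times for this line, so duplicate diagnostics go unnoticed.  */
int i = 2000000000 + 2000000000; /* { dg-warning "integer overflow" } */
```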