Re: About VLIW backend
> I am interested in it. What is the current status? Is it still being
> developed? Is it included in an official GCC release?

Unfortunately our group is not actively working on that right now. For various reasons (mainly the paucity of time) we couldn't release it to the GCC community then (about 3 years back). - Pranav
Re: Reload using a live register to reload into
> The file.c.176r.greg for insn 309 says
>
>   "Spilling for insn 309.
>    Using reg 6 for reload 0"
>
> and indeed rld[0].regno is 6 and rld[0].in is
>
>   (plus:SI (reg/f:SI 29 $sp)
>            (const_int 176 [0xb0]))
>
> However the function choose_reload_regs chooses $c1 for this reload
> and sets rld[0].reg_rtx to the rtl expression for $c1.

The reload pass features a few "optimizations" that can bypass the initial selection.

> I am confused whether reg_rtx should always be the rtl expression for
> regno?

No, not always.

> Anyway, shouldn't reload be checking for live registers before reloading into
> them?

Of course, it goes to great lengths to do so, but there can be bugs. You didn't specify which version of the compiler you're using though; they may already have been fixed on the mainline. -- Eric Botcazou
Progress on GCC plugins ?
Hi, Is there any progress in the gcc-plugin project ? How far is the code and what about its integration in a future release ? I still think this is a great idea and I'm quite impatient to try it out (I'll have some time in January). Regards -- Emmanuel Fleury Out the 10Base-T, through the router, down the T1, over the leased line, off the bridge, past the firewall...nothing but Net. -- Tony Miller
Re: DEBUG_INSN that is not an insn
On Nov 5, 2007, "Steven Bosscher" <[EMAIL PROTECTED]> wrote:

> I am worried about some of the things going on in the
> var-tracking-assignments branch. The thing that worries me most is
> the introduction of an insn that is not an instruction:
>
>   /* An annotation for variable assignment tracking. */
>   DEF_RTL_EXPR(DEBUG_INSN, "debug_insn", "iuuBieie", RTX_INSN)
>
> DEBUG_INSN is more like a note, AFAIU.

In some senses, yes. In others, no. It is a note in the sense that we don't generate code for it. It's an insn in the sense that optimization passes need to adjust references in it as they modify the code elsewhere. So it's more like a USE insn, except that it's a weak USE, in that optimization passes shouldn't refrain from performing optimizations just because of such weak USEs.

> I couldn't find any discussions about this idea, so I don't know if
> there is "sufficient" consensus that this is the right thing to do.

I don't think there's any consensus, indeed. What I have is a proposal of work in progress, but I'm quite convinced that there's no better alternative. See http://gcc.gnu.org/ml/gcc/2007-11/msg00176.html for the rationale on the design I've chosen.

> Also, registers mentioned in DEBUG_INSNs are counted as real uses,

Except when they aren't, because I've modified the counting routines so as to disregard them ;-)

> which is bound to confuse some existing RTL analyses, and makes it
> harder to implement new ones safely.

The alternative is to keep them out of band, which makes it even more likely that they are forgotten and left out of date. And since we don't have a comprehensive debug info testing infrastructure, we have silent failures. With the approach I've taken, there's still a possibility of silent failures in debug info, but it's far less likely, for the most common failure mode I've observed (with the caveat that I don't have a debug info testsuite) has been that of failure to perform optimizations because of the presence of debug annotations. -- Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/ FSF Latin America Board Member http://www.fsfla.org/ Red Hat Compiler Engineer [EMAIL PROTECTED], gcc.gnu.org} Free Software Evangelist [EMAIL PROTECTED], gnu.org}
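To make the "weak USE" behaviour concrete, here is a minimal sketch (not from the original mail): it assumes a DEBUG_INSN_P predicate like the one on the var-tracking-assignments branch, and the two functions themselves are invented for illustration. The remaining macros and helpers (INSN_P, NEXT_INSN, PATTERN, reg_referenced_p, replace_rtx, regno_reg_rtx) are the stock RTL interfaces.

  /* A debug insn must not count as a real use, so it cannot inhibit an
     optimization that would otherwise remove or rename the register...  */
  static int
  count_real_uses (rtx first, unsigned int regno)
  {
    rtx insn;
    int count = 0;

    for (insn = first; insn; insn = NEXT_INSN (insn))
      {
        if (DEBUG_INSN_P (insn))
          continue;   /* Weak USE: ignored when deciding whether REGNO is used.  */
        if (INSN_P (insn)
            && reg_referenced_p (regno_reg_rtx[regno], PATTERN (insn)))
          count++;
      }

    return count;
  }

  /* ...but when a pass does rename a register, the debug insns have to be
     rewritten along with the real ones, or their annotations go stale.  */
  static void
  rename_reg_everywhere (rtx first, rtx old_reg, rtx new_reg)
  {
    rtx insn;

    for (insn = first; insn; insn = NEXT_INSN (insn))
      if (INSN_P (insn) || DEBUG_INSN_P (insn))
        PATTERN (insn) = replace_rtx (PATTERN (insn), old_reg, new_reg);
  }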
Re: Reload using a live register to reload into
Hi Eric, Thanks for the response > Of course, it goes to great length to do so but there can be bugs. You didn't > specify which version of the compiler you're using though; they may have been > already fixed on the mainline. Oh, I am using quite a new version of the compiler - rev 129547, DATESTAMP 20071022. cheers! Pranav
Re: Reload using a live register to reload into
> Oh, I am using quite a new version of the compiler - rev 129547, > DATESTAMP 20071022. OK. AFAICS there is nothing glaring in the RTL you posted so you'll have to put a watchpoint and find out who has set reg_rtx for this particular reload. -- Eric Botcazou
Re: Target specific attributes to variables
> Even though the other 2 addressing modes are implemented, the > attributes could not be checked in the other 2 modes. These 2 modes > are "disp with register" and "register indirect" addressing modes. The > tree structure in these addressing modes could not be checked for > attributes using the RTX of the operand. We were unable to get any > information from other target specific attributes. Look up MEM_EXPR in the internals. You might want to use that for the register indirect case. cheers! Pranav
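For what it's worth, here is a hedged sketch of the kind of check being suggested (illustrative only: the "io" attribute name and the helper are invented, while MEM_P, MEM_EXPR, DECL_P, DECL_ATTRIBUTES and lookup_attribute are the standard GCC interfaces; MEM_EXPR can be NULL, notably for compiler-generated accesses, so callers must cope with that):

  static bool
  operand_decl_has_io_attribute (rtx op)
  {
    tree expr;

    if (!MEM_P (op))
      return false;

    /* MEM_EXPR gives back the tree (typically a decl or component ref)
       the MEM was derived from, when that information survived to RTL.  */
    expr = MEM_EXPR (op);
    if (expr == NULL_TREE || !DECL_P (expr))
      return false;

    /* "io" is a made-up target attribute used only for this sketch.  */
    return lookup_attribute ("io", DECL_ATTRIBUTES (expr)) != NULL_TREE;
  }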
Re: About VLIW backend
> Did you test large programs, such as applications from SPEC 2006, or
> programs of comparable size? Thanks.

Oh no, we didn't. We stopped when we achieved fair stability, judged purely by the number of testsuite failures (less than 100). cheers! Pranav
Re: Reload using a live register to reload into
> OK. AFAICS there is nothing glaring in the RTL you posted so you'll have to
> put a watchpoint and find out who has set reg_rtx for this particular reload.

reg_rtx gets set due to a call to choose_reload_regs, which in turn calls allocate_reload_reg to set reg_rtx.

Also, just to confirm that I am on the right track, shouldn't the bit for reg #1 (i.e. $c1) be set in live_throughout in the insn chain for insn #91 (reproduced below for convenience)?

(call_insn:HI 91 270 92 5 cor_h.c:129 (parallel [
            (set (reg:SI 1 $c1)
                (call (mem:SI (symbol_ref:SI ("DotProductWithoutShift") [flags 0x41]) [0 S4 A32])
                    (const_int 0 [0x0])))
            (use (const_int 0 [0x0]))
            (clobber (reg:SI 31 $link))
        ]) 42 {*call_value_direct} (expr_list:REG_DEAD (reg:SI 4 $c4)
        (expr_list:REG_DEAD (reg:SI 3 $c3 [ ivtmp.103 ])
            (expr_list:REG_DEAD (reg:SI 2 $c2 [ h ])
                (nil))))
    (expr_list:REG_DEP_TRUE (use (reg:SI 4 $c4))
        (expr_list:REG_DEP_TRUE (use (reg:SI 3 $c3 [ ivtmp.103 ]))
            (expr_list:REG_DEP_TRUE (use (reg:SI 2 $c2 [ h ]))
                (expr_list:REG_DEP_TRUE (use (reg:SI 1 $c1 [ ivtmp.101 ]))
                    (nil))))))

TIA, Pranav
Re: Progress on GCC plugins ?
David Edelsohn wrote:
> Dave Korn writes:
Dave> I don't understand: why wouldn't designing it so that they have to be
Dave> implemented as DSOs and hence are covered by the
Dave> anything-directly-linked-in-is-GPL'd clause do the job? Or is the concern
Dave> that people will write trivial marshalling plugins and ship all the data out
Dave> across a pipe or socket to some proprietary code and then only release source
Dave> for their shim layers?
>
> The concern is the many forms of shim layers that possibly could be
> written more easily with a plug-in framework.

There is also a difference between these two scenarios:

1. a) Company X writes a modification to GCC to generate special
      intermediate stuff with format Y.

   b) Company X writes a proprietary back end which can only
      read format Y stuff written by that modification.

2. Same as 1, except that the GCC project does step a),
   thus semi-standardizing the output.

To me it is pretty clear that these are very different situations from a license point of view. David
RE: Progress on GCC plugins ?
On 07 November 2007 16:41, Joe Buck wrote: > On Wed, Nov 07, 2007 at 09:20:21AM +0100, Emmanuel Fleury wrote: >> Is there any progress in the gcc-plugin project ? > > Non-technical holdups. RMS is worried that this will make it too easy > to integrate proprietary code directly with GCC. > > If proponents can come up with good arguments about how the plugin > project can be structured to avoid this risk, that would help. I don't understand: why wouldn't designing it so that they have to be implemented as DSOs and hence are covered by the anything-directly-linked-in-is-GPL'd clause do the job? Or is the concern that people will write trivial marshalling plugins and ship all the data out across a pipe or socket to some proprietary code and then only release source for their shim layers? cheers, DaveK -- Can't think of a witty .sigline today
Re: undocumented optimization options
Mark Mitchell <[EMAIL PROTECTED]> wrote on 05/11/2007 01:51:33:

> Gerald Pfeifer wrote:
> > On Thu, 1 Nov 2007, Janis Johnson wrote:
> >> -fipa-cp                steven
> >> -fipa-matrix-reorg      razya
> >> -fipa-pure-const        zadeck (enabled with -O)
> >> -fipa-reference         zadeck (enabled with -O)
> >> -fipa-type-escape       zadeck
> >> -fvar-tracking-uninit   ctice

I'll add documentation for ipa-cp and ipa-matrix-reorg as soon as Zadeck commits his changes to invoke.texi.

Thanks,
Razya

> >> Is there a policy about whether an experimental option can be left
> >> undocumented, or should it be documented with a statement that it is
> >> experimental?
> >
> > I'd prefer the latter.
>
> I believe our policy to be that *all* command line options must be
> clearly documented. The document can say that the option is
> experimental, deprecated, or otherwise in danger of being removed or
> changed, but we should document the option.
>
> If an option is only useful for developers, and we really think that
> users should not be allowed to twiddle it, we should hide it under an
> #ifdef.
>
> Thanks,
>
> --
> Mark Mitchell
> CodeSourcery
> [EMAIL PROTECTED]
> (650) 331-3385 x713
GCC-4.x and 3.x for msp430
Hello All: This is a message already sent to the mspgcc-users mailing list, which deals with the msp430 port of the GNU toolchain for these microcontrollers [1]. I now post it here because someone advised me that this was also a good place to ask for this information.

I'm concerned about the status of the GCC-4.x msp430 port. It has been clear to me that it is not ready for production use, whereas I understand that 3.x is. I'd like to know how much work the 4.x branch needs to become usable/production ready, and also how much of that work can reuse what has already been done in the 3.x branch.

When I went to the CVS repository [2] to check on the progress of the 3.x branch, I was surprised that the most recent changes have been made to the 3.3 version and that the 3.4 version was lagging quite some time behind. Could anyone explain why 3.3 has been chosen, and whether the changes are directly applicable to 3.4?

Thanks and regards, [1] http://mspgcc.sf.net [2] http://sourceforge.net/cvs/?group_id=42303 -- Raúl Sánchez Siles
Re: Designs for better debug info in GCC (was: Re: [vta] don't let debug insns get in the way of simple vect reduction)
Hi,

On Wed, 7 Nov 2007, Alexandre Oliva wrote:

>> With the different approach I and Matz started (and to which we didn't
>> yet spend enough time to get debug information actually output - but I
>> hope we'll get there soon), on the tree level the extra information is
>> stored in a bitmap per SSA_NAME (where necessary).
>
> This will fail on a very fundamental level. Consider code such as:
>
> f(int x, int y) {
>   int c;
>   /* other vars */
>
>   c = x;
>   do_something_with(c, ...); // doesn't touch x or y
>
>   c = y;
>   do_something_else_with(c, ...); // doesn't touch x or y
> }
>
> where do_something_*with are actually complex computations, be that
> explicit code, be it macros or inlined functions.
>
> This can (and should) be trivially optimized to:
>
> f(int x, int y) {
>   /* other vars */
>
>   do_something_with(x, ...); // doesn't touch x or y
>
>   do_something_else_with(y, ...); // doesn't touch x or y
> }
>
> But now, if I 'print c' in a debugger in the middle of one of the
> do_something_*with expansions, what do I get?
>
> With the approach I'm implementing, you should get x and y at the
> appropriate points, even though variable c doesn't really exist any
> more.
>
> With your approach, what will you get?

x and y at the appropriate points. Whatever holds 'x' at a point (SSA name, pseudo or mem) will also mention that it holds 'c'. At a later point whichever holds 'y' will also mention that it holds 'c'.

> There isn't any assignment to x or y you could hook your notes to.

But there are _places_ for x and y. Those places can and are also associated with c.

> Even if you were to set up side representations to model the additional
> variables that end up mapped to the incoming arguments, you'd have 'c'
> in both, and at the entry point. How would you tell?

I don't understand the question.

Ciao, Michael.
Re: How to turn off NRVO in gcc
Joe Buck <[EMAIL PROTECTED]> writes: > On Wed, Nov 07, 2007 at 07:48:53AM -0800, Ian Lance Taylor wrote: > > "Debarshi Sanyal" <[EMAIL PROTECTED]> writes: > > > > >Is there any way to turn off "named return value optimization" > > > (NRVO) while compiling a C++ program with g++? > > > > This question is not appropriate for gcc@gcc.gnu.org, which is for > > developers of gcc. It is appropriate for [EMAIL PROTECTED] > > Please take any followups to that mailing list. Thanks. > > > > The answer to your question is no. g++ will always implement NRVO > > when possible. > > You forgot about -fno-elide-constructors , Ian. I've needed it in > the past to work around a bug in profiling; there's a PR for this. Ah, tricky. Thanks. Ian
Re: Designs for better debug info in GCC
On Nov 7, 2007, Ian Lance Taylor <[EMAIL PROTECTED]> wrote:

> Alexandre Oliva <[EMAIL PROTECTED]> writes:
>> I've pondered both alternatives, and decided that the latter was the
>> only testable path. If we had a reliable debug information tester, we
>> could proceed incrementally with the first alternative; it might be
>> viable, but I don't really see that it would make things any simpler.
>
> It seems to me that this is a reason to write a reliable debug
> information tester.

Yep. This is in the roadmap. But it's not something that can be done with GCC alone. It's more of a "system" test, that will involve debuggers or monitoring tools. gdb, frysk, systemtap or some such come to mind.

> Your approach gives you a point solution--did anything change
> today--but it doesn't give us a maintenance solution--did anything
> change over time?

Actually, no, your assessment is incorrect. What I'm providing gives us means to test, at any point in time, that enabling debug information won't cause changes to the generated code. So far, code in the trunk only performs these comparisons within the GCC directory. And, nevertheless, patches that correct obvious divergences have been lingering for months. I have recently-posted patches that introduce means to test other host and target libraries. I still haven't written testsuite code to enable us to verify that debug information doesn't affect the generated code for existing tests, or for additional tests introduced for this very purpose, but this is in the roadmap.

Of course, none of this guarantees that debug information is accurate or complete, it just helps ensure that -g won't change code generation. Testing more than this requires a tool that can not only interpret debug information, but also the generated code, and verify that they match. The plan is to use the actual processors (or simulators) to understand the generated code, and existing debug info consumers that are debugging or monitoring tools to verify that debug info reflects the behavior observed by the processor.

> While I understand that you were given certain requirements, for the
> purposes of mainline gcc we need to weigh costs and benefits. How
> many of our users are looking for precise debugging of optimized code,
> and how much are they willing to pay for that? Will our users overall
> be better served by the 90% solution?

Does it really matter? Do we compromise standards compliance (and so violently, while at that) in any aspect of the compiler? What do we tell the growing number of users who don't regard debug information as something useless except for occasional debugging? That GCC cares about standards compliance except for debug information, and they should write their own Free Software compiler if they want a correct, standards-compliant compiler?

Do we accept taking shortcuts for optimizations or other code generation issues when they cause incorrect code to be produced? Why should the mantra "must not sacrifice correctness" not apply to debug information standards in GCC?

At this point, debug information is so bad that it's a shame that most builds are done with -O2 -g: we're just wasting CPU cycles and disk space, contributing to accelerate the thermodynamic end of the universe (nevermind the Kyoto protocol ;-), for information that is severely incomplete at best, and terribly broken at worst. Yes, generating correct code may take some more memory and some more CPU cycles. 
Have we ever made a decision to use less memory or CPU cycles when the result is incorrect code? Why should standardized meta-information about the generated code be any different? >> 1. every single gimple assignment grows by one word, I take this back, I'd been misled by richi's description. It's really a side hashtable (which gets me worried about the re-emitted rather than modified gimple assignments in some locations), so it doesn't waste memory for gimple assignments that don't refer to user variables. Unfortunately, this is not the case for rtx SETs, in this alternate approach. > I don't know what the best approach is for improving debug > information. Your phrasing seems to indicate you're not concerned about fixing debug information, but rather only about making it less broken. With different goals, we can come to very different solutions. > But I think we've learned over time that explicit NOTEs > in the RTL was not, in general, a good idea. They complicate > optimizations and they tend to get left behind when moving code. Being left behind is actually a feature. It's one of the reasons why I chose this representation. The debug annotation is not supposed to move along with the SET, because it would then no longer model the source code, it would rather be mangled, often beyond recognition, because of implementation details. As for complicating optimizations, I can have some sympathy for that. Sure, generating code without preserving the information needed to m
How to turn off NRVO in gcc
Hi, Is there any way to turn off "named return value optimization" (NRVO) while compiling a C++ program with g++? Please reply. This is very urgent. Regards, Debarshi
Dependency check between instructions?
Hello, I am wondering whether there is a simple function/interface in GCC to check whether one instruction depends on another (directly or indirectly). It would be very useful for my target-specific optimizing passes, and I could not find one in the GCC internals manual or the GCC source code. Thanks in advance. Bingfeng Mei Broadcom UK
Re: undocumented optimization options
Razya Ladelsky wrote:
> Mark Mitchell <[EMAIL PROTECTED]> wrote on 05/11/2007 01:51:33:
>
>> Gerald Pfeifer wrote:
>>> On Thu, 1 Nov 2007, Janis Johnson wrote:
>>>> -fipa-cp                steven
>>>> -fipa-matrix-reorg      razya
>>>> -fipa-pure-const        zadeck (enabled with -O)
>>>> -fipa-reference         zadeck (enabled with -O)
>>>> -fipa-type-escape       zadeck
>>>> -fvar-tracking-uninit   ctice
>
> I'll add documentation for ipa-cp and ipa-matrix-reorg as soon as
> Zadeck commits his changes to invoke.texi.
>
> Thanks,
> Razya
>
>>>> Is there a policy about whether an experimental option can be left
>>>> undocumented, or should it be documented with a statement that it is
>>>> experimental?
>>>
>>> I'd prefer the latter.
>>
>> I believe our policy to be that *all* command line options must be
>> clearly documented. The document can say that the option is
>> experimental, deprecated, or otherwise in danger of being removed or
>> changed, but we should document the option.
>>
>> If an option is only useful for developers, and we really think that
>> users should not be allowed to twiddle it, we should hide it under an
>> #ifdef.
>>
>> Thanks,
>>
>> --
>> Mark Mitchell
>> CodeSourcery
>> [EMAIL PROTECTED]
>> (650) 331-3385 x713

I am waiting for an approval, hint hint hint.

Kenny
Re: Designs for better debug info in GCC (was: Re: [vta] don't let debug insns get in the way of simple vect reduction)
Alexandre Oliva <[EMAIL PROTECTED]> writes:

> I've pondered both alternatives, and decided that the latter was the
> only testable path. If we had a reliable debug information tester, we
> could proceed incrementally with the first alternative; it might be
> viable, but I don't really see that it would make things any simpler.

It seems to me that this is a reason to write a reliable debug information tester. Your approach gives you a point solution--did anything change today--but it doesn't give us a maintenance solution--did anything change over time?

> Since one of the requirements I was given was that debug information
> be correct (as in, if I don't know where a variable is, debug
> information must say so, rather than say the variable is somewhere it
> really isn't), going without additional annotations just wouldn't
> work. Therefore, I figured I'd have to bite the bullet and take the
> longer path, even though I don't dispute that it is possible to
> achieve many improvements with the simpler approach.

While I understand that you were given certain requirements, for the purposes of mainline gcc we need to weigh costs and benefits. How many of our users are looking for precise debugging of optimized code, and how much are they willing to pay for that? Will our users overall be better served by the 90% solution?

> 1. every single gimple assignment grows by one word, to hold the
> pointer to the bitmap. But most gimple assignments are to temporary
> variables, and these don't need annotations. Creating different kinds
> of gimple assignments would make things quite complex, so I'd rather
> not go down that path. So, you'd use a lot more memory, even when the
> feature is not in use at all, and you might likely use more memory
> than adding separate notes for user assignments like I do. And this
> doesn't even count the actual bitmaps.

I expect that most compilations are with -g, so I think we need to compare memory usage between the two approaches with -g.

I don't know what the best approach is for improving debug information. But I think we've learned over time that explicit NOTEs in the RTL were not, in general, a good idea. They complicate optimizations and they tend to get left behind when moving code. We've fixed many many bugs and misoptimizations over the years due to NOTEs. I'm concerned that adding DEBUG_INSN in RTL repeats a mistake we've made in the past.

Ian
Re: Dependency check between instructions?
Bingfeng Mei wrote:
> Hello, I am wondering whether there is a simple function/interface in GCC
> to check whether one instruction depends on another (directly or
> indirectly). It would be very useful for my target-specific optimizing
> passes, and I could not find one in the GCC internals manual or the GCC
> source code. Thanks in advance.

IMHO, your best shot is sched-deps.c, the infrastructure used to calculate instruction dependencies for the scheduling pass. Though there is no user-friendly interface to that infrastructure, it's not that difficult to write one. sched-deps.c gathers information about processed instructions into a dependence context (struct deps) as it analyzes the instruction stream one insn at a time. Basically it:

1. analyzes the insn,
2. creates dependencies for it upon previously analyzed insns,
3. and adds information about the insn to the context (so that the next insn will receive proper dependencies too).

A sample function that checks whether two insns are dependent may look like this:

  bool
  insns_dep_p (rtx pro, rtx con)
  {
    struct deps _deps, *deps = &_deps;
    bool res;

    /* Initialize context.  */
    init_deps (deps);

    /* Add information about producer to the context.  */
    sched_analyze_insn (deps, pro);

    /* Create dependencies between pro and con.  */
    sched_analyze_insn (deps, con);

    /* Check if con has at least one dep.  */
    res = (get_lists_size (con, ALL_DEPS) != 0);

    /* Remove dependencies from con.  */
    remove_all_deps (con);

    return res;
  }

It is not necessary to actually create any dependencies; for an example of using sched-deps.c to analyze dependencies between instructions (and their LHS and RHS) without creating the dependency graph, see sched-deps.c in sel-sched-branch. -- Maxim
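A hedged usage sketch for the helper above (again illustrative: it presumes insns_dep_p has been fleshed out as Maxim outlines, and uses the stock FOR_BB_INSNS, INSN_P, NEXT_INSN, BB_END and INSN_UID macros):

  static void
  report_deps_in_bb (basic_block bb)
  {
    rtx i1, i2;

    if (!dump_file)
      return;

    FOR_BB_INSNS (bb, i1)
      if (INSN_P (i1))
        for (i2 = NEXT_INSN (i1);
             i2 && i2 != NEXT_INSN (BB_END (bb));
             i2 = NEXT_INSN (i2))
          if (INSN_P (i2) && insns_dep_p (i1, i2))
            fprintf (dump_file, "insn %d depends on insn %d\n",
                     INSN_UID (i2), INSN_UID (i1));
  }

Note that, used this way, each pair starts from a fresh dependence context, so only direct dependencies between the two insns are reported; finding indirect (transitive) dependencies means carrying one context across the whole block, as the scheduler itself does.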
Re: Progress on GCC plugins ?
> Dave Korn writes: Dave> I don't understand: why wouldn't designing it so that they have to be Dave> implemented as DSOs and hence are covered by the Dave> anything-directly-linked-in-is-GPL'd clause do the job? Or is the concern Dave> that people will write trivial marshalling plugins and ship all the data out Dave> across a pipe or socket to some proprietary code and then only release source Dave> for their shim layers? The concern is the many forms of shim layers that possibly could be written more easily with a plug-in framework. David
Re: Progress on GCC plugins ?
On Nov 7, 2007, at 9:28 AM, Robert Dewar wrote:
> Tom Tromey wrote:
>> First, aren't we already in this situation? There are at least 2
>> compilers out there that re-use parts of GCC by serializing trees and
>> then reading them into a different back end.
>
> It's not obvious to me that this is consistent with the GPL .. interesting issue ...

Assuming these projects are "linking" it in, it just means the resulting compiler must also be GPL. That seems consistent. -Chris
Re: Progress on GCC plugins ?
Tom Tromey wrote: First, aren't we already in this situation? There are at least 2 compilers out there that re-use parts of GCC by serializing trees and then reading them into a different back end. It's not obvious to me that this is consistent with the GPL .. interesting issue ... Second, how does the LTO project avoid this same worry? Tom
Re: Progress on GCC plugins ?
Joe Buck wrote:
> On Wed, Nov 07, 2007 at 09:20:21AM +0100, Emmanuel Fleury wrote:
>> Is there any progress in the gcc-plugin project ?
>
> Non-technical holdups. RMS is worried that this will make it too easy
> to integrate proprietary code directly with GCC.
>
> If proponents can come up with good arguments about how the plugin
> project can be structured to avoid this risk, that would help.

I thought (from my memory and understanding of discussions at the last GCC summit in July 2007 in Ottawa) that a major argument against this risk is that there is no stability in the API offered by GCC to plugins. So any proprietary plugin would be a nightmare to maintain... -- Basile STARYNKEVITCH http://starynkevitch.net/Basile/ email: basilestarynkevitchnet mobile: +33 6 8501 2359 8, rue de la Faiencerie, 92340 Bourg La Reine, France *** opinions {are only mines, sont seulement les miennes} ***
Re: Progress on GCC plugins ?
On Wed, Nov 07, 2007 at 09:20:21AM +0100, Emmanuel Fleury wrote: > Is there any progress in the gcc-plugin project ? Non-technical holdups. RMS is worried that this will make it too easy to integrate proprietary code directly with GCC. If proponents can come up with good arguments about how the plugin project can be structured to avoid this risk, that would help.
Re: How to turn off NRVO in gcc
"Debarshi Sanyal" <[EMAIL PROTECTED]> writes: >Is there any way to turn off "named return value optimization" > (NRVO) while compiling a C++ program with g++? This question is not appropriate for gcc@gcc.gnu.org, which is for developers of gcc. It is appropriate for [EMAIL PROTECTED] Please take any followups to that mailing list. Thanks. The answer to your question is no. g++ will always implement NRVO when possible. Ian
Re: How to turn off NRVO in gcc
On Wed, Nov 07, 2007 at 07:48:53AM -0800, Ian Lance Taylor wrote: > "Debarshi Sanyal" <[EMAIL PROTECTED]> writes: > > >Is there any way to turn off "named return value optimization" > > (NRVO) while compiling a C++ program with g++? > > This question is not appropriate for gcc@gcc.gnu.org, which is for > developers of gcc. It is appropriate for [EMAIL PROTECTED] > Please take any followups to that mailing list. Thanks. > > The answer to your question is no. g++ will always implement NRVO > when possible. You forgot about -fno-elide-constructors , Ian. I've needed it in the past to work around a bug in profiling; there's a PR for this.
Re: Progress on GCC plugins ?
> "Joe" == Joe Buck <[EMAIL PROTECTED]> writes: Joe> On Wed, Nov 07, 2007 at 09:20:21AM +0100, Emmanuel Fleury wrote: >> Is there any progress in the gcc-plugin project ? Joe> Non-technical holdups. RMS is worried that this will make it too easy Joe> to integrate proprietary code directly with GCC. Joe> If proponents can come up with good arguments about how the plugin Joe> project can be structured to avoid this risk, that would help. First, aren't we already in this situation? There are at least 2 compilers out there that re-use parts of GCC by serializing trees and then reading them into a different back end. Second, how does the LTO project avoid this same worry? Tom
Re: Designs for better debug info in GCC
On Nov 7, 2007, Michael Matz <[EMAIL PROTECTED]> wrote:

> On Wed, 7 Nov 2007, Alexandre Oliva wrote:
>> This will fail on a very fundamental level. Consider code such as:
>>
>> f(int x, int y) {
>>   int c;
>>   /* other vars */
>>
>>   c = x;
>>   do_something_with(c, ...); // doesn't touch x or y
>>
>>   c = y;
>>   do_something_else_with(c, ...); // doesn't touch x or y
>> }
>>
>> This can (and should) be trivially optimized to:
>>
>> f(int x, int y) {
>>   /* other vars */
>>
>>   do_something_with(x, ...); // doesn't touch x or y
>>
>>   do_something_else_with(y, ...); // doesn't touch x or y
>> }
>>
>> But now, if I 'print c' in a debugger in the middle of one of the
>> do_something_*with expansions, what do I get?
>>
>> With the approach I'm implementing, you should get x and y at the
>> appropriate points, even though variable c doesn't really exist any
>> more.
>>
>> With your approach, what will you get?
>
> x and y at the appropriate points. Whatever holds 'x' at a point (SSA name,
> pseudo or mem) will also mention that it holds 'c'. At a later point
> whichever holds 'y' will also mention that it holds 'c'.

I.e., there will be two parallel locations throughout the entire function that hold the value of 'c'. Something like:

  f(int x /* but also c */, int y /* but also c */) {
    /* other vars */

    do_something_with(x, ...); // doesn't touch x or y

    do_something_else_with(y, ...); // doesn't touch x or y
  }

Now, what will you get if you 'print c' in the debugger (or if any other debug info evaluator needs to tell what the value of user variable c is) at a point within do_something_with(c,...) or do_something_else_with(c)?

Now consider that f is inlined into the following code:

  int g(point2d p) {
    /* lots of code */
    f(p.x, p.y);
    /* more code */
    f(p.y, p.x);
    /* even more code */
  }

g gets fully scalarized, so, before inlining, we have:

  int g(point2d p) {
    int p$x = p.x, p$y = p.y;
    /* lots of code */
    f(p$x, p$y);
    /* more code */
    f(p$y, p$x);
    /* even more code */
  }

after inlining of f, we end up with:

  int g(point2d p) {
    int p$x = p.x, p$y = p.y;
    /* lots of code */
    {
      int f()::x.1 /* but also f()::c.1 */ = p$x,
          f()::y.1 /* but also f()::c.1 */ = p$y;
      {
        /* other vars */
        do_something_with(f()::x.1, ...); // doesn't touch x or y
        do_something_else_with(f()::y.1, ...); // doesn't touch x or y
      }
    }
    /* more code */
    {
      int f()::x.2 /* but also f()::c.2 */ = p$x,
          f()::y.2 /* but also f()::c.2 */ = p$y;
      {
        /* other vars */
        do_something_with(f()::x.2, ...); // doesn't touch x or y
        do_something_else_with(f()::y.2, ...); // doesn't touch x or y
      }
    }
    /* even more code */
  }

then, we further optimize g and get:

  int g(point2d p) {
    int p$x /* but also f()::x.1, f()::c.1, f()::y.2, f()::c.2 */ = p.x;
    int p$y /* but also f()::y.1, f()::c.1, f()::x.2, f()::c.2 */ = p.y;
    /* lots of code */
    {
      {
        /* other vars */
        do_something_with(p$x, ...); // doesn't touch x or y
        do_something_else_with(p$y, ...); // doesn't touch x or y
      }
    }
    /* more code */
    {
      {
        /* other vars */
        do_something_with(p$y, ...); // doesn't touch x or y
        do_something_else_with(p$x, ...); // doesn't touch x or y
      }
    }
    /* even more code */
  }

and now, if you try to resolve the variable name 'c' to a location or a value within any of the occurrences of do_something_*with(), what do you get? What ranges do you generate for each of the variables involved?

>> There isn't any assignment to x or y you could hook your notes to.
>
> But there are _places_ for x and y. Those places can and are also
> associated with c.

This just goes to show that there's a fundamental mistake in the mapping. 
Instead of mapping user-level concepts to implementation concepts, which is what debug information is meant to do, you're mapping implementation details to user-level concepts. Unfortunately, this mapping is not biunivocal. The chosen representation is fundamentally lossy. It can't possibly get you accurate debug information. And the above is just an initial example of the loss of information that will lead to *incorrect* debug information, which is far worse than *incomplete* information. >> Even if you were to set up side representations to model the additional >> variables that end up mapped to the incoming arguments, you'd have 'c' >> in both, and at the entry point. How would you tell? > I don't understand the question. See the discussion about resolving 'c' above. -- Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/ FSF Latin America Board Member http://www.fsfla.org/ Red Hat Compiler Engineer [EMAIL PROTECTED], gcc.gnu.org} Free Software Evangelist [EMAIL PROTECTED], gnu.org}
Re: Progress on GCC plugins ?
>> The concern is the many forms of shim layers that possibly could
>> be written more easily with a plug-in framework.
>
> There is also a difference between these two scenarios:
>
> 1. a) Company X writes a modification to GCC to generate special
>       intermediate stuff with format Y.
>
>    b) Company X writes a proprietary back end which can only
>       read format Y stuff written by that modification.
>
> 2. Same as 1, except that the GCC project does step a),
>    thus semi-standardizing the output.
>
> To me it is pretty clear that these are very different
> situations from a license point of view.

I do exactly point 1 for my (Open Source) C++ exception static analysis tool: http://edoc.sourceforge.net/

I need a subset of the data in a format that is easy for me to process, and it must stay around for a long period of time, so I don't want all the information that GCC could generate in a more generic form. So in this case I feel a non-standardized format is best suited for my project. This same argument would apply to other projects too, though I can see how it can be helpful to have a single, more generic format. It just might not be suitable in a lot of cases. If you already have a plugin distributed with GCC that writes in a generic format, then most people will use that anyway if they can, rather than write and maintain their own plugin.

This is a problem with or without the plugin framework. By adding plugins you have the same issues as before but just make it easier for all developers, whether open source or proprietary, to develop new GCC functionality. My project is an example of using a patch against GCC in order to achieve the "shim layer". I would much prefer to do this with a plugin, but a lack of the plugin framework is not going to stop me doing it whatever way I can.

Brendon.
Re: Fw: error: array type has incomplete element type ??
On 07/11/2007, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote: > > This is a part of the code : > -- > extern struct dummy temp[]; > error: array type has incomplete element type > -- This is a question about how to use GCC, not about development of GCC, so it should be sent to [EMAIL PROTECTED] Regards, Jon
Re: Progress on GCC plugins ?
Brendon Costa wrote:
>>> The concern is the many forms of shim layers that possibly could
>>> be written more easily with a plug-in framework.
>>
>> There is also a difference between these two scenarios:
>>
>> 1. a) Company X writes a modification to GCC to generate special
>>       intermediate stuff with format Y.
>>    b) Company X writes a proprietary back end which can only
>>       read format Y stuff written by that modification.
>>
>> 2. Same as 1, except that the GCC project does step a),
>>    thus semi-standardizing the output.
>>
>> To me it is pretty clear that these are very different
>> situations from a license point of view.
>
> I do exactly point 1 for my (Open Source) C++ exception static analysis
> tool: http://edoc.sourceforge.net/

Well, assuming that your tool is GPL'ed there is no issue, and this is not a case of 1 above!
Re: Reload using a live register to reload into
> Also, just to confirm that I am on the right track, shouldn't the bit for
> reg #1 (i.e. $c1) be set in live_throughout in the insn chain for
> insn #91 (reproduced below for convenience)?
>
> (call_insn:HI 91 270 92 5 cor_h.c:129 (parallel [
>             (set (reg:SI 1 $c1)
>                 (call (mem:SI (symbol_ref:SI ("DotProductWithoutShift") [flags 0x41]) [0 S4 A32])
>                     (const_int 0 [0x0])))
>             (use (const_int 0 [0x0]))
>             (clobber (reg:SI 31 $link))
>         ]) 42 {*call_value_direct} (expr_list:REG_DEAD (reg:SI 4 $c4)
>         (expr_list:REG_DEAD (reg:SI 3 $c3 [ ivtmp.103 ])
>             (expr_list:REG_DEAD (reg:SI 2 $c2 [ h ])
>                 (nil))))
>     (expr_list:REG_DEP_TRUE (use (reg:SI 4 $c4))
>         (expr_list:REG_DEP_TRUE (use (reg:SI 3 $c3 [ ivtmp.103 ]))
>             (expr_list:REG_DEP_TRUE (use (reg:SI 2 $c2 [ h ]))
>                 (expr_list:REG_DEP_TRUE (use (reg:SI 1 $c1 [ ivtmp.101 ]))
>                     (nil))))))

I don't think so, it should be in dead_or_set; the value contained in $c1 dies in the insn. -- Eric Botcazou
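For reference, a hedged sketch of how one could inspect those sets from the reload code or under a debugger (illustrative only: the function is invented, hard register 1 is taken from the dump above, and live_throughout/dead_or_set are assumed to be the regset_head fields struct insn_chain carries in reload.h of that era; drop the '&' if they are plain regsets in your tree):

  static void
  report_c1_in_chain (struct insn_chain *chain)
  {
    const unsigned int c1 = 1;  /* hard register number of $c1 in the dump */

    fprintf (stderr, "insn %d: $c1 %s live_throughout, %s dead_or_set\n",
             INSN_UID (chain->insn),
             REGNO_REG_SET_P (&chain->live_throughout, c1) ? "in" : "not in",
             REGNO_REG_SET_P (&chain->dead_or_set, c1) ? "in" : "not in");
  }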
Re: Progress on GCC plugins ?
Robert Dewar wrote:
> Brendon Costa wrote:
>>>> The concern is the many forms of shim layers that possibly could
>>>> be written more easily with a plug-in framework.
>>>
>>> There is also a difference between these two scenarios:
>>>
>> I do exactly point 1 for my (Open Source) C++ exception static analysis
>> tool: http://edoc.sourceforge.net/
>
> Well, assuming that your tool is GPL'ed there is no issue, and
> this is not a case of 1 above!

The patch against GCC is GPL; the main library that is capable of manipulating the data exported by the patched GCC is LGPL and could theoretically be under any license.

What I was trying to point out is that proprietary projects can already (without plugins) make exporters for GCC which are GPL and then create the majority of their code in other closed-source apps that use that data. I don't see plugins as changing this except to make it easier. Which is really a major reason for providing plugins, isn't it? Making it easier to provide additional GCC features?

My project was given as an example of how it could be done currently without plugins, even though this project is open source.

Brendon.
Re: Designs for better debug info in GCC
On Wed, Nov 07, 2007 at 02:56:24PM -0800, Ian Lance Taylor wrote: > At one time, gcc actually provided better debugging of optimized code > than any other compiler, though I don't know if that is still true. > Optimized gcc code is still debuggable today. I do it all the time. > (For me poor support for debugging C++ is a much bigger issue, though > I think that is an issue more with gdb than with gcc.) We're working on both of these on the GDB side. > gcc's users are definitely calling for a faster compiler. Are they > calling for better debuggability of optimized code? In my experience, yes. CodeSourcery has work currently being contributed to GDB that makes this quite a lot better; we also occasionally have customers ask us about further improvements. And I file bugs about this from time to time, most of which are still open. > As I understand your proposal, it materializes variables which were > otherwise omitted from the generated program. It doesn't address the > other issues with debugging optimized code, like bouncing around > between program lines. Is that correct? What else does your proposal > do? I've been thinking about the bouncing problem quite a bit lately. I have some rough ideas, but I won't draw out this thread by sharing :-) -- Daniel Jacobowitz CodeSourcery
Re: Designs for better debug info in GCC
Alexandre Oliva <[EMAIL PROTECTED]> writes: > > Your approach gives you a point solution--did anything change > > today--but it doesn't give us a maintenance solution--did anything > > change over time? > > Actually, no, your assessment is incorrect. Ah, you're right. I was wrong. > > While I understand that you were given certain requirements, for the > > purposes of mainline gcc we need to weigh costs and benefits. How > > many of our users are looking for precise debugging of optimized code, > > and how much are they willing to pay for that? Will our users overall > > be better served by the 90% solution? > > Does it really matter? Do we compromise standards compliance (and so > violently, while at that) in any aspect of the compiler? What standards are you talking about? I'm not aware of any standard for debuggability of optimized code. At one time, gcc actually provided better debugging of optimized code than any other compiler, though I don't know if that is still true. Optimized gcc code is still debuggable today. I do it all the time. (For me poor support for debugging C++ is a much bigger issue, though I think that is an issue more with gdb than with gcc.) gcc's users are definitely calling for a faster compiler. Are they calling for better debuggability of optimized code? > >> 1. every single gimple assignment grows by one word, > > I take this back, I'd been misled by richi's description. It's really > a side hashtable (which gets me worried about the re-emitted rather > than modified gimple assignments in some locations), so it doesn't > waste memory for gimple assignments that don't refer to user > variables. > > Unfortunately, this is not the case for rtx SETs, in this alternate > approach. Obviously the memory requirements of both approaches will need to be measured. > > We've fixed many many bugs and misoptimizations over the years due to > > NOTEs. I'm concerned that adding DEBUG_INSN in RTL repeats a mistake > > we've made in the past. > > That's a valid concern. However, per this reasoning, we might as well > push every operand in our IL to separate representations, because > there have been so many bugs and misoptimizations over the years, > especially when the representation didn't make transformations > trivially correct. Please don't use strawman arguments. As I understand your proposal, it materializes variables which were otherwise omitted from the generated program. It doesn't address the other issues with debugging optimized code, like bouncing around between program lines. Is that correct? What else does your proposal do? Ian
Re: Progress on GCC plugins ?
Brendon Costa wrote:
> The patch against GCC is GPL; the main library that is capable of
> manipulating the data exported by the patched GCC is LGPL and could
> theoretically be under any license.

Whose theory? You don't know that!

> What I was trying to point out is that proprietary projects can already
> (without plugins) make exporters for GCC which are GPL and then create
> the majority of their code in other closed-source apps that use that data.

You don't know that!

> I don't see plugins as changing this except to make it easier. Which is
> really a major reason for providing plugins, isn't it? Making it easier
> to provide additional GCC features?

No, it has other changes, since it establishes a standardized interface. In my view (from the point of view of an expert witness in copyright matters), this *does* make a big difference from a licensing point of view. Of course that's just my view, I don't know either, but I know that I don't know :-)

> My project was given as an example of how it could be done currently
> without plugins, even though this project is open source.

The issue is not whether it can be done, but what the licensing issues are. And without litigation, no one really knows.

> Brendon.
gcc-4.2-20071107 is now available
Snapshot gcc-4.2-20071107 is now available on ftp://gcc.gnu.org/pub/gcc/snapshots/4.2-20071107/ and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 4.2 SVN branch with the following options: svn://gcc.gnu.org/svn/gcc/branches/gcc-4_2-branch revision 129973

You'll find:

gcc-4.2-20071107.tar.bz2           Complete GCC (includes all of below)
gcc-core-4.2-20071107.tar.bz2      C front end and core compiler
gcc-ada-4.2-20071107.tar.bz2       Ada front end and runtime
gcc-fortran-4.2-20071107.tar.bz2   Fortran front end and runtime
gcc-g++-4.2-20071107.tar.bz2       C++ front end and runtime
gcc-java-4.2-20071107.tar.bz2      Java front end and runtime
gcc-objc-4.2-20071107.tar.bz2      Objective-C front end and runtime
gcc-testsuite-4.2-20071107.tar.bz2 The GCC testsuite

Diffs from 4.2-20071031 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-4.2 link is updated and a message is sent to the gcc list. Please do not use a snapshot before it has been announced that way.
Attributes on structs
There are several outstanding bugs (29436, 28560, 28834, possibly others) having to do with applying attributes to uses of structs in typedefs or other places other than the point of definition. The problem with doing this is that the attribute code makes a new TYPE_MAIN_VARIANT, so we basically get a new type which shares the name and members of the original type. This causes trouble for the C++ type system and other parts of the compiler that assume that all variants of a class with a particular name have the same TYPE_MAIN_VARIANT. I notice that the GCC manual only says that type attributes can be applied in a typedef or type definition, but in fact the compiler allows them to be applied anywhere. Sometimes the attributes are intended to create new types (vector), but usually not; the modified types are intended to work like the base type. Sometimes the attribute has important implications for code generation (aligned), sometimes it doesn't (unused). For the cases that have important implications for code generation, we need the TYPE_MAIN_VARIANT trick because a lot of places in the compiler assume that they can just take TYPE_MAIN_VARIANT to strip cv-quals, and we don't want that to discard important attributes. One solution to this issue would be to simply disallow attributes on structs after the definition. Failing that, we need to define how they interact with the type system. Opinions? Jason
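To make the cases concrete, here is an illustrative snippet (not from the original mail); the aligned attribute stands in for any attribute with code-generation implications:

  /* Attribute at the point of definition: every use of struct A sees it.  */
  struct A { char c; } __attribute__ ((aligned (16)));

  /* Attribute applied after the fact, in a typedef: the problematic case.
     The attribute code makes a new TYPE_MAIN_VARIANT, so B16 shares B's
     name and members but is, internally, a distinct type.  */
  struct B { char c; };
  typedef struct B B16 __attribute__ ((aligned (16)));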
Re: Progress on GCC plugins ?
If you want to have a legal discussion, please take this conversation somewhere else. Thanks, David
Re: Progress on GCC plugins ?
Robert Dewar wrote:
> Brendon Costa wrote:
>
>> The patch against GCC is GPL, the main library that is capable of
>> manipulating the data exported by the patched GCC is LGPL and could
>> theoretically be under any license.
>
> Whose theory? You don't know that!

I thought it was obvious :-) My theory... A theory is not necessarily true...

I will clarify where I am coming from. I am a software developer and know very little about the legal side of things. What I have said is really the way I understand things, as I have tried to have a basic idea of what all this entails and how it affects my work.

If the license extends to the data generated by GPL apps (which is really what I think we are talking about), then wouldn't the binaries, or for that matter the intermediate source files generated by GCC (for example using g++ -save-temps), also be covered under the GPL regardless of the license of the source being compiled? This data could include:

* object files
* intermediate source files (.i, .s)
* pre-compiled header files
* any other form of the original source serialized into some specific format, such as GIMPLE exports etc.

The problem with this is that each of these is really just a different representation of the original source code. This complicates matters even more if we have both the GPL of GCC and the license of the original source code impacting on what is generated. Wouldn't it mean that only certain types of licensed source code are allowed to be compiled with GCC? Where do you draw the limit of what is covered and what is not?

It seems to me, from what you have said, that nothing is safe from the GPL. If an OS like BSD compiles itself with GCC and this mandates that anything which uses that data must also be licensed under the GPL, then they have no choice but to say only GPL code is allowed to run on this OS. I don't see that as being the current common view. Most of the BSDs have a non-GPL license and still use GCC to compile.

Or are you saying that the FORMAT of the data exported is covered by the GPL, not the data itself? Does that mean that if you design a particular data format in a GPL app, you are not allowed to write a non-GPL application that uses data in that format? Does this also apply the other way around? I don't know if Microsoft has some sort of license over the PE exe format for binaries, but if so, is GCC actually ALLOWED to export data in that format? Look at GIF. Then I guess it is worth asking: does the GPL applied to the code of the plugin automatically apply to the format of the data exported?

Sorry for the long email. I am just trying to understand the issues involved.

>> I don't see plugins as changing this except to make it easier. Which
>> is really a major reason for providing plugins, isn't it? Making it
>> easier to provide additional GCC features?
>
> No, it has other changes, since it establishes a standardized
> interface. In my view (from the point of view of an expert
> witness in copyright matters), this *does* make a big difference
> from a licensing point of view. Of course that's just my view,
> I don't know either, but I know that I don't know :-)

Are we talking about the standardized interface provided by GCC that people can use to write plugins for? If so, I would agree that anything which uses this interface must be GPL, so the code for a plugin of GCC must also be GPL. If we are talking about the "interface" that is generated by the export of data in a particular format, I still don't see how that affects the tools that make use of that data format, unless the particular format has a license imposed on it.

Again, this is all just my view and I am *NOT* an expert in this field. In fact, I have had almost NO experience in legal areas. I am curious to know where I have misunderstood the application of the GPL to the GCC project and how that might apply to others like my EDoc++ project.

Thanks,
Brendon.
Re: Progress on GCC plugins ?
David Edelsohn wrote:
> If you want to have a legal discussion, please take this
> conversation somewhere else.
>
> Thanks, David

Sorry, I just posted another email before I got this. Is there a suitable place to move this discussion besides private emails?

Thanks,
Brendon.
Re: Attributes on structs
On Wed, 7 Nov 2007, Mark Mitchell wrote: > Jason Merrill wrote: > > > One solution to this issue would be to simply disallow attributes on > > structs after the definition. Failing that, we need to define how they > > interact with the type system. Opinions? > > We had a discussion about this a while back: > > http://gcc.gnu.org/ml/gcc/2006-10/msg00318.html > > Roughly speaking, it seemed like my proposal was approximately > acceptable to those who commented -- except that it seemed that it > actually allowed a bit more than we need to do so. In particular, my > reading of that thread is that people seemd to agree we could simply > disallow attributes after the definition, which is fine by me. There's an additional issue to deal with now: proposals to include some form of attributes in C++0x and C1x and any semantics those may define. None of the proposals I've seen seem to do much about addressing the type system issues. I critiqued incompatibilities in one proposal in WG14 N1259. WG21 N2466 (a document arising from the WG14 Kona meeting that I presume will appear in the WG14 post-Kona mailing as well as the WG21 one) gives some WG14 views from Kona but doesn't seem to address type system issues. -- Joseph S. Myers [EMAIL PROTECTED]
Re: Designs for better debug info in GCC
David Edelsohn wrote: >> Mark Mitchell writes: > > Mark> I think we all agree that providing better debugging of optimized code > Mark> is a priori a good thing. So, as I see it, this thread is focused on > Mark> what internal representation we might use for that. > > Yes, it is a good thing, but not at any price. Regardless of the > representation and implementation, there is a cost. This discussion > should not start with the premise that better debugging of optimized code > is better at any cost. I agree. You're right to state this explicitly, but I'd implicitly expected that we'd do cost/benefit analysis on this feature, as we would any other. > Mark> I'd like to start by > Mark> capturing the functional changes that we want to make to GCC's debug > Mark> output -- not the changes that we want in the debug experience, or > Mark> changes that we need in GDB, but the changes in the generated DWARF. > > Who is "we"? What better debugging are GCC users demanding? What > debugging difficulties are they experiencing? Who is that set of users? > What functional changes would improve those cases? What is the cost of > those improvements in complexity, maintainability, compile time, object > file size, GDB start-up time, etc.? That's what I'm asking. First and foremost, I want to know what, concretely, Alexandre is trying to achieve, beyond "better debugging info for optimized code". Until we understand that, I don't see how we can sensibly debate any methods of implementation, possible costs, etc. -- Mark Mitchell CodeSourcery [EMAIL PROTECTED] (650) 331-3385 x713
Re: Progress on GCC plugins ?
Joe Buck wrote: > On Wed, Nov 07, 2007 at 09:20:21AM +0100, Emmanuel Fleury wrote: >> Is there any progress in the gcc-plugin project ? > > Non-technical holdups. RMS is worried that this will make it too easy > to integrate proprietary code directly with GCC. > > If proponents can come up with good arguments about how the plugin > project can be structured to avoid this risk, that would help. > Is there anything that still needs to be done on the technical side at all like testing or development? I am willing to help out a bit. I have an interest in seeing this in a future release :-) Thanks, Brendon.
Re: Designs for better debug info in GCC
Ian Lance Taylor wrote:
> At one time, gcc actually provided better debugging of optimized code
> than any other compiler, though I don't know if that is still true.
> Optimized gcc code is still debuggable today. I do it all the time.
> (For me poor support for debugging C++ is a much bigger issue, though
> I think that is an issue more with gdb than with gcc.)

I think we all agree that providing better debugging of optimized code is a priori a good thing. So, as I see it, this thread is focused on what internal representation we might use for that.

I don't know that there's an abstract right answer to whether something NOTE-like or something on the side is better. There are problems with both approaches. We know the NOTE/DEBUG_INSN thing is going to break, from experience; we also know the on-the-side thing is going to be hard to maintain. Alexandre has clearly thought about this a lot.

I'd like to start by capturing the functional changes that we want to make to GCC's debug output -- not the changes that we want in the debug experience, or changes that we need in GDB, but the changes in the generated DWARF. For example, I'm thinking of a series of function test cases. Ignore the substance of this example -- I'm making it up! -- I'm just trying to capture the form.

===

  int main () {
    int i;
    i = 3;
    return i;
  }

When optimizing, "i" is optimized away. The debug info for "i" right before the return statement says "i has been optimized away", but not what its value is. I think it should say that the value is "3". To do that, we need to emit a DW_Now_My_Value_is_3 tag for "i".

===

Now, how is whatever representation we pick going to get us that? Is the Oliva representation sufficient? What about the Guenther/Matz representation? Independently of the representation, what algorithms are we going to use to track whatever we need to track as the optimizers remove, insert, duplicate, and reorder code?

Until we all know what we're trying to do, I don't see how we can make a good decision about the representation. Clearly, in the abstract, we can represent data either on-the-side or in the instruction stream, but until we know what output we want, I'm not sure how we can pick. -- Mark Mitchell CodeSourcery [EMAIL PROTECTED] (650) 331-3385 x713
Re: Designs for better debug info in GCC
On Nov 7, 2007, Mark Mitchell <[EMAIL PROTECTED]> wrote: > First and foremost, I want to know what, concretely, Alexandre is > trying to achieve, beyond "better debugging info for optimized > code". I'm not really going for "better". I'm going for "correct" first, while making room for "better", and hopefully already getting better, in the process. -- Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/ FSF Latin America Board Member http://www.fsfla.org/ Red Hat Compiler Engineer [EMAIL PROTECTED], gcc.gnu.org} Free Software Evangelist [EMAIL PROTECTED], gnu.org}
Re: Designs for better debug info in GCC
On Nov 7, 2007, Ian Lance Taylor <[EMAIL PROTECTED]> wrote: >> Does it really matter? Do we compromise standards compliance (and so >> violently, while at that) in any aspect of the compiler? > What standards are you talking about? Debug information standards such as DWARF-3. > I'm not aware of any standard for debuggability of optimized code. I'm talking about standards that specify how a compiler should encode meta-information about how source code concepts map to the code it generated. See, for example, section 2.6 in the Dwarf-3 specification. It talks very little about optimization, but it does discuss what a DW_AT_location, if present, means. It doesn't say anything like: "if a variable is available at a certain location most of the time, you can emit a DW_AT_location that refers to that location". It says: Debugging information must provide consumers a way to find the location of program variables, determine the bounds of dynamic arrays and strings, and possibly to find the base address of a subroutine’s stack frame or the return address of a subroutine. See, it's not about debuggers, it's about consumers. It's an obligation, not really an option (that said, DW_AT_location *is* optional). 1. Location expressions, which are a language independent representation of addressing rules of arbitrary complexity built from DWARF expressions. They are sufficient for describing the location of any object as long as its lifetime is either static or the same as the lexical block that owns it, and it does not move throughout its lifetime. 2. Location lists, which are used to describe objects that have a limited lifetime or change their location throughout their lifetime. Nowhere does it state that, "if the compiler can't quite keep track of the location of a variable, it can be sloppy and emit just whatever is simpler or appears to make sense". Address ranges may overlap. When they do, they describe a situation in which an object exists simultaneously in more than one place. If all of the address ranges in a given location list do not collectively cover the entire range over which the object in question is defined, it is assumed that the object is not available for the portion of the range that is not covered. So, it does make room for *some* sloppiness, after all. That's what I refer to as "incompleteness of debug information". If we fail to keep track of where an object is, it's sort-of ok (although undesirable) to emit debug information that omits the location of the object in certain program regions where it might be live. However, it is not standard-compliant to emit information stating that the object is available at certain locations if it is NOT really there, or if it is available elsewhere, in addition to or instead of the locations we've emitted. That's what I refer to as "incorrectness of debug information". Incorrectness in the compiler output is always a bug. No matter how hard it is to implement, or how resource-intensive the solution is, arguing that we've made a trade-off and decided to generate wrong output for this case does not make it any less of a bug. Incompleteness is a completely different issue. This is where we *can* afford to make trade-offs. Just like we can decide to omit certain optimizations, or to not carry them out to the greatest possible extent, or to experiment with various different heuristics, we could afford to emit incomplete debug information; it's "just" a quality of implementation issue. But not incorrect debug information, that's just a bug.
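As a hand-written illustration of the two mechanisms quoted above (addresses, offsets and registers are made up, and the formatting only approximates a readelf --debug-dump=loc dump), a variable that lives in a register over one range and in a frame slot over another would be described by a location list such as:

    Offset   Begin            End              Expression
    00000020 0000000000001000 0000000000001010 (DW_OP_reg5)
    00000020 0000000000001010 0000000000001034 (DW_OP_fbreg: -20)
    00000020 <End of list>

Instructions outside the two ranges simply have no entry, which is the permitted incompleteness described above; the incorrectness being objected to would be, for instance, keeping the DW_OP_reg5 entry over a range where the register has already been reused for something else.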
> gcc's users are definitely calling for a faster compiler. Are they > calling for better debuggability of optimized code? This is not just about debuggability, as I've tried to explain all the way from the beginning of the discussion, maybe a couple of months ago. Debug information is not just about debuggers any more. There are good reasons why the Dwarf-3 standard says "consumers" rather than "debuggers". It's no longer just a matter of convenience, where you could simply recompile with -O0 if you wanted to debug. It's a matter of correctness, for various monitoring tools now rely on this meta-information, and rightfully so. >> > We've fixed many many bugs and misoptimizations over the years due to >> > NOTEs. I'm concerned that adding DEBUG_INSN in RTL repeats a mistake >> > we've made in the past. >> >> That's a valid concern. However, per this reasoning, we might as well >> push every operand in our IL to separate representations, because >> there have been so many bugs and misoptimizations over the years, >> especially when the representation didn't make transformations >> trivially correct. > Please don't use strawman arguments. It's not, really. A reference to an object within a debug stmt or insn is very much like any other operand, in that most optimizer passes must keep them up to date. If you argue for pushing them outside the IL, why would any other operand be any different?
Re: Adding custom scheduler dependency between 2 insns
Tomer Benyamini wrote: > Hi, > > I was wondering if it is possible to create a dependency between 2 insns > through a specific scheduler hook (maybe through > TARGET_SCHED_DEPENDENCIES_EVALUATION_HOOK) even though the insns are not > really dependent (not really read-after-write etc.). If it is possible, > what is the best way to do it? It sounds to me like you want a dirty hack. Anyway, a call to sched-deps.c:sd_add_dep () should be enough. -- Maxim
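A rough sketch of how that could look in a port's target file follows. This is not from the thread: the dependencies-evaluation hook and the init_dep/sd_add_dep pair are the mainline interfaces being referred to, but the function name and the "depend on the first insn" policy are invented for illustration; a real port would apply its own condition and would normally make sure an equivalent dependence is not already recorded.

  /* Illustrative only: pin every insn in the region below the first
     real insn by adding artificial anti-dependences.  */

  static void
  mytarget_dependencies_evaluation_hook (rtx head, rtx tail)
  {
    rtx insn;

    if (!INSN_P (head))
      return;

    for (insn = NEXT_INSN (head);
         insn != NEXT_INSN (tail);
         insn = NEXT_INSN (insn))
      if (INSN_P (insn))
        {
          dep_def _dep, *dep = &_dep;

          /* HEAD is the producer and INSN the consumer, so INSN cannot
             be scheduled before HEAD.  */
          init_dep (dep, head, insn, REG_DEP_ANTI);
          sd_add_dep (dep, false);
        }
  }

  #undef TARGET_SCHED_DEPENDENCIES_EVALUATION_HOOK
  #define TARGET_SCHED_DEPENDENCIES_EVALUATION_HOOK \
    mytarget_dependencies_evaluation_hook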
Adding custom scheduler dependency between 2 insns
Hi, I was wondering if it is possible to create a dependency between 2 insns through a specific scheduler hook (maybe through TARGET_SCHED_DEPENDENCIES_EVALUATION_HOOK) even though the insns are not really dependent (not really read-after-write etc.). If it is possible, what is the best way to do it? Thanks, Tomer
RE: Adding custom scheduler dependency between 2 insns
Hi Maxim, Thanks for the prompt response! I'm using gcc 4.2.0, and couldn't locate sd_add_dep in sched-deps.c. Is add_dependence () an appropriate equivalent? Thanks, Tomer -Original Message- From: Maxim Kuvyrkov [mailto:[EMAIL PROTECTED] Sent: Thursday, November 08, 2007 09:30 To: Tomer Benyamini Cc: gcc@gcc.gnu.org Subject: Re: Adding custom scheduler dependency between 2 insns Tomer Benyamini wrote: > Hi, > > I was wondering if it is possible to create a dependency between 2 insns > through a specific scheduler hook (maybe through > TARGET_SCHED_DEPENDENCIES_EVALUATION_HOOK) even though the insns are not > really dependent (not really read-after-write etc.). If it is possible, > what is the best way to do it? It sounds to me like you want a dirty hack. Anyway, call to sched-deps.c: sd_add_dep () should be enough. -- Maxim
Re: Adding custom scheduler dependency between 2 insns
Tomer Benyamini wrote: > Hi Maxim, > > Thanks for the prompt response! I'm using gcc 4.2.0, and couldn't locate > sd_add_dep in sched-deps.c. Is add_dependence () an appropriate equivalent? Not quite. Try add_back_dep (); add_forw_dep (); -- Maxim
Re: Progress on GCC plugins ?
Joe Buck <[EMAIL PROTECTED]> writes: > On Wed, Nov 07, 2007 at 09:20:21AM +0100, Emmanuel Fleury wrote: > > Is there any progress in the gcc-plugin project ? > > Non-technical holdups. RMS is worried that this will make it too easy > to integrate proprietary code directly with GCC. > > If proponents can come up with good arguments about how the plugin > project can be structured to avoid this risk, that would help. It seems very likely that it would be possible to write a plugin which would generate IR to be fed into a proprietary backend. Sun's GCCfss is an existing example of what could be done in this area. Of course, GCCfss also shows that if people want to do this, they will. Adding a plugin framework doesn't even make it notably easier. Once you've written code to translate GIMPLE into some other IR, dropping it into gcc is a matter of changing a few lines. Providing a plugin interface is not the issue. More deeply, I think his concern is misplaced. I think that gcc has already demonstrated that the only widely used compilers are free software. Proprietary compilers don't keep up over time, outside of niche markets. Hooking proprietary code into gcc, one way or another, is just going to create a dead end for the people who do it. Certainly it's not a good thing, and certainly it would be preferable to prevent it if possible. But it is not the worst possible thing that could happen; it is merely a cost. I won't enumerate the benefits of plugins here. But it is clear to me that the benefits outweigh the costs. Ian
Re: Attributes on structs
Jason Merrill wrote: > One solution to this issue would be to simply disallow attributes on > structs after the definition. Failing that, we need to define how they > interact with the type system. Opinions? We had a discussion about this a while back: http://gcc.gnu.org/ml/gcc/2006-10/msg00318.html Roughly speaking, it seemed like my proposal was approximately acceptable to those who commented -- except that it seemed to allow a bit more than we actually need. In particular, my reading of that thread is that people seemed to agree we could simply disallow attributes after the definition, which is fine by me. -- Mark Mitchell CodeSourcery [EMAIL PROTECTED] (650) 331-3385 x713
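For readers who have not followed the earlier thread, a minimal example of the kind of construct under discussion (added here for illustration, not taken from either mail):

  struct S { int i; };

  /* Attribute given as part of the definition: uncontroversial.  */
  struct T { int i; } __attribute__ ((packed));

  /* Attribute applied to S only after its definition, via a typedef:
     is aligned_S the same type as struct S for type-identity purposes,
     or a distinct, more strictly aligned variant?  */
  typedef struct S aligned_S __attribute__ ((aligned (16)));

The second form is the sort of case whose interaction with the type system is unclear, and presumably the sort of thing a "disallow attributes after the definition" rule would reject.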
Re: Designs for better debug info in GCC
> Mark Mitchell writes: Mark> I think we all agree that providing better debugging of optimized code Mark> is a priori a good thing. So, as I see it, this thread is focused on Mark> what internal representation we might use for that. Yes, it is a good thing, but not at any price. Regardless of the representation and implementation, there is a cost. This discussion should not start with the premise that better debugging of optimized code is better at any cost. Mark> I'd like to start by Mark> capturing the functional changes that we want to make to GCC's debug Mark> output -- not the changes that we want in the debug experience, or Mark> changes that we need in GDB, but the changes in the generated DWARF. Who is "we"? What better debugging are GCC users demanding? What debugging difficulties are they experiencing? Who is that set of users? What functional changes would improve those cases? What is the cost of those improvements in complexity, maintainability, compile time, object file size, GDB start-up time, etc.? David
Re: Designs for better debug info in GCC
On Nov 7, 2007, Mark Mitchell <[EMAIL PROTECTED]> wrote: > Until we all know what we're trying to do Here's what I am trying to do:

1. Ensure that, for every user variable for which we emit debug information, the information is correct, i.e., if it says the value of a variable at a certain instruction is at certain locations, or is a known constant, then the variable must not be at any other location at that point, and the locations or values must match reasonable expectations based on source code inspection.

2. Defining "reasonable expectations" is tricky, for code reordering typical of optimization can make room for numerous surprises. I don't have a precise definition for this yet, but very clearly to me saying that a variable holds a value that it couldn't possibly hold (e.g., because it is only assigned that value in a code path that is knowingly not taken) is a very clear indication that something is amiss. The general guiding rule is, if we aren't sure the information is correct (or we're sure it isn't), we shouldn't pretend that it is.

3. Try to ensure that, if the value of a variable is a known constant at a certain point in the program, this information is present in debug information.

4. Try to ensure that, if the value of a variable is available at any location at a certain point in the program, this information is present in debug information.

5. Stop missing optimizations for the sake of improving debug information.

6. Avoid using additional memory and CPU cycles that would be needed only for debug information when compiling without generating debug information.

-- Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/ FSF Latin America Board Member http://www.fsfla.org/ Red Hat Compiler Engineer [EMAIL PROTECTED], gcc.gnu.org} Free Software Evangelist [EMAIL PROTECTED], gnu.org}
Re: Designs for better debug info in GCC
On Nov 7, 2007, David Edelsohn <[EMAIL PROTECTED]> wrote: > Who is "we"? What better debugging are GCC users demanding? What > debugging difficulties are they experiencing? I, for one, miss the arguments of inlined functions, a lot. The reason for that is that arguments are currently optimized away to boot. Even if they weren't, since they're initialized with a trivial copy, at least their initial value (quite often preserved throughout compilation) would be gone to boot. On top of that, we currently regard arguments and variables of non-inlined functions as special, and we prevent a number of optimizations with them, in order to be able to generate slightly better debug information for them. (As for arguments and variables of inlined functions, we happily drop them on the floor right away.) This is not only inconsistent, it's also harmful, because we're trading performance and compile-time memory for slightly better but still incorrect, incomplete and unreliable debug information. > Who is that set of users? I'm personally getting numerous requests for debug information correctness and better completeness from debug info consumers such as gdb, frysk and systemtap. GCC's eagerness to inline functions, even ones never declared as inline, and its eagerness to corrupt the meta-information associated with them, causes these tools to malfunction in very many situations. And it's all GCC's fault, for generating code that is not standards-compliant in the meta-information sections of its output. > What functional changes would improve those cases? What is the cost of > those improvements in complexity, maintainability, compile time, object > file size, GDB start-up time, etc.? Before I spend hours describing the little I can foresee about this, how much of this really matters, given that it's mostly a matter of correctness, rather than mere trade offs? -- Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/ FSF Latin America Board Member http://www.fsfla.org/ Red Hat Compiler Engineer [EMAIL PROTECTED], gcc.gnu.org} Free Software Evangelist [EMAIL PROTECTED], gnu.org}
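A small example of the first complaint (added here for illustration, not taken from the mail; exact behavior varies with GCC version and options): compiled with -O2 -g, callee is inlined into caller, and in compilers of this era the formal parameter of the inlined instance typically ends up with no location in the debug info, so a debugger stopped inside the inlined body cannot print x even though its value (42) is statically obvious.

  static int
  callee (int x)
  {
    return x + 1;
  }

  int
  caller (void)
  {
    return callee (42);
  }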