matching function for out_waiting in gcc 3.4.2
Hi, I have moved to gcc version 3.4.2 (linux sll), so I am migrating a component to this version from gcc 2.96. My existing code uses the *out_waiting* function of struct streambuf, declared in streambuf.h, but I can't find this function in gcc 3.4.2. Can you help me find a matching function in this version? Thanks, Anandi
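The thread doesn't contain an answer, but for context: the pre-ISO libstdc++ streambuf::out_waiting() returned the number of characters buffered in the put area. In the ISO iostreams shipped with gcc 3.4 the same information is available through the protected pbase()/pptr() pointers, so one possible migration path (the class name is mine, not an established API) is a small derived buffer:

```cpp
#include <sstream>
#include <ostream>
#include <cassert>

// Hypothetical replacement for the old out_waiting(): count the
// characters currently buffered in the put area via the protected
// ISO streambuf interface.
struct counting_stringbuf : std::stringbuf {
    std::streamsize out_waiting() const { return pptr() - pbase(); }
};
```

The same pptr() - pbase() idea applies when deriving from std::filebuf instead, if the component was counting unflushed file output.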
Re: Wiki pages on tests cases
On 11/27/05, Jonathan Wakely <[EMAIL PROTECTED]> wrote: > Yes, I know it's a wiki and I can do this myself, but I only have so > much spare time and maybe the second one was added for a good reason. http://en.wikipedia.org/wiki/Be_bold Works for them.
How implemented "typeof"
Hello! How can I learn more about the implementation, at the 'tree' level, of extensions such as 'typeof'? I do not want to explore and change the sources just now; maybe someone can help? -- Best regards, Alexander mailto:[EMAIL PROTECTED]
Re: How implemented "typeof"
Alexander wrote: Hello! How can I learn more about the implementation, at the 'tree' level, of extensions such as 'typeof'? I do not want to explore and change the sources just now; maybe someone can help? Your two desires conflict. typeof is implemented in cp/rtti.c. nathan -- Nathan Sidwell :: http://www.codesourcery.com :: CodeSourcery LLC [EMAIL PROTECTED] :: http://www.planetfall.pwp.blueyonder.co.uk
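Not from the thread, but for readers unfamiliar with the extension being asked about, a small usage sketch (variable names are illustrative). The front end resolves each typeof into the type of its operand while building trees, which is the level the question concerns; here the always-available GNU spelling __typeof__ is used:

```cpp
#include <cassert>

// Illustrative only: __typeof__ declares a variable with the deduced
// type of an expression, without naming the type.
int    an_int   = 21;
double a_double = 0.5;

int demo() {
    __typeof__(an_int + an_int) m = an_int + an_int;  // m has type int
    __typeof__(a_double) d = a_double * 2;            // d has type double
    return m + (int) d;                               // 42 + 1
}
```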
GCC-3.4.5 Release Status
Hi, At the moment, we have only one bug I consider release-critical for 3.4.5. middle-end/24804 Produces wrong code This bug was reported against 3.4.4; it is a bit odd because it is wrong code generation with '-O3 -fno-strict-aliasing'. Mark, RTH, could you provide hints? I'm running the pre-release script, so a new prerelease tarball will be available today. The final release is planned for the end of this month. -- Gabriel Dos Reis [EMAIL PROTECTED]
Re: Why doesn't combine like volatiles? (volatile_ok again, sorry!)
On Sun, 2005-11-27 at 03:14, Mike Stump wrote: > On Nov 22, 2005, at 7:52 AM, Richard Earnshaw wrote: > > 3) A volatile load isn't moved across any store that may alias (though > > I'd expect that to be volatile if there's a real risk of aliasing, so > > maybe we could have another dimension in the 'may-alias' test here). > > ? Is this just a restatement of the general rule that one cannot > move a load across a store that may alias? If so, we don't need to > list it here, as it comes under the normal rules of what one may not > do, and since we don't relist all of them, there isn't any point in > listing any of them. Possibly, but I think the more interesting observation is the one in parentheses: can a volatile access ever alias a non-volatile access? Logic would suggest that a program is unpredictable if written in a way that permits such aliases to exist, since it would mean a location is being accessed in both a non-volatile and a volatile manner. R.
Re: Why doesn't combine like volatiles? (volatile_ok again, sorry!)
Richard Earnshaw wrote: Possibly, but I think the more interesting observation is the one in parentheses: can a volatile access ever alias a non-volatile access? I think the answer is no. Certainly Ada has compile-time rules carefully written to make this impossible. Logic would suggest that a program is unpredictable if written in a way that permits such aliases to exist, since it would mean a location is being accessed in both a non-volatile and a volatile manner. Exactly!
Re: Why doesn't combine like volatiles? (volatile_ok again, sorry!)
Robert Dewar wrote: Richard Earnshaw wrote: Possibly, but I think the more interesting observation is the one in parentheses: can a volatile access ever alias a non-volatile access? I think the answer is no. Certainly Ada has compile-time rules carefully written to make this impossible. Usually I try to avoid the realm of "hypothetical targets" and hypothetical optimizations, but I can imagine a target where one bit of a register needs to be accessed in a volatile way and the others need not. Then I don't know whether it would be legal to optimize struct r { unsigned int x : 7; volatile unsigned int y : 1; }; struct r my_reg; so that my_reg.x is accessed with a non-volatile mem, and my_reg.y is accessed with a volatile one. Would such an optimization be possible within the Ada compile-time rules? Paolo
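Paolo's struct, as a compilable sketch. Whether the compiler may legally drop volatility on accesses to x while keeping it for y is exactly the open question; this only shows that the declaration itself is ordinary:

```cpp
#include <cassert>

// A register-like struct where only one bit-field is volatile, as in
// Paolo's hypothetical target. Accesses to y must stay volatile;
// whether accesses to x may be optimized as non-volatile is the
// question under discussion.
struct r {
    unsigned int x : 7;            // non-volatile bits
    volatile unsigned int y : 1;   // volatile bit
};

struct r my_reg;
```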
s390{,x} ABI incompatibility between gcc 4.0 and 4.1
Hi! There are several g*.dg/compat/ tests failing that show ABI incompatibilities: FAIL: tmpdir-g++.dg-struct-layout-1/t024 cp_compat_x_tst.o-cp_compat_y_alt.o execute FAIL: tmpdir-g++.dg-struct-layout-1/t024 cp_compat_x_alt.o-cp_compat_y_tst.o execute FAIL: tmpdir-g++.dg-struct-layout-1/t026 cp_compat_x_tst.o-cp_compat_y_alt.o execute FAIL: tmpdir-g++.dg-struct-layout-1/t026 cp_compat_x_alt.o-cp_compat_y_tst.o execute FAIL: tmpdir-g++.dg-struct-layout-1/t027 cp_compat_x_tst.o-cp_compat_y_alt.o execute FAIL: tmpdir-g++.dg-struct-layout-1/t027 cp_compat_x_alt.o-cp_compat_y_tst.o execute FAIL: tmpdir-g++.dg-struct-layout-1/t028 cp_compat_x_tst.o-cp_compat_y_alt.o execute FAIL: tmpdir-g++.dg-struct-layout-1/t028 cp_compat_x_alt.o-cp_compat_y_tst.o execute FAIL: tmpdir-gcc.dg-struct-layout-1/t025 c_compat_x_tst.o-c_compat_y_alt.o execute FAIL: tmpdir-gcc.dg-struct-layout-1/t025 c_compat_x_alt.o-c_compat_y_tst.o execute FAIL: tmpdir-gcc.dg-struct-layout-1/t027 c_compat_x_tst.o-c_compat_y_alt.o execute FAIL: tmpdir-gcc.dg-struct-layout-1/t027 c_compat_x_alt.o-c_compat_y_tst.o execute FAIL: tmpdir-gcc.dg-struct-layout-1/t028 c_compat_x_tst.o-c_compat_y_alt.o execute FAIL: tmpdir-gcc.dg-struct-layout-1/t028 c_compat_x_alt.o-c_compat_y_tst.o execute I have looked at just one failure, but maybe all of them are the same thing. typedef char __attribute__((vector_size (16))) v16qi; int i = __alignof__ (v16qi); with GCC 4.0 sets i to 8 (s390{,x} have BIGGEST_ALIGNMENT 64), but GCC 4.1 sets i to 16. The change that created this binary incompatibility is http://gcc.gnu.org/bugzilla/show_bug.cgi?id=23467 I think. layout_type sets TYPE_ALIGN to 128 bits (size of v16qi), and in 4.0 and earlier finalize_type_size used to decrease the alignment to GET_MODE_ALIGNMENT (TImode), which is 64 on s390{,x}. Was this change intentional? If yes, I think it should be documented in the 4.1 release notes, but I still hope it wasn't intentional. Jakub
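A small self-check in the spirit of the snippet above (the struct name is mine): the vector type's alignment determines field offsets, which is why an __alignof__ change between releases is an ABI break. The 8-vs-16 values are the s390 case reported here; other targets print other numbers, so only the layout invariant is asserted:

```cpp
#include <cassert>
#include <cstddef>

typedef char __attribute__((vector_size (16))) v16qi;

// Layout sketch: where 'v' lands depends on __alignof__ (v16qi), so
// two compiler releases disagreeing on that alignment lay this struct
// out differently and cannot safely pass it across an ABI boundary.
struct wrap { char c; v16qi v; };

int vec_align()  { return (int) __alignof__ (v16qi); }
int vec_offset() { return (int) offsetof (wrap, v); }
```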
C++ vague linkage data
Hello, when gcc emits vague linkage data for C++, such as vtables, it makes them all weak. Is there some reason why this needs to be done? If I'm getting it right, based e.g. on the comment in binutils/bfd/elf.c saying that they are weak in order to allow multiple copies and that the GNU ld extension linkonce will discard all but one, this seems to be done only for historical reasons (or call that compatibility, whatever). With the usual setup of using the complete GNU toolchain, there will always be only one such symbol in each resulting binary/library, because of the linkonce feature. If there are several such libraries, each of them having the same symbol, the normal ELF symbol rules will bind all references to only one of them, and those symbols are all the same anyway. Which means that in such a case there's no reason to have those symbols weak, and having them weak means that the symbol lookup in ld.so for them will be more expensive (because it has to search all libraries for a non-weak symbol only to find out there's obviously no such thing). Is there some reason why this shouldn't be changed so that these symbols are emitted normally as non-weak? On a somewhat related note, I've been looking at how to better control the emission of such vague linkage data. It's not difficult to get many of these things emitted in many places, which I expect should lead to things like longer build times, larger binaries, even more conflicts with prelink, and non-unique-symbol problems if one doesn't (want to) use RTLD_GLOBAL (well, I don't just expect the last two to happen - they do). The info pages info:/gcc/Vague Linkage and info:/gcc/C++ Interface are quite helpful on this, so I think I'll try the effect of #pragma interface/implementation. However, the docs give me the impression that this is not really the recommended way of doing things - is there some better way, or are there any problems with these pragmas I should expect?
I also wonder about the exact effect of these on templates, those pages and info:/gcc/Template Instantiation are a bit unclear (and obsolete?) on this topic. From my test it seems those pragmas don't have any special effect as far as templates are concerned - am I right on this? Thanks gcc --version : gcc (GCC) 4.0.2 20050901 (prerelease) (SUSE Linux) -- Lubos Lunak KDE developer - SuSE CR, s.r.o. e-mail: [EMAIL PROTECTED] , [EMAIL PROTECTED] Drahobejlova 27 tel: +420 2 9654 2373 190 00 Praha 9 fax: +420 2 9654 2374 Czech Republic http://www.suse.cz/
Re: C++ vague linkage data
On Mon, Nov 28, 2005 at 04:10:55PM +0100, Lubos Lunak wrote: > when gcc emits vague linkage data for C++ like vtables it makes them all > weak. Is there some reason why this needs to be done? > > If I'm getting it right, based on e.g. on the comment in binutils/bfd/elf.c > saying that they are weak in order to allow multiple copies and that the GNU > ld extension linkonce will discard all but one, this seems to be done only > for historical reasons (or call that compatibility, whatever). With the usual > setup of using the complete GNU toolchain, there will be always only one such > symbol because of the linkonce feature in each resulting binary/library. If > there will be more such libraries each of them having the same symbol, the > normal ELF symbol rules will bind all references only to one of them, and > those symbols are all the same anyway. > > Which means that in such case there's no reason to have those symbols weak, > and having them weak means that the symbol lookup in ld.so for them will be > more expensive (because it has to search all libraries for a non-weak symbol > only to find out there's obviously no such thing). Is there some reason why > this shouldn't be changed to have these symbols emitted normally as non-weak? glibc's ld.so hasn't worked that way for almost 3 years now: it doesn't special-case weak symbols, the first matching symbol is returned, and I believe glibc was the only one that violated the spec in that case. Jakub
Re: C++ vague linkage data
On Mon, Nov 28, 2005 at 04:10:55PM +0100, Lubos Lunak wrote: > Which means that in such case there's no reason to have those symbols weak, > and having them weak means that the symbol lookup in ld.so for them will be > more expensive (because it has to search all libraries for a non-weak symbol > only to find out there's obviously no such thing). That's not right. At least glibc's ld.so has not done this by default in years; only if you export LD_DYNAMIC_WEAK=1. Weak defs are treated exactly the same as strong defs during dynamic lookup, by default. -- Daniel Jacobowitz CodeSourcery, LLC
Re: GCC-3.4.5 Release Status
Gabriel Dos Reis wrote: > Mark, RTH, could you provide hints? I don't have any ideas just from looking at the problem. It could be a stack allocation problem, where we assign two things the same stack slot and get confused. -- Mark Mitchell CodeSourcery, LLC [EMAIL PROTECTED] (916) 791-8304
Re: Why doesn't combine like volatiles? (volatile_ok again, sorry!)
On Nov 28, 2005, at 3:00 AM, Richard Earnshaw wrote: Possibly, but I think the more interesting observation is the one in parentheses: can a volatile access ever alias a non-volatile access? Logic would suggest that a program is unpredictable if written in a way that permits such aliases to exist, since it would mean a location is being accessed in both a non-volatile and a volatile manner. I think this is uninteresting, as I'd presume that it is valid, and the user really wanted to do it (the below is pseudo code, no, really): int i[4096]; void foo() { i[0] = 0; mmap (... &i[0], ...); (*(volatile int *)&i[0]) = 1; } Anyway, the reason it is uninteresting is that the normal rules say they alias, so they alias. And for people that just want to quote the standard, don't bother, let me: [#5] If an attempt is made to modify an object defined with a const-qualified type through use of an lvalue with non- const-qualified type, the behavior is undefined. If an attempt is made to refer to an object defined with a volatile-qualified type through use of an lvalue with non- volatile-qualified type, the behavior is undefined.102)
Re: Why doesn't combine like volatiles? (volatile_ok again, sorry!)
On Nov 28, 2005, at 3:13 AM, Robert Dewar wrote: Possibly, but I think the more interesting observation is listed in parenthesis: Can a volatile access ever alias a non-volatile access? I think the answer is no, Certainly Ada has compile time rules carefully written to make this impossible. gcc is not just an Ada compiler. Clearly, the answer has to be yes to support GNU C.
Re: Why doesn't combine like volatiles? (volatile_ok again, sorry!)
> > On Nov 28, 2005, at 3:13 AM, Robert Dewar wrote: > >> Possibly, but I think the more interesting observation is listed in > >> parenthesis: Can a volatile access ever alias a non-volatile access? > > > > I think the answer is no, Certainly Ada has compile time rules > > carefully written to make this impossible. > > gcc is not just an Ada compiler. Clearly, the answer has to be yes > to support GNU C. While it is true that GCC is not just an Ada compiler, I think we should follow a sane set of rules for GNU C, which might mean following Ada's rules for this case. What is GNU C if it is not well documented? -- Pinski
Re: C++ vague linkage data
On Mon, Nov 28, 2005 at 04:10:55PM +0100, Lubos Lunak wrote: > when gcc emits vague linkage data for C++ like vtables it makes them all > weak. Is there some reason why this needs to be done? In the case of vtables, they are only weak if all the virtual functions are defined as inline. Otherwise the vtable is defined only in the .o file that defines the first non-inline virtual function.
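A compilable sketch of that rule (class names are mine, not from the thread): under GCC's C++ ABI, the first non-inline virtual function is the "key function" that anchors the vtable to one translation unit, while an all-inline class gets a weak/linkonce vtable copy in every unit that needs one:

```cpp
#include <cassert>

// All virtual functions inline: every translation unit that uses the
// class emits its own weak copy of the vtable, as described above.
struct AllInline {
    virtual ~AllInline() {}
    virtual int id() { return 1; }
};

// First non-inline virtual function ("key function"): the vtable is
// emitted only in the translation unit that defines KeyFn::id().
struct KeyFn {
    virtual ~KeyFn() {}
    virtual int id();
};

int KeyFn::id() { return 2; }   // normally lives in exactly one .cpp file
```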
Re: Why doesn't combine like volatiles? (volatile_ok again, sorry!)
On Nov 28, 2005, at 9:18 AM, Andrew Pinski wrote: What is GNU C if it is not well documented? :-) ^L Useful.
Re: Why doesn't combine like volatiles? (volatile_ok again, sorry!)
>> Possibly, but I think the more interesting observation is listed in >> parenthesis: Can a volatile access ever alias a non-volatile access? > > I think the answer is no, Certainly Ada has compile time rules > carefully written to make this impossible. gcc is not just an Ada compiler. Clearly, the answer has to be yes to support GNU C. And where is the standard for the language known as "GNU C"? One of the major problems with the GNU C extensions has been that they were done in an era where people weren't as precise as they are today regarding language definition and the interactions of features with each other and with optimizations. Obviously, Robert was giving Ada as an example of a language that *does* precisely give the answers to these questions, unlike GNU C. Another interesting datapoint is clearly what the current C and C++ standards say on the matter. However, despite the legacy code issue, I wouldn't put too much import on the relatively informal definition of "GNU C".
RE: Why doesn't combine like volatiles? (volatile_ok again, sorry!)
Mike Stump wrote: > On Nov 28, 2005, at 3:00 AM, Richard Earnshaw wrote: >> Possibly, but I think the more interesting observation is listed in >> parenthesis: Can a volatile access ever alias a non-volatile access? >> Logic would suggest that a program is unpredictable if written in such a >> way that permits such aliases to exist, since it would mean a location >> is being accessed in both non-volatile and volatile manner. > > I think this is uninteresting, as I'd presume that it is valid, and > the user really wanted to do it (the below is pseudo code, no, really): > > int i[4096]; > > void foo() { > i[0] = 0; > mmap (... &i[0], ...); > (*(volatile int *)&i[0]) = 1; > } > > Anyway, why it is uninteresting is, the normal rules say they alias, > so they alias. And for people that just want to quote the standard, > don't bother, let me: > > [#5] If an attempt is made to modify an object defined with > a const-qualified type through use of an lvalue with non- > const-qualified type, the behavior is undefined. If an > attempt is made to refer to an object defined with a > volatile-qualified type through use of an lvalue with non- > volatile-qualified type, the behavior is undefined.102) :) I seem to remember this case having come up in a very long thread earlier this year, when we were discussing whether what was relevant was the actual type and quals of the object being accessed, or the type and quals of the decl/pointer through which it was being accessed. And all of this is quite a way off from where I came in. We're worrying about moving accesses past each other, and aliasing, and so on. All I ever wanted to do was combine a *single* volatile access with an immediately-consecutive ALU operation on the result of that volatile access. BTW, I never did manage to find the patches you referred to in your postings from summer 2000. 
Googling for "mike stump volatile_ok" just kept on finding me the post where you were advising someone to find your patches by searching for your name and volatile_ok. Kinda recursive, that. Do you still have a pointer to them? cheers, DaveK -- Can't think of a witty .sigline today
Re: Why doesn't combine like volatiles? (volatile_ok again, sorry!)
On Nov 28, 2005, at 9:18 AM, Andrew Pinski wrote: While it is true that GCC is not just an Ada compiler but I think we should follow a sane set of rules for GNU C which might mean following Ada's rules for this case. Because GNU C doesn't have rules carefully written to make this impossible, our rules are carefully written to make it possible. If you want to code in Ada, feel free.
Re: Why doesn't combine like volatiles? (volatile_ok again, sorry!)
> > On Nov 28, 2005, at 9:18 AM, Andrew Pinski wrote: > > While it is true that GCC is not just an Ada compiler but I think > > we should > > follow a sane set of rules for GNU C which might mean following > > Ada's rules > > for this case. > > Because GNU C doesn't have rules carefully written to make this > impossible, our rules are carefully written to make it possible. If > you want to code in Ada, feel free. Huh? They are not carefully written at all. This is why I asked what GNU C is. Again, the language is not written down, so it could mean anything. If people want something defined the way they like it, please write some documentation that way and post a patch. I don't much care which way it goes, except that right now we can do anything, because it is not well written down (in fact it is undocumented). -- Pinski
Re: Why doesn't combine like volatiles? (volatile_ok again, sorry!)
On Nov 28, 2005, at 9:33 AM, Richard Kenner wrote: And where is the standard for the language known as "GNU C"? You can obtain the ISO definition for C from ISO: [#7] An object shall have its stored value accessed only by an lvalue expression that has one of the following types:61) -- a type compatible with the effective type of the object, -- a qualified version of a type compatible with the effective type of the object, 61) The intent of this list is to specify those circumstances in which an object may or may not be aliased.
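A concrete instance of the second bullet (variable names are mine): an int object may be accessed through a volatile-qualified lvalue, since volatile int is a qualified version of its effective type. The undefined direction quoted earlier in the thread is the opposite one, a volatile-defined object read through a non-volatile lvalue:

```cpp
#include <cassert>

int plain;  // object defined without volatile

// Accessing 'plain' through a volatile int lvalue is permitted by the
// "qualified version of a compatible type" bullet quoted above.
void store_volatile(int v) {
    *(volatile int *)&plain = v;
}
```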
Re: Why doesn't combine like volatiles? (volatile_ok again, sorry!)
On Nov 28, 2005, at 9:41 AM, Andrew Pinski wrote: Huh? They are not carefully written at all. This is why I asked what is GNU C? Again, the language is not written down, so it could mean anything. So then clearly, since it means anything, we can change gcc to accept Pascal instead of C? Right? This is absurd. If people want something defined the way they like it, please write some documentation that way and post a patch. I don't much care which way it goes, except that right now we can do anything because it is not well written down (it is undocumented). I disagree. For example, there is behavior mandated by the Standard for C, such as this, that, reasonably, I think we have to follow. You can argue that we don't have to follow the standard but I'm not just going to listen to you.
Re: Why doesn't combine like volatiles? (volatile_ok again, sorry!)
On Mon, Nov 28, 2005 at 09:53:31AM -0800, Mike Stump wrote: > On Nov 28, 2005, at 9:41 AM, Andrew Pinski wrote: > >Huh? they are not carefully written at all. This is why I said what > >is GNU C? Again the language is not written out so it means anything. > > So then clearly, since it means anything, we can change gcc to accept > pascal instead of C? Right? This is absurd. Mike, you wrote "GNU C", not "ISO C". There's no spec for the former.
Re: Why doesn't combine like volatiles? (volatile_ok again, sorry!)
On Nov 28, 2005, at 9:29 AM, Dave Korn wrote: BTW, I never did manage to find the patches you referred to in your postings from summer 2000. Googling for "mike stump volatile_ok" just kept on finding me the post where you were advising someone to find your patches by searching for your name and volatile_ok. Kinda recursive, that do you still have a pointer to them? As background for others, http://gcc.gnu.org/ml/gcc-bugs/1999-12n/msg00801.html has an example of why one would want to do this. Anyway, not rocket science: Doing diffs in .: --- ./recog.c.~1~ 2005-10-28 10:40:18.0 -0700 +++ ./recog.c 2005-11-28 10:10:31.0 -0800 @@ -956,9 +956,6 @@ general_operand (rtx op, enum machine_mo { rtx y = XEXP (op, 0); - if (! volatile_ok && MEM_VOLATILE_P (op)) - return 0; - /* Use the mem's mode, since it will be reloaded thus. */ if (memory_address_p (GET_MODE (op), y)) return 1; -- Good for testing, but a real patch would have to: http://gcc.gnu.org/ml/gcc/2001-11/msg00398.html
Re: Why doesn't combine like volatiles? (volatile_ok again, sorry!)
On Nov 28, 2005, at 10:08 AM, Joe Buck wrote: So then clearly, since it means anything, we can change gcc to accept pascal instead of C? Right? This is absurd. Mike, you wrote "GNU C", not "ISO C". There's no spec for the former. He said we can do anything; this is untrue. I rail against the casual use of the word because it misleads others into believing it, and then proposing patches that do anything they want, and yet make gcc worse. I realize that we have no documentation person who writes down everything religiously, and that from time to time, people who should write documentation don't. I'm just as guilty (or more so) as everyone else. Realize, some consider that just a simple documentation bug, not an opportunity to go messing with a fine compiler.
Warning bug with -fPIC? (was Re: Some testsuite cleanups (mostly for -fPIC))
All, This is from an email trail on gcc-patches. I was attempting to clean up differences in the test suite between -fPIC and no -fPIC tests. * gcc.dg/assign-warn-3.c: Ditto. Why in the world do you imagine this should depend on -fpic? Here's the case that passes (no -fPIC): Executing on host: /u2/gcc/head/osr5/gcc/xgcc -B/u2/gcc/head/osr5/gcc/ /u2/gcc/head/gcc/gcc/testsuite/gcc.dg/assign-warn-3.c -O3 -std=c99 -pedantic-errors -fno-show-column -S -o assign-warn-3.s(timeout = 300) /u2/gcc/head/gcc/gcc/testsuite/gcc.dg/assign-warn-3.c: In function 'g1': /u2/gcc/head/gcc/gcc/testsuite/gcc.dg/assign-warn-3.c:13: warning: pointer targets in passing argument 1 of 'f1' differ in signedness /u2/gcc/head/gcc/gcc/testsuite/gcc.dg/assign-warn-3.c: In function 'g0': /u2/gcc/head/gcc/gcc/testsuite/gcc.dg/assign-warn-3.c:9: warning: pointer targets in passing argument 1 of 'f0' differ in signedness output is: /u2/gcc/head/gcc/gcc/testsuite/gcc.dg/assign-warn-3.c: In function 'g1': /u2/gcc/head/gcc/gcc/testsuite/gcc.dg/assign-warn-3.c:13: warning: pointer targets in passing argument 1 of 'f1' differ in signedness /u2/gcc/head/gcc/gcc/testsuite/gcc.dg/assign-warn-3.c: In function 'g0': /u2/gcc/head/gcc/gcc/testsuite/gcc.dg/assign-warn-3.c:9: warning: pointer targets in passing argument 1 of 'f0' differ in signedness PASS: gcc.dg/assign-warn-3.c (test for warnings, line 9) PASS: gcc.dg/assign-warn-3.c (test for warnings, line 13) PASS: gcc.dg/assign-warn-3.c (test for excess errors) And here is the case that fails (-fPIC). I have no idea why those warnings are not being ejected when compiling with -fPIC. Perhaps I discovered a bug here by accident? I guess I should have looked at the test case more carefully instead of just trying to silence the failure. 
Executing on host: /u2/gcc/head/osr5/gcc/xgcc -B/u2/gcc/head/osr5/gcc/ /u2/gcc/head/gcc/gcc/testsuite/gcc.dg/assign-warn-3.c -O3 -std=c99 -pedantic-errors -fno-show-column -S -fPIC -o assign-warn-3.s (timeout = 300) FAIL: gcc.dg/assign-warn-3.c (test for warnings, line 9) FAIL: gcc.dg/assign-warn-3.c (test for warnings, line 13) PASS: gcc.dg/assign-warn-3.c (test for excess errors) Is this indeed a bug? Kean
Re: Why doesn't combine like volatiles? (volatile_ok again, sorry!)
He said we can do anything, this is untrue. I rail against the casual use of the word because it misleads others into believing it, and then proposing patches that do anything they want, and yet make gcc worse. I realize there that we have no documentation person that writes down everything religiously and that from time to time, people that should write documentation, don't. I'm just as guilty (or more so) as everyone else. Realize, some consider that just a simple documentation bug, not an opportunity to go messing with a fine compiler. It's not that simple and I suspect you know it. The things that aren't written down are the "contract" between the programmer and the compiler about what the latter is guaranteeing to the former. It's nice to say that a compiler will never make a change that breaks *any* program, but that's impossible. Even the slightest change can "break" a program that relies on an uninitialized variable, for example. Obviously, we understand that's permissible because such a program is "not correct" or "erroneous". But the problem in extending that doctrine is that we have to know which programs are erroneous, and when there's no precise documentation of the language in which they're written (GNU C), there's no way to say with certainty whether a program is or not. So there's no way to know whether the set of programs that a given change will break consists of just erroneous programs. One can take the very conservative approach of avoiding that problem by not making *any* changes that could conceivably break a program, but that would mean making no progress on improving a "fine compiler".
Re: GCC-3.4.5 Release Status
Mark Mitchell <[EMAIL PROTECTED]> writes: | Gabriel Dos Reis wrote: | | > Mark, RTH, could you provide hints? | | I don't have any ideas, just from looking at the problem. It could be a | stack allocation problem, where we assign two things the same stack | slot, and get confused. Thanks! -- Gaby
Re: Why doesn't combine like volatiles? (volatile_ok again, sorry!)
Mike Stump wrote: gcc is not just an Ada compiler. Clearly, the answer has to be yes to support GNU C. Right, I agree. I was answering whether this can ever be done legitimately, and the answer is really no: it is undefined in C, and if you manage to do it in Ada (which you can, if you really try, by unchecked conversion of pointer types etc.) then it is erroneous there too. So I don't see that the compiler has to allow for this in any explicit manner, given that its effect is undefined in C.
Re: Why doesn't combine like volatiles? (volatile_ok again, sorry!)
Mike Stump wrote: I disagree. For example, there is behavior mandated by the Standard for C, such as this, that, reasonably, I think we have to follow. You can argue that we don't have to follow the standard but I'm not just going to listen to you. Hmm, I guess I misread the standard, I thought it said the effect was undefined.
Re: GCC-3.4.5 Release Status
Gabriel Dos Reis <[EMAIL PROTECTED]> writes: | I'm running the pre-releasing script, so a new prerelease tarball will be | available today. The tarballs are available for download and testing here: ftp://gcc.gnu.org/pub/gcc/prerelease-3.4.5-20051128/ -- Gaby
Re: Why doesn't combine like volatiles? (volatile_ok again, sorry!)
On Nov 28, 2005, at 11:05 AM, Robert Dewar wrote: Right, I agree, I was answering whether this can ever be done legitimately, and the answer is really no, it is undefined in C It is not.
Re: Why doesn't combine like volatiles? (volatile_ok again, sorry!)
On Nov 28, 2005, at 11:12 AM, Robert Dewar wrote: Mike Stump wrote: I disagree. For example, there is behavior mandated by the Standard for C, such as this, that, reasonably, I think we have to follow. You can argue that we don't have to follow the standard but I'm not just going to listen to you. Hmm, I guess I misread the standard, I thought it said the effect was undefined. Only in one direction does the standard make it undefined, as I quoted. I know why they do this, and I am arguing that that latitude should not be used to try and `optimize' things to make them behave differently (such as calling abort for example) in the presence of volatile.
Re: Why doesn't combine like volatiles? (volatile_ok again, sorry!)
Mike Stump wrote: Only in one direction does the standard make it undefined, as I quoted. I know why they do this, and I am arguing that that latitude should not be used to try and `optimize' things to make them behave differently (such as calling abort for example) in the presence of volatile. There is no point in deliberately creating bad behavior, but on the other hand, there is no basis for suppressing a generally useful optimization to guarantee someone's idea of a definition of undefined. I do agree that if a) everyone agrees on what the "sensible" definition is b) the optimization is not valuable then it is better to behave as expected.
Re: Why doesn't combine like volatiles? (volatile_ok again, sorry!)
Robert Dewar <[EMAIL PROTECTED]> writes: [...] | I do agree that if | | a) everyone agrees on what the "sensible" definition is We do have standard-defined behaviour. | b) the optimization is not valuable for those people who don't care about the standard semantics, there is always an option to provide a compiler switch. -- Gaby
Re: Why doesn't combine like volatiles? (volatile_ok again, sorry!)
Gabriel Dos Reis wrote: | I do agree that if | | a) everyone agrees on what the "sensible" definition is We do have standard-defined behaviour. In one case, and of course we must adhere to this, but not in the other case. | b) the optimization is not valuable for those people who don't care about the standard semantics, there is always an option to provide a compiler switch. I am talking only about the undefined case.
Re: Why doesn't combine like volatiles? (volatile_ok again, sorry!)
On Nov 28, 2005, at 10:55 AM, Richard Kenner wrote: It's not that simple and I suspect you know it. Yes, this is all fine and very well, but do you realize that Andrew wanted to break gcc behavior as mandated by the ISO standard? This is very, very simple. The answer is no. I'm not budging on this, really.
Re: LLVM/GCC Integration Proposal
> "Chris" == Chris Lattner <[EMAIL PROTECTED]> writes: >> Only the Ada frontend seems to be in a state to maybe support direct >> frontend IR to LLVM translation. Chris> Sure, also maybe Fortran? FWIW gcjx was designed to make this easy to do. And just yesterday a volunteer started working on a gcjx/llvm bridge. Tom
Re: Why doesn't combine like volatiles? (volatile_ok again, sorry!)
> > On Nov 28, 2005, at 10:55 AM, Richard Kenner wrote: > > It's not that simple and I suspect you know it. > > Yes, this is all fine and very well, but do you realize that Andrew > wanted to break gcc behavior as mandated by the ISO standard? This > is very, very simple. The answer is no. I'm not budging on this, > really. Nowhere was I saying we should break the ISO standard. I was saying that GNU C needs to be better defined, and making it clearer that GNU C is a superset of ISO C is the best way of doing that. Making what volatile means in GNU C clearer is no different from saying in the documentation what volatile means in the ISO C standard. -- Pinski
Re: Warning bug with -fPIC? (was Re: Some testsuite cleanups (mostly for -fPIC))
On Nov 28, 2005, at 10:42 AM, Kean Johnston wrote: * gcc.dg/assign-warn-3.c: Ditto. Why in the world do you imagine this should depend on -fpic? And here is the case that fails (-fPIC). I have no idea why those warnings are not being emitted when compiling with -fPIC. Perhaps I discovered a bug here by accident? I guess I should have looked at the test case more carefully instead of just trying to silence the failure. Executing on host: /u2/gcc/head/osr5/gcc/xgcc -B/u2/gcc/head/osr5/gcc/ /u2/gcc/head/gcc/gcc/testsuite/gcc.dg/assign-warn-3.c -O3 -std=c99 -pedantic-errors -fno-show-column -S -fPIC -o assign-warn-3.s (timeout = 300) FAIL: gcc.dg/assign-warn-3.c (test for warnings, line 9) FAIL: gcc.dg/assign-warn-3.c (test for warnings, line 13) PASS: gcc.dg/assign-warn-3.c (test for excess errors) Is this indeed a bug? Sounds like a bug.
Re: LLVM/GCC Integration Proposal
> "Chris" == Chris Lattner <[EMAIL PROTECTED]> writes: Chris> In this role, it provides a static optimizer, interprocedural link- Chris> time optimizer, JIT support, and several other features. I'm quite interested in the JIT aspect of LLVM, for gcj. This would fill one of our major missing gaps. However, it seems to me that for this to work well in practice, it would mean that the code generators would have to be under a "runtime friendly" license -- that is, GPL would probably not be appropriate. This leads into somewhat ugly territory ... either requiring a "pure LLVM" back end for JIT support, or relicensing RTL back ends to something else. Chris> The IR supports several features that are useful to various Chris> communities, including true tail calls, accurate garbage collection, Chris> etc. Us gcj hackers would also like accurate GC support. Java is fairly dynamic, as I'm sure you know. So, I'm much more interested in the JITting possibilities than in link time optimizations; the latter is cool and probably useful to embedded users of gcj, but I'd imagine for all our recent binary compatibility deployments we would just end up ignoring it. Tom
Re: LLVM/GCC Integration Proposal
Tom Tromey writes: > > Java is fairly dynamic, as I'm sure you know. So, I'm much more > interested in the JITting possibilities than in link time > optimizations; the latter is cool and probably useful to embedded > users of gcj, but I'd imagine for all our recent binary compatibility > deployments we would just end up ignoring it. Kinda. Much runtime is spent inside the language core of libgcj, and that would benefit greatly from aggressive IPA. Andrew.
Re: Why doesn't combine like volatiles? (volatile_ok again, sorry!)
On Mon, 2005-11-28 at 14:10 +0100, Paolo Bonzini wrote:
> Then, I don't know if it would be legal to optimize
>
>    struct r {
>      unsigned int x : 7;
>      volatile unsigned int y : 1;
>    };
>
>    struct r my_reg;
>
> So that my_reg.x is accessed with a non-volatile mem, and my_reg.y is
> accessed with a volatile one. Would such an optimization be possible
> within the Ada compile-time rules?

procedure P is
   type T7 is mod 2**7;
   type T1 is mod 2;
   pragma Volatile (T1);
   type R is record
      X : T7;
      Y : T1;
   end record;
   for R'Size use 8;
   for R'Alignment use 1;
   for R use record
      X at 0 range 0 .. 6;
      Y at 0 range 7 .. 7;
   end record;
   Z : R;
   A : T7;
   B : T1;
begin
   Z.X := 127;
   Z.Y := 1;
   A := Z.X;
   B := Z.Y;
end P;

trunk gcc -O2 -S p.adb on x86-linux gives:

_ada_p:
        pushl   %ebp
        movl    %esp, %ebp
        subl    $16, %esp
        movb    $1, -1(%ebp)
        leave
        ret

My understanding is that the memory read implied by B := Z.Y should not be optimized away, since it is a volatile read and thus an external effect, so this looks like a bug to me. (If this is not supported, GNAT should fail at compile time.) The Z.X write and read are correctly optimized away: non-volatile reads and writes have no external effects here. Laurent
Re: Why doesn't combine like volatiles? (volatile_ok again, sorry!)
On Nov 28, 2005, at 12:40 PM, Andrew Pinski wrote: I was, there was no where in I was saying we should break ISO standard The effect of following Ada's rules: While it is true that GCC is not just an Ada compiler but I think we should follow a sane set of rules for GNU C which might mean following Ada's rules for this case. would depart from ISO mandated behavior.
Re: matching function for out_waiting in gcc 3.4.2
Hi, this question is not about development of GCC so should be moved to either [EMAIL PROTECTED] or [EMAIL PROTECTED] - I suggest the former. Please send any other replies to the libstdc++ list, thanks. On Mon, Nov 28, 2005 at 02:03:21PM +0530, [EMAIL PROTECTED] wrote: > I have moved to gcc version 3.4.2(linux sll) So I am migrating a > component to this version from gcc 2.96. > > In my existing code I am using the *out_waiting* function of the struct > streambuf present in the streambuf.h file. > > But I can't find this function in this version of gcc 3.4.2. so can u > help me in finding a matching function in this version. The function was available on "classic" iostreams, but is not included in the std::streambuf class from the C++ standard. Because libstdc++ aims to be standard-conforming, it does not provide the classic iostreams API. There is no direct equivalent of out_waiting for std::streambuf. If you need to know the number of characters in the put area you could define your own derived streambuf and implement out_waiting in terms of the protected member functions std::streambuf::pptr() and std::streambuf::pbase(), and use that where necessary. I hope that helps a little, jon
Re: Why doesn't combine like volatiles? (volatile_ok again, sorry!)
> > On Nov 28, 2005, at 12:40 PM, Andrew Pinski wrote: > > I was, there was no where in I was saying we should break ISO standard > > The effect of following Ada's rules: > > > While it is true that GCC is not just an Ada compiler but I think > > we should > > follow a sane set of rules for GNU C which might mean following > > Ada's rules > > for this case. > > would depart from ISO mandated behavior. I said might; I never said we will follow the Ada rules. Please read my email as requesting more documentation on this matter rather than requesting that we change to the wrong behavior. And what the sane rules are depends on what people think is sane. -- Pinski
Re: Why doesn't combine like volatiles? (volatile_ok again, sorry!)
Laurent GUERBY wrote: On Mon, 2005-11-28 at 14:10 +0100, Paolo Bonzini wrote: Then, I don't know if it would be legal to optimize struct r { unsigned int x : 7; volatile unsigned int y : 1; }; struct r my_reg; So that my_reg.x is accessed with a non-volatile mem, and my_reg.y is accessed with a volatile one. Would such an optimization be possible within the Ada compile-time rules? There is no analogous declaration possible in Ada.
Re: Wiki pages on tests cases
On Mon, Nov 28, 2005 at 12:45:41AM -0800, Jim Blandy wrote: > On 11/27/05, Jonathan Wakely <[EMAIL PROTECTED]> wrote: > > Yes, I know it's a wiki and I can do this myself, but I only have so > > much spare time and maybe the second one was added for a good reason. > > http://en.wikipedia.org/wiki/Be_bold > > Works for them. Are you volunteering to do it then? :) I'd already proof-read and corrected two pages before I spotted the duplicate test-writing ones, and am spending what little spare time I have extending the libstdc++ docs, which for my purposes are more important. I wasn't just demanding others do the work while I twiddle my thumbs! jon
Re: GCC-3.4.5 Release Status
On Mon, Nov 28, 2005 at 09:20:21PM +0100, Gabriel Dos Reis wrote: > Gabriel Dos Reis <[EMAIL PROTECTED]> writes: > > | I'm running the pre-releasing script, so a new prerelease tarball will be > | available today. > > The tarballs are available for download and testing here: > >ftp://gcc.gnu.org/pub/gcc/prerelease-3.4.5-20051128/ Tests (including Ada tests this time) for i686-pc-linux-gnu on RHEL 3.0 are at http://gcc.gnu.org/ml/gcc-testresults/2005-11/msg01334.html They are essentially perfect (only the one known failure). I also ran tests on an x86_64-unknown-linux-gnu RHEL 3.0 box (no Ada). Here we see a number of failures: http://gcc.gnu.org/ml/gcc-testresults/2005-11/msg01333.html Almost all the C failures are in tests that disable themselves if NO_TRAMPOLINES is defined, so I'm assuming trampolines don't work, perhaps because of some kind of stack protection on the box I'm using. But this is similar to what I've seen before. I don't know anything about the libffi test failures. I also am running tests on an ia64 box running RHEL AW2.1 (I know, old stuff, but I don't control that box) with binutils 2.14. 
Tests aren't done yet (so I can't yet point to the gcc-testresults URL), but there are the following gcc failures:

FAIL: gcc.dg/compat/scalar-by-value-3 c_compat_x_tst.o compile
FAIL: gcc.dg/compat/scalar-return-3 c_compat_x_tst.o compile
FAIL: gcc.dg/compat/struct-by-value-18 c_compat_x_tst.o compile
FAIL: gcc.dg/compat/struct-by-value-7a c_compat_x_tst.o-c_compat_y_tst.o execute
FAIL: gcc.dg/compat/struct-by-value-7b c_compat_x_tst.o-c_compat_y_tst.o execute
FAIL: gcc.dg/20021014-1.c (test for excess errors)
FAIL: gcc.dg/cleanup-10.c execution test
FAIL: gcc.dg/cleanup-11.c execution test
FAIL: gcc.dg/cleanup-8.c execution test
FAIL: gcc.dg/cleanup-9.c execution test
FAIL: gcc.dg/nest.c (test for excess errors)
FAIL: gcc.dg/special/gcsec-1.c (test for excess errors)

and one g++ failure:

FAIL: g++.old-deja/g++.law/profile1.C (test for excess errors)

The libstdc++ failures below are presumably due to a really old glibc:

FAIL: 22_locale/codecvt/encoding/wchar_t/wrapped_locale.cc execution test
FAIL: 22_locale/codecvt/in/wchar_t/9.cc execution test
FAIL: 22_locale/codecvt/max_length/wchar_t/wrapped_locale.cc execution test
FAIL: 22_locale/collate/transform/char/2.cc execution test
FAIL: 22_locale/collate/transform/char/3.cc execution test
FAIL: 22_locale/collate/transform/char/wrapped_env.cc execution test
FAIL: 22_locale/collate/transform/char/wrapped_locale.cc execution test
FAIL: 22_locale/collate/transform/wchar_t/2.cc execution test
FAIL: 22_locale/collate/transform/wchar_t/3.cc execution test
FAIL: 22_locale/collate/transform/wchar_t/wrapped_env.cc execution test
FAIL: 22_locale/collate/transform/wchar_t/wrapped_locale.cc execution test
FAIL: 22_locale/collate_byname/named_equivalence.cc execution test
FAIL: 22_locale/ctype/widen/wchar_t/2.cc execution test
FAIL: 22_locale/locale/cons/12438.cc execution test
XPASS: 22_locale/locale/cons/12658_thread.cc execution test
FAIL: 22_locale/money_put/put/char/9780-3.cc execution test
XPASS: 26_numerics/c99_classification_macros_c.cc (test for excess errors)
FAIL: 27_io/basic_filebuf/imbue/wchar_t/12868.cc execution test
FAIL: 27_io/basic_filebuf/overflow/wchar_t/11305-1.cc execution test
FAIL: 27_io/basic_filebuf/overflow/wchar_t/11305-2.cc execution test
FAIL: 27_io/basic_filebuf/overflow/wchar_t/11305-3.cc execution test
FAIL: 27_io/basic_filebuf/overflow/wchar_t/11305-4.cc execution test
FAIL: 27_io/basic_filebuf/underflow/wchar_t/9520.cc execution test
XPASS: 27_io/fpos/14320-1.cc execution test

and there are 45 libffi failures.
Re: GCC-3.4.5 Release Status
Joe Buck <[EMAIL PROTECTED]> writes: [...] | Tests (including Ada tests this time) for i686-pc-linux-gnu on RHEL 3.0 | are at | | http://gcc.gnu.org/ml/gcc-testresults/2005-11/msg01334.html | | They are essentially perfect (only the one known failure). | | I also ran tests on an x86_64-unknown-linux-gnu RHEL 3.0 box (no Ada). Here | we see a number of failures: | | http://gcc.gnu.org/ml/gcc-testresults/2005-11/msg01333.html Thanks for the report! | Almost all the C failures are in tests that disable themselves if | NO_TRAMPOLINES is defined, so I'm assuming trampolines don't work, | perhaps because of some kind of stack protection on the box I'm using. | But this is similar to what I've seen before. | | I don't know anything about the libffi test failures. | | I also am running tests on an ia64 box running RHEL AW2.1 (I know, old | stuff, but I don't control that box) with binutils 2.14. Tests aren't | done yet (so I can't yet point to the gcc-testresults URL), but there are | the following gcc failures: I would need an ia64 maintainer to comment on this. From http://gcc.gnu.org/ml/gcc-testresults/2005-11/ it looks to me like the failures are almost identical (some are even UNRESOLVED). -- Gaby
Performance regression testing?
We're collectively putting a lot of energy into performance improvements in GCC. Sometimes, a performance gain from one patch gets undone by another patch -- which is itself often doing something else beneficial. People have mentioned to me that we require people to run regression tests for correctness, but that we don't really have anything equivalent for performance. Clearly, performance testing is harder than correctness testing; correctness is binary, while performance is a continuum. Machine load affects performance numbers. It's reasonable to strive for no correctness regressions, but introducing new optimizations is often (always?) going to cause some code to perform worse. If an optimization was unsafe, then correctness concerns may require that we generate inferior code. So, it's a hard problem. The basic question I'm asking myself is: "Is there some pre check-in testing we could do that would help make sure we're not backsliding?" My goal is to make us aware of performance, without imposing anything too burdensome. As a strawman, perhaps we could add a small integer program (bzip?) and a small floating-point program to the testsuite, and have DejaGNU print out the number of iterations of each that run in 10 seconds. The results would show up on gcc-testresults automatically, and if we were really eager, we could post the results along with our testing. We wouldn't have to treat inferior numbers as regressions in the same way that we treat ordinary test failures, but maybe something like this would help us to keep our eye on the ball. Again, that's a strawman. I'm just looking for suggestions about what we might do -- or even feedback that there's no need to do anything. Thanks, -- Mark Mitchell CodeSourcery, LLC [EMAIL PROTECTED] (916) 791-8304
Re: Java on uClinux
Andrew Haley wrote: Bernd Schmidt writes: > Hmm, we can trap null pointer accesses, but I don't think we deliver > reliable SIGSEGV signals yet or provide a means of getting the faulting > address. If that was fixed, is there anything obvious that stands in > the way of a uClinux/uClibc port? I don't think so. The only other dependency we have is POSIX threads. Ok, thanks. Assuming I get around to porting them, where do I submit libffi/boehm_gc changes? Is there an upstream version to worry about? Bernd
Re: Java on uClinux
> > Andrew Haley wrote: > > Bernd Schmidt writes: > > > > Hmm, we can trap null pointer accesses, but I don't think we deliver > > > reliable SIGSEGV signals yet or provide a means of getting the faulting > > > address. If that was fixed, is there anything obvious that stands in > > > the way of a uClinux/uClibc port? > > > > I don't think so. The only other dependency we have is POSIX threads. > > Ok, thanks. Assuming I get around to porting them, where do I submit > libffi/boehm_gc changes? Is there an upstream version to worry about? There is an upstream for boehm_gc (Boehm himself). For libffi, unofficially GCC is the maintainer. -- Pinski
Re: __thread and builtin memcpy() bug
Frank Cusack wrote: See <http://bugzilla.redhat.com/bugzilla> for instructions. The bug is not reproducible, so it is likely a hardware or OS problem. - the bug is quite reproducible, why does gcc say otherwise? This is due to a patch that Red Hat has added to the FSF gcc sources. When a Red Hat gcc release gets an ICE, it tries re-executing the command, to see if it fails again. If it doesn't, then that usually indicates a hardware problem. In this case, it may mean a bug in the Red Hat patch, which is something you would have to report to Red Hat. We don't support Red Hat gcc releases here, only FSF ones. As for the underlying bug, the ICE, I can reproduce it with FSF gcc-3.4.x sources. It would be OK to report that bug in the FSF gcc bugzilla database. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: m68k exception handling
Kövesdi György wrote: I built an environment for my 68020 board using gcc-4.0.2 and newlib-1.13.0. Everything seems good, but the exception handling is not working. Getting EH to work for a newlib-using target board may be complicated. How EH works depends on whether you are using DWARF2 unwind info, or builtin setjmp and longjmp. The builtin setjmp and longjmp approach is easier to get working, but has a higher run-time overhead when no exceptions are thrown. In this scheme, we effectively execute a builtin setjmp every time you enter an EH region, and a throw is just a builtin longjmp call. This should work correctly if builtin_setjmp and builtin_longjmp are working correctly. See the docs for these builtin functions. The DWARF2 unwind info method has little or no overhead until an exception is thrown. This is the preferred method for most targets. In this scheme, we read the DWARF2 unwind info from the executable when an exception is thrown, parse the unwind tables, and then follow the directions encoded in the unwind tables until we reach a catch handler. This approach has obvious problems if you are using a disk-less, OS-less target board. This approach also generally requires some C library support, which is present in glibc, but may not be present in newlib. You can find info on this approach here http://gcc.gnu.org/ml/gcc/2004-03/msg01779.html -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Performance regression testing?
On Mon, Nov 28, 2005 at 04:38:58PM -0800, Mark Mitchell wrote: > Clearly, performance testing is harder than correctness testing; > correctness is binary, while performance is a continuum. Machine load > affects performance numbers. It's reasonable to strive for no > correctness regressions, but introducing new optimizations is often > (always?) going to cause some code to perform worse. If an optimization > was unsafe, then correctness concerns may require that we generate > inferior code. So, it's a hard problem. It would be possible to detect performance regressions after the fact, but soon enough to look at reverting patches. For example, given multiple machines doing SPEC benchmark runs every night, the alarm could be raised if a significant performance regression is detected. To guard against noise from machine hiccups, two different machines would have to report a regression to raise the alarm. But the big problem is the non-freeness of SPEC; ideally there would be a benchmark that ... ... everyone can download and run ... is reasonably fast ... is non-trivial > As a strawman, perhaps we could add a small integer program (bzip?) and > a small floating-point program to the testsuite, and have DejaGNU print > out the number of iterations of each that run in 10 seconds. Would that really catch much?
Re: Performance regression testing?
Joe Buck wrote: > It would be possible to detect performance regression after fact, but > soon enough to look at reverting patches. For example, given multiple > machines doing SPEC benchmark runs every night, the alarm could be raised > if a significant performance regression is detected. Right; I think we do some of that at present. I was hoping that having it there when people did test runs would change the psychology; instead of having already checked in a patch, which we're then looking to revert, we'd be making ourselves aware of performance impact before check-in, even for patches that we don't expect to have performance impact. (For major new optimizations, we already expect people to do some benchmarking.) But, yes, this is a definite alternative: we could further automate the SPEC testers, or try to set up more of them. >>As a strawman, perhaps we could add a small integer program (bzip?) and >>a small floating-point program to the testsuite, and have DejaGNU print >>out the number of iterations of each that run in 10 seconds. > > Would that really catch much? I really don't know. That's why it's a strawman. :-) -- Mark Mitchell CodeSourcery, LLC [EMAIL PROTECTED] (916) 791-8304
Re: Performance regression testing?
On Nov 28, 2005, at 4:38 PM, Mark Mitchell wrote: we require people to run regression tests for correctness, but that we don't really have anything equivalent for performance. My feeling is that we should have such a suite. I'd favor a micro style, where we measure clock cycles (on machines that can expose them, x86/v9), and then re-run the suite (from the driver) enough times to be sure we are near the minimum within some probability. The minimum then becomes the result. The idea is that this would be a regression suite. There is the other side of the coin, and that is macro testing, but generally speaking, we don't usually put macro tests in the testsuite (gcc/testsuite). I think it would be good to collect such a suite and put it somewhere (ftp). There are such suites available; BYTEmark (http://www.byte.com/bmark/bmark.htm) is just a random one that comes to mind. Internally here, we have something called skidmarks that is a fast-running suite of inner loops cut out of the heart of programs. If people start taking up a collection for such a suite, we might be able to donate it (or parts of it). I'd rather have someone running a performance regression tester often and not require people to run the suite themselves (well, unless they are doing something like new ra, or tree-ssa). A fast regression tester seems like it should be able to produce a new turn of the crank every 30 minutes (excluding ada/java/fortran), just doing an update, incremental make and install, then running the suite. This should be plenty enough resolution to spot the guilty parties. The largest complication would seem to be finding a person to actually collect a suite that amuses them, and care for and feed a machine to do the running of the suite.
Re: Performance regression testing?
> I was hoping that having it there when people did test runs would change > the psychology; instead of having already checked in a patch, which > we're then looking to revert, we'd be making ourselves aware of > performance impact before check-in, even for patches that we don't > expect to have performance impact. (For major new optimizations, we > already expect people to do some benchmarking.) I'd be surprised if you could get meaningful performance numbers on anything other than a dedicated performance testing machine. There are simply too many external factors on a typical development machine[*]. I'm not saying we shouldn't try to do some sort of performance testing. Just that even if we find a reasonable benchmark, adding it to "make check" probably isn't going to give much useful data. Paul [*] For example: any other kind of activity on the same machine, power management changing CPU frequency, which CPUs in an SMP machine have local memory, which of N different machines it was run on.
Torbjorn's ieeelib.c
Back in 1999, Torbjorn Granlund posted: http://gcc.gnu.org/ml/gcc/1999-07n/msg00553.html That message contains an IEEE floating-point emulation library, like fp-bit.c. However, the performance is considerably better; Joseph measured against fp-bit.c with a modern compiler, and ieeelib.c is about 10-15% better than the current code on EEMBC on a PowerPC 440. So, we're considering doing what it takes to get ieeelib.c into GCC, or, perhaps, borrowing some of its ideas for fp-bit.c. In his original message, Torbjorn indicated that Swox AB (the company of which he is CEO) donated the code, and the old copyright file had an entry for Torbjorn, though not Swox AB. I've contacted Torbjorn, and he'd still like to see ieeelib.c in GCC. Is there any copyright issue that would prevent inclusion of this code? Assuming all the technical bits were in order, would we need to get the FSF involved in any way before including the code? Thanks, -- Mark Mitchell CodeSourcery, LLC [EMAIL PROTECTED] (916) 791-8304
Re: Performance regression testing?
On Tue, Nov 29, 2005 at 02:03:46AM +, Paul Brook wrote: > > I was hoping that having it there when people did test runs would change > > the psychology; instead of having already checked in a patch, which > > we're then looking to revert, we'd be making ourselves aware of > > performance impact before check-in, even for patches that we don't > > expect to have performance impact. (For major new optimizations, we > > already expect people to do some benchmarking.) > > I'd be surprised if you could get meaningful performance numbers on anything > other than an dedicated performance testing machine. There are simply too > many external factors on a typical development machine[*]. > > I'm not saying we shouldn't try to do some sort of performance testing. Just > that even if we find a reasonable benchmark then adding it to "make check" > probably isn't going to give much useful data. I think the only _feasible_ way to do this would be with cycle counting i.e. simulators, and the _usefulness_ of the available simulators for performance on today's hardware is probably too limited. -- Daniel Jacobowitz CodeSourcery, LLC
Re: Torbjorn's ieeelib.c
On Mon, Nov 28, 2005 at 06:05:34PM -0800, Mark Mitchell wrote: > Back in 1999, Torbjorn Granlund posted: > > http://gcc.gnu.org/ml/gcc/1999-07n/msg00553.html > > That message contains an IEEE floating-point emulation library, like > fp-bit.c. Howeve, the performance is considerably better; Joseph > measured against fp-bit.c with a modern compiler, and ieeelib.c is about > 10-15% better than the current code on EEMBC on a PowerPC 440. > > So, we're considering doing what it takes to get ieeelib.c into GCC, or, > perhaps, borrowing some of its ideas for fp-bit.c. > > In his original message, Torbjorn indicated that Swox AB (the company of > which he is CEO) donated the code, and the old copyright file had an > entry for Torbjorn, though not Swox AB. I've contacted Torbjorn, and > he'd still like to see ieeelib.c in GCC. Is there any copyright issue > that would prevent inclusion of this code? Assuming all the technical > bits were in order, would we need to get the FSF involved in any way > before including the code? Well, the problem is that you're raising a legal technicality, and legal technicalities are up to the FSF. Maybe they'll have no problem, especially if Swox AB basically is Torbjorn. If there is a problem, and Torbjorn is still CEO of Swox AB, it should be no problem (other than the delay) to do new paperwork, and maybe RMS would be OK with it going in now as long as we're sure that everything will be done by release time. But I do think we have to ask.
Re: Torbjorn's ieeelib.c
Joe Buck wrote: > Well, the problem is that you're raising a legal technicality, and legal > technicalities are up to the FSF. Maybe they'll have no problem, > especially if Swox AB basically is Torbjorn. If there is a problem, and > Torbjorn is still CEO of Swox AB, it should be no problem (other than the > delay) to do new paperwork, and maybe RMS would be OK with it going in > now as long as we're sure that everything will be done by release time. > > But I do think we have to ask. Shucks. Should I ask [EMAIL PROTECTED] or RMS directly or ...? -- Mark Mitchell CodeSourcery, LLC [EMAIL PROTECTED] (916) 791-8304
Re: Torbjorn's ieeelib.c
> Mark Mitchell writes: Mark> In his original message, Torbjorn indicated that Swox AB (the company of Mark> which he is CEO) donated the code, and the old copyright file had an Mark> entry for Torbjorn, though not Swox AB. I've contacted Torbjorn, and Mark> he'd still like to see ieeelib.c in GCC. Is there any copyright issue Mark> that would prevent inclusion of this code? Assuming all the technical Mark> bits were in order, would we need to get the FSF involved in any way Mark> before including the code? Swox AB does have a copyright assignment on file, so GCC is free to use ieeelib.c. David
Re: Performance regression testing?
Again, that's a strawman. I'm just looking for suggestions about what we might do -- or even feedback that there's no need to do anything. This isn't really suitable for an automated tester, but what I used to do was keep around some .i files of some version of some compiler files (I think it was reload.i, combine.i, and loop.i) along with .s files for some architectures, and every so often make new .s files and compare them to see if something got better or worse. Of course, with modern machines, manually trying to decide if assembler code is better or worse isn't too practical, but it did detect if some optimizations (e.g., strength reduction) got turned off. It also forced something that all too few people do: look at generated code on a routine basis with an eye to performance. As I said, I don't see a way to automate this, but even now feel that this sort of thing would have some value.
Re: m68k exception handling
On Mon, 28 Nov 2005, Jim Wilson wrote: > The DWARF2 unwind info method has little or no overhead until a > exception is thrown. This is the preferred method for most targets. In > this scheme, we read the DWARF2 unwind info from the executable when an > exception is throw, parse the unwind tables, and then follow the > directions encoded in the unwind tables until we reach a catch handler. > This approach has obvious problems if you are using a disk-less > OS-less target board. This approach also generally requires some C > library support, which is present in glibc, but may not be present in > newlib. You can find info on this approach here > http://gcc.gnu.org/ml/gcc/2004-03/msg01779.html No, everything necessary support-wise is in gcc libraries, no special stuff from newlib is needed. Make sure to use the right gcc-provided start-files, though: besides the usual crt0.o (spelling varies), crti.o and crtn.o; gcc adds crtbegin.o and crtend.o. (You don't really read exception tables "manually" from the executable at exception time; it's linked in. You don't do that for the normal bunch of "hosted" systems either FWIW. It may be different for IA64.) brgds, H-P
Re: Torbjorn's ieeelib.c
David Edelsohn wrote: > Swox AB does have a copyright assignment on file, so GCC is free > to use ieeelib.c. Great. Thanks for double-checking! -- Mark Mitchell CodeSourcery, LLC [EMAIL PROTECTED] (916) 791-8304
port of the LLVM patch to the trunk
I had some problems when trying to use the apple branch on a gnu/linux/x86. Because of this I am trying to port the patch to the trunk. With some help from Chris I am now able to build xgcc, but a type check fails when it is run: ../../trunk-llvm/gcc/crtstuff.c:186: internal compiler error: tree check: expected tree_list, have integer_cst in ConvertArrayCONSTRUCTOR, at llvm-convert.cpp:2132 The current patch can be downloaded from: http://www.las.ic.unicamp.br/~espindola/gcc-llvm-trunk-107525.patch.bz2 Rafael
Re: Performance regression testing?
On Nov 28, 2005, at 6:21 PM, Hans-Peter Nilsson wrote: I've attached the work-in-progress so I don't have to get into detail about what it does :-) except noting that you'll see in gcc.sum something like: PASS: csibe -O1 runtime zlib-1.1.4:minigzip not slower than best PASS: csibe -O1 runtime zlib-1.1.4:minigzip not more than .1% slower than best PASS: csibe -O1 runtime zlib-1.1.4:minigzip not more than 1% slower than best PASS: csibe -O1 runtime zlib-1.1.4:minigzip not more than 10% slower than best Hum, I'd prefer that the output format be: PERF: %f name of test then have an analysis package crunch that into the above format. This way, one can _compare_ two arbitrary runs, which is a useful property. The number can be thought of as time, such as the number of clock cycles, but we only really care that it starts at zero and gets bigger as things go bad. An aside database would contain standard deviations, variances and so on, if one wanted to fold that into the comparisons.
Re: GCC-3.4.5 Release Status
Gabriel Dos Reis wrote: At the moment, we have only one bug I consider release critical for 3.4.5. middle-end/24804 Produces wrong code I put an analysis in the PR. It is a gcse store motion problem. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: unable to find a register to spill in class
Nemanja Popov wrote: .../../gcc/libgcc2.c:535: error: unable to find a register to spill in class GR_REGS Reload is one of the hardest parts to get right in an initial port. You will probably have to spend some time learning the basics of what reload does. There are many different things that could be wrong here; it is hard to provide help without some additional info about the port. The place to start is in the greg dump file. There should be a section near the top that says "Reloads for insn # 6". This will be followed by a description of the reloads that were generated for this insn. Also of interest here is the movsi_general pattern, particularly its constraints, and the GR_REGS class: how many registers are in it, how many are allocatable, etc. You may need to set breakpoints in reload to debug this. Put a breakpoint in find_reloads() conditional on the insn uid if the problem appears to be there. Also, check find_regs and figure out why it failed. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
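Jim's conditional-breakpoint suggestion might look like this in a debugger session. Everything here is a sketch: the insn uid 6 comes from his example, and `insn->u.fld[0].rt_int` is only one possible spelling of the uid field, which differs between GCC versions, so check rtl.h in your tree (or build cc1 with -g3 so the INSN_UID macro is usable directly in gdb):

```shell
$ gdb ./cc1
(gdb) break find_reloads
(gdb) run testcase.i -O2              # reproduce the failing compile
(gdb) call debug_rtx (insn)           # dump the insn; its uid is printed
(gdb) condition 1 insn->u.fld[0].rt_int == 6   # stop only on insn 6
(gdb) continue
```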
Re: Warning bug with -fPIC? (was Re: Some testsuite cleanups (mostly for -fPIC))
Is this indeed a bug? Sounds like a bug. I just found something in the bug database relating to this: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19232 According to Andrew (#3) it doesn't eject a warning because the function isn't inlined. I'm not sure that's a valid reason for not ejecting the warning. Just because the function can be overridden in another TU doesn't mean that the one that is currently declared, and called with signature differences, shouldn't eject a warning. Kean
Re: GCC-3.4.5 Release Status
Jim Wilson <[EMAIL PROTECTED]> writes: | Gabriel Dos Reis wrote: | > At the moment, we have only one bug I consider release critical for 3.4.5. | > middle-end/24804 Produces wrong code | | I put an analysis in the PR. It is a gcse store motion problem. Woaw! Thanks for the detailed analysis. It is indeed interesting that the bug is hidden in 4.0 and higher due to the current infrastructure. -- Gaby
Re: GCC-3.4.5 Release Status
Gabriel Dos Reis wrote: I would need an ia64 maintainer to comment on this. There isn't enough ia64 maintainer bandwidth to provide detailed comments on testsuite results on old machines with old tools versions. Basically, it is only me, and I'm also trying to do a hundred other things in my free time, most of which aren't getting done. So I'm not going to care unless something important is broken, like a gcc bootstrap, glibc build, or linux kernel build. I wouldn't worry unless the results are worse than gcc-3.4.0 on the same machine. I have no such results to compare against, and can't generate them. I can find gcc-3.4.0 results for a SuSE system on the mailing list, but I'm not sure if that tells me anything useful. Some of these probably need an updated binutils to fix. Some of these are problems that weren't fixed until just before the gcc-4.0 release. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Java on uClinux
> There is an upstream for boehm-gc (Boehm himself). Yes, but you usually can modify the local copy and simply CC Hans. -- Eric Botcazou
Re: Performance regression testing?
On Mon, 28 Nov 2005, Mike Stump wrote: > On Nov 28, 2005, at 6:21 PM, Hans-Peter Nilsson wrote: > > I've attached the work-in-progress so I don't have to get into > > detail about what it does :-) except noting that you'll see in > > gcc.sum something like: > > > > PASS: csibe -O1 runtime zlib-1.1.4:minigzip not slower than best > > PASS: csibe -O1 runtime zlib-1.1.4:minigzip not more than .1% > > slower than best > > PASS: csibe -O1 runtime zlib-1.1.4:minigzip not more than 1% slower > > than best > > PASS: csibe -O1 runtime zlib-1.1.4:minigzip not more than 10% > > slower than best > > Hum, I'd prefer that the output format be: > > PERF: %f name of test You seem to be interpreting the gcc.sum format, thinking it's the raw "baseline" format. Which for the record is like: ... runtime,-O1,zlib-1.1.4:minigzip,previous 0.32 runtime,-O1,bzip2-1.0.2:bzip2.d,previous 0.32 runtime,-O1,bzip2-1.0.2:bzip2recover,previous 0.19 ... That was the "native" x86_64 output. Here's for cris-linux+cris-sim: .. runtime,-O1,zlib-1.1.4:minigzip,previous 1262089345.0 runtime,-O1,bzip2-1.0.2:bzip2.d,previous 945199067.0 runtime,-O1,bzip2-1.0.2:bzip2recover,previous 1555998754.0 Can't be compared with each other, if that's what you mean (how would that make sense?) but quite comparable to other baselines methinks. I refer to the implementation for further details. > then have an analysis package crunch that into the above format. ...and emitting PASS/FAIL after _comparing_ two or more arbitrary runs according to some criteria? Like above? ;-) > This way, one can _compare_ two arbitrary runs, which is a useful > property. The number can be thought of as time, such as the number > of clock cycles, but we only reeally care that it starts at zero, and > gets bigger as things go bad. You have to elaborate here. How does "biasing" the number of cycles to make it 0 help? Do you mean a deviation normalized between 0 and 1? How would you "reset" it? 
I think it'd just be confusing, and you'd have to expect and handle negative numbers. Better keep an explicit built-in baseline. Already implemented by looking in gcc.performance/csibe/baselines/$target_triplet and reading whatever file is there according to the format above. ...oops, a bug in the posted code; missing "$" on $subdir. And untested anyway. brgds, H-P
Re: port of the LLVM patch to the trunk
> I had some problems when trying to use the apple branch on a gnu/linux/x86. > Because of this I am trying to port the patch to the trunk. With some help > from Chris I am now able to build xgcc, but a type check fails when it is > run: > > ../../trunk-llvm/gcc/crtstuff.c:186: internal compiler error: tree check: > expected tree_list, have integer_cst in ConvertArrayCONSTRUCTOR, at > llvm-convert.cpp:2132 > > The current patch can be downloaded from: > > http://www.las.ic.unicamp.br/~espindola/gcc-llvm-trunk-107525.patch.bz2 Of course this patch does not include the new files so nobody can help you :). -- Pinski
Re: Performance regression testing?
On Mon, 28 Nov 2005, Hans-Peter Nilsson wrote: > I've attached the work-in-progress If someone's missing the trivial sim-main-glue.c, here it is, just for completeness. Not used for "native" testing. brgds, H-P

/* Glue for passing arguments to a simulator that can't pass
   command-line arguments to the target, yet can do file I/O.
   Copyright (c) 2005 Free Software Foundation, Inc.

   This file is free software; you can redistribute it and/or modify
   it under the terms of the GNU General Public License as published
   by the Free Software Foundation; either version 2 of the License,
   or (at your option) any later version.

   This program is distributed in the hope that it will be useful, but
   WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
   General Public License for more details.

   For a copy of the GNU General Public License, write to the Free
   Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
   MA 02110-1301, USA.  */

#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

const char _argfile[] = _ARGFILE;

/* We know we won't be passed more than this many/long parameters.  */
static char args[1024];
static char *argv[32];
static char *envp[] = {"", NULL};

#undef main
extern int _main_wrapped (int, char *[], char *[]);

int
main (void)
{
  char *argp;
  FILE *argfile = fopen (_argfile, "r");
  unsigned int i;

  if (argfile == NULL)
    abort ();

  for (argp = args, i = 0;; i++)
    {
      /* Need room for NULL too.  */
      if (i >= sizeof (argv) / sizeof (argv[0]) - 1)
        abort ();

      if (fgets (argp, sizeof (args) - (argp - args), argfile) == NULL)
        break;

      argv[i] = argp;
      argp += strlen (argp);
      argp[-1] = 0;
    }

  argv[i] = NULL;
  fclose (argfile);
  exit (_main_wrapped (i, argv, envp));
}
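A guess at how this glue is meant to be wired in, inferred from the source rather than from any documented usage (the -D flags and file names are assumptions): the test program's own main is renamed to _main_wrapped, the glue provides the real main, and the argument file supplies what the simulator cannot pass on a command line.

```shell
# Hypothetical build recipe inferred from the glue source above.
printf 'minigzip\n-1\n' > args.txt                # one argv entry per line
cc -Dmain=_main_wrapped -c testprog.c             # rename the real main
cc -D_ARGFILE='"args.txt"' -c sim-main-glue.c     # glue provides main
cc testprog.o sim-main-glue.o -o testprog
```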
Re: The actual LLVM integration patch
> > > I threw the current version of the patch up here: > http://nondot.org/sabre/llvm-gcc-4.0-patch.tar.gz A couple of comments. getIntegerType is really bad. It seems better to use the mode to detect the type. Also, mapping the 128-bit fp type to {double, double} seems wrong for almost all targets except for PPC and MIPS, which use almost the same 128-bit fp encoding. -- Pinski
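For context on the {double, double} mapping: PowerPC's 128-bit long double is the IBM "double-double" format, in which a value is held as the sum of two ordinary doubles. A minimal illustration of why the pair carries more precision than either half alone (Python used purely for the arithmetic demonstration):

```python
def split_double_double(x):
    """Split an exact integer into a (hi, lo) pair of doubles --
    the core idea behind the IBM double-double format."""
    hi = float(x)              # nearest double to x
    lo = float(x - int(hi))    # the part hi could not represent
    return hi, lo

x = 2**60 + 1                  # needs more than a double's 53 bits
hi, lo = split_double_double(x)
assert hi != x                 # one double loses the low bit...
assert int(hi) + int(lo) == x  # ...but the hi/lo pair keeps it
```

Targets whose 128-bit float is a genuine IEEE binary128 cannot be modeled this way, which is Pinski's point.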
Re: The actual LLVM integration patch
On Tue, 29 Nov 2005, Andrew Pinski wrote: I threw the current version of the patch up here: http://nondot.org/sabre/llvm-gcc-4.0-patch.tar.gz A couple of comments. getIntegerType is really bad. It seems better to use the mode to detect the type. Also, mapping the 128-bit fp type to {double, double} seems wrong for almost all targets except for PPC and MIPS, which use almost the same 128-bit fp encoding. Thanks for the feedback! I will address these in the patch I actually propose for submission. -Chris -- http://nondot.org/sabre/ http://llvm.org/