On 2006-12-31 09:22:22 -0500, Robert Dewar wrote:
> Vincent Lefevre wrote:
> >>My point was that if you see this in a source program, it is in
> >>fact a possible candidate for code that can be destroyed by
> >>the optimization.
> >
> >Well, only for non-portable code (i.e. code based on wrap). I also
> >suppose that this kind of code is used only to check for overflows.
>
> No, you suppose wrong, this is an idiom for a normal range test. If
> you have
>
>   if (a > x && a < y)
>
> you can often replace this by a single test with a wrapping subtraction.
> As Richard said, you should do this unsigned, but I can easily imagine
> those who are accustomed to signed arithmetic wrapping not bothering
Doing that in unsigned arithmetic is much more readable anyway. So, I
doubt that programmers would do that in signed arithmetic. Or do you
have any real example?

> >>And that's the trouble, this is an optimization which does improve
> >>performance, but may destroy existing code, and the very example
> >>you gave to talk about improved performance is also a nice case
> >>of showing why it may destroy performance. In fact the wrap
> >>around range test is a standard idiom for "hand optimization"
> >>of range tests.
> >
> >Yes, and the lack of optimization would be even worse.
>
> Well that's a claim without substantiation. No one has any data that
> I know of that shows that the optimization of comparisons like this
> is important. Sure you can concoct an example, but that says nothing
> about real world code.

I meant that the lack of optimization would be even worse in the above
case only. The point is that one can write "a - 10" in order to say
that a >= INT_MIN + 10 and thereby allow some optimizations. But if,
instead of performing an optimization, the compiler really generates
a - 10 because of wrap, then writing "a - 10" is worse than writing
the code without trying to reduce the range: instead of faster code,
one gets slower code. I'm not saying that this kind of code is common,
but it is quite bad to penalize standard-conforming code (which tries
to give information to the compiler so that it can run faster) just
because of existing non-conforming code.

> This is a long thread, but it is on an important subject. I find
> that compiler writers (and hardware manufacturers too) tend to
> be far too focused on performance, when what is important to the
> majority of users is reliability.

IMHO, GCC should first focus on the reliability of code based on the
standard. If reliability is important, why not assume -ffloat-store by
default when need be?
Without it, GCC cannot be a conforming compiler on some platforms
(even with it, it is not fully conforming, but it is much closer).

--
Vincent Lefèvre <[EMAIL PROTECTED]> - Web: <http://www.vinc17.org/>
100% accessible validated (X)HTML - Blog: <http://www.vinc17.org/blog/>
Work: CR INRIA - computer arithmetic / Arenaire project (LIP, ENS-Lyon)