Bernd Schmidt <[EMAIL PROTECTED]> writes:
> Paul Eggert wrote:
>> But so far, benchmark scores are the only scores given by the people
>> who oppose having -O2 imply -fwrapv.
>
> And you expect real-world results will be different because...?
Because of the (admittedly limited) real-world measure
[EMAIL PROTECTED] (Richard Kenner) writes:
> > But didn't this thread get started by a real program that was broken
> > by an optimization of loop invariants? Certainly I got a real bug
> > report of a real problem, which you can see here:
> >
> > http://lists.gnu.org/archive/html/bug-gnulib/2006-12/msg00084.html
Daniel Berlin wrote:
> I generally have no problem with turning on -fwrapv at O2, but I'm
> curious where this ends.
> After all, strict aliasing makes it hard to write a bunch of styles of
> code people really want to write, and breaks real world programs and
> GNU software.
>
> Yet we decided t
> But didn't this thread get started by a real program that was broken
> by an optimization of loop invariants? Certainly I got a real bug
> report of a real problem, which you can see here:
>
> http://lists.gnu.org/archive/html/bug-gnulib/2006-12/msg00084.html
I just thought of something interesting
Paul Eggert <[EMAIL PROTECTED]> writes:
> Robert Dewar <[EMAIL PROTECTED]> writes:
>
> > We have not seen ONE imaginary example, let
> > alone a real example, where the optimization of loop invariants
> > (by far the most important optimization in the class we are
> > discussing) would break existing code.
> > But how would that happen here? If we constant-fold something that would
> > have overflowed by wrapping, we are ELIMINATING a signed overflow, not
> > INTRODUCING one. Or do I misunderstand what folding we're talking about
> > here?
>
> http://gcc.gnu.org/PR27116 is what led to the patch.
Paul Eggert <[EMAIL PROTECTED]> writes:
> "Daniel Berlin" <[EMAIL PROTECTED]> writes:
>
> >> http://www.suse.de/~gcctest/SPEC/CFP/sb-vangelis-head-64/recent.html
> >> and
> >> http://www.suse.de/~gcctest/SPEC/CINT/sb-vangelis-head-64/recent.html
> >>
> > Note the distinct drop in performance across almost all the benchmarks on Dec 30
[EMAIL PROTECTED] (Richard Kenner) writes:
> > > > Note that -fwrapv also _enables_ some transformations on signed
> > > > integers that are disabled otherwise. We for example constant fold
> > > > -CST for -fwrapv while we do not if signed overflow is undefined.
> > > > Would you change those?
>
> > > Note that -fwrapv also _enables_ some transformations on signed
> > > integers that are disabled otherwise. We for example constant fold
> > > -CST for -fwrapv while we do not if signed overflow is undefined.
> > > Would you change those?
> >
> > I don't understand the rationale for not wra
> http://gcc.gnu.org/ml/gcc/2006-12/msg00607.html
>
> If this doesn't count as "optimization of loop invariants"
> then what would count?
One where the induction variable was updated additively, not
multiplicatively. When we talk about normal loop optimizations,
that's what we mean. I agree tha
On 12/31/06, Daniel Berlin <[EMAIL PROTECTED]> wrote:
...
> I added -fwrapv to the Dec30 run of SPEC at
> http://www.suse.de/~gcctest/SPEC/CFP/sb-vangelis-head-64/recent.html
> and
> http://www.suse.de/~gcctest/SPEC/CINT/sb-vangelis-head-64/recent.html
Note the distinct drop in performance across almost all the benchmarks on Dec 30
> This isn't just about old code. If you're saying that old code with
> overflow checking can't be fixed (in a portable manner...), then new
> code will probably use the same tricks.
I said there's no "good" way, meaning as compact as the current tests. But
it's certainly easy to test for overflow.
[EMAIL PROTECTED] (Richard Kenner) writes:
> > Note that -fwrapv also _enables_ some transformations on signed
> > integers that are disabled otherwise. We for example constant fold
> > -CST for -fwrapv while we do not if signed overflow is undefined.
> > Would you change those?
>
> I don't unde
Robert Dewar <[EMAIL PROTECTED]> writes:
> We have not seen ONE imaginary example, let
> alone a real example, where the optimziation of loop invariants
> (by far the most important optimization in the class we are
> discussing) would break existing code.
But didn't this thread get started by a r
Paul Eggert wrote:
If memory serves K&Rv1 didn't talk about overflow, yes.
My K&R V1 says in Appendix A (C Reference Manual) Section 7:
...
The handling of overflow and divide check in expression evaluation is
machine-dependent. All existing implementations of C ignore integer
overflows;
"Daniel Berlin" <[EMAIL PROTECTED]> writes:
>> http://www.suse.de/~gcctest/SPEC/CFP/sb-vangelis-head-64/recent.html
>> and
>> http://www.suse.de/~gcctest/SPEC/CINT/sb-vangelis-head-64/recent.html
>>
> Note the distinct drop in performance across almost all the benchmarks
> on Dec 30, including pop
On 2006-12-31 11:37:21 -0500, Richard Kenner wrote:
> Certainly. Indeed I think that's the whole point of this thread: that if
> you want to catch ALL potential optimizations opportunities given to you
> by the standard, you must assume that signed overflows are undefined.
>
> However, what's bei
On 12/31/06, Bruce Korb <[EMAIL PROTECTED]> wrote:
Daniel Berlin wrote:
>> Admittedly it's only two small tests, and it's with 4.1.1. But that's
>> two more tests than the -fwrapv naysayers have done, on
>> bread-and-butter applications like coreutils or gzip or Emacs (or GCC
>> itself, for that matter).
On 12/31/06, Richard Guenther <[EMAIL PROTECTED]> wrote:
On 12/31/06, Daniel Berlin <[EMAIL PROTECTED]> wrote:
> On 12/31/06, Paul Eggert <[EMAIL PROTECTED]> wrote:
> > "Steven Bosscher" <[EMAIL PROTECTED]> writes:
> >
> > > On 12/31/06, Paul Eggert <[EMAIL PROTECTED]> wrote:
> > >> Also, as I understand it this change shouldn't affect gcc's SPEC benchmark scores, since they're typically done with -O3 or better.
> Are you volunteering to audit the present cases and argue whether they
> fall in the "traditional" cases?
I'm certainly willing to *help*, but I'm sure there will be some cases
that will require discussion to get a consensus.
> Note that -fwrapv also _enables_ some transformations on signed
> integers that are disabled otherwise.
On 12/31/06, Richard Kenner <[EMAIL PROTECTED]> wrote:
> I think this is a fragile and not very practical approach. How do
> you define these "traditional" cases?
You don't need to define the "cases" in advance. Rather, you look at
each place where you'd be making an optimization based on the
Bernd Schmidt <[EMAIL PROTECTED]> writes:
[...]
| >> You say you doubt it affects performance. Based on what? Facts
| >> please, not guesses and hand-waving...
| > The burden of proof ought to be on the guys proposing -O2
| > optimizations that break longstanding code, not on the skeptics.
|
On 31 December 2006 18:47, Paul Eggert wrote:
> "Daniel Berlin" <[EMAIL PROTECTED]> writes:
> The question is not whether GCC should support wrapv
> semantics; it already does, if you specify -fwrapv.
> The question is merely whether wrapv should be the default
> with optimization levels -O0 through -O2.
> I think this is a fragile and not very practical approach. How do
> you define these "traditional" cases?
You don't need to define the "cases" in advance. Rather, you look at
each place where you'd be making an optimization based on the non-existence
of overflow and use knowledge of the importance
[EMAIL PROTECTED] (Richard Kenner) writes:
> that one is a weak example because it's *much* more likely that
> the author of that code didn't even *think* about the INT_MIN case
I think this seriously underestimates the programming abilities of the
original Unix developers. But if that example d
>The question is merely whether wrapv should be the default
>with optimization levels -O0 through -O2.
Perhaps the question of where wrapv gets enabled, together with the
"middle ground" approach mentioned by Robert Dewar, could be put to the
GCC Steering Committee. (As was already proposed
On 12/31/06, Robert Dewar <[EMAIL PROTECTED]> wrote:
Paul Eggert wrote:
> The question is not whether GCC should support wrapv
> semantics; it already does, if you specify -fwrapv.
> The question is merely whether wrapv should be the default
> with optimization levels -O0 through -O2.
That oversimplifies, because it presents things as though
On Sun, 31 Dec 2006, Robert Dewar wrote:
> If you do it in signed expecting wrapping, then the optimization
> destroys your code. Yes, it is technically your fault, but this
> business of telling users
>
> "sorry, your code is non-standard, gcc won't handle it as you
> expect, go fix your code"
M
Bruce Korb wrote:
Changing that presumption without multiple years of -Wall warnings
is a Really, Really, Really Bad Idea.
I am still not ready to agree that this is a RRRBI for the case
of loop invariants. We have not seen ONE imaginary example, let
alone a real example, where the optimization of loop invariants
Paul Eggert wrote:
The question is not whether GCC should support wrapv
semantics; it already does, if you specify -fwrapv.
The question is merely whether wrapv should be the default
with optimization levels -O0 through -O2.
That oversimplifies, because it presents things as though
there are
Paul Eggert wrote:
But so far, benchmark scores are the only scores given by the people
who oppose having -O2 imply -fwrapv.
And you expect real-world results will be different because...?
You say you doubt it affects performance. Based on what? Facts
please, not guesses and hand-waving...
Paul Eggert <[EMAIL PROTECTED]> writes:
| > In that case, we should make the Autoconf change optional.
| > I'll propose a further patch along those lines.
|
| OK, here's that proposed patch to Autoconf. Also, this patch attempts
| to discuss the matter better in the documentation. The documentation
Daniel Berlin wrote:
>> Admittedly it's only two small tests, and it's with 4.1.1. But that's
>> two more tests than the -fwrapv naysayers have done, on
>> bread-and-butter applications like coreutils or gzip or Emacs (or GCC
>> itself, for that matter).
>
> These are not performance needing applications.
"Daniel Berlin" <[EMAIL PROTECTED]> writes:
> These are not performance needing applications.
No, I chose gzip -9 and sha512sum precisely because they are
CPU-bound (integer arithmetic only). On my platform, if the
input file is cached sha512sum is approximately 300 times
slower than 'cat', and
Robert Dewar <[EMAIL PROTECTED]> writes:
[...]
| In fact K&R is much stronger than you think in terms of providing
| a precise definition of the language. Too bad people did not read it.
|
| As I said earlier in this thread, people seem to think that the
| standards committee invented something
[EMAIL PROTECTED] (Richard Kenner) writes:
| > And the idea that people were not used to thinking seriously about
| > language semantics is very odd, this book was published in 1978,
| > ten years after the algol-68 report, a year after the fortran
| > 77 standard, long after the COBOL 74 standard
> Funny you should say that, because the Ada front-end likes to do this
> transformation, rather than leaving it to the back-end. For example:
>
> turns into
>
> if ((unsigned int) ((integer) x - 10) <= 10)
The front end isn't doing this: the routine "fold" in fold-const.c is.
True, it's bein
[EMAIL PROTECTED] (Richard Kenner) writes:
[...]
| > but from other evidence it's clear that common traditional practice assumes
| > wrapv semantics.
|
| "Common traditional C" was actually yet another language that was even more
| ill-defined because it included such things as structure assignm
Robert Dewar <[EMAIL PROTECTED]> writes:
| Gerald Pfeifer wrote:
| > On Sun, 31 Dec 2006, Robert Dewar wrote:
| >> If you do it in signed expecting wrapping, then the optimization
| >> destroys your code. Yes, it is technically your fault, but this
| >> business of telling users
| >>
| >> "sorry, your code is non-standard, gcc won't handle it as you
Duncan Sands wrote:
The C front-end performs this transformation too. I'm not claiming that the
back-end optimizers would actually do something sensible if the front-end
didn't transform this code (in fact they don't seem to), but since the
optimal way of doing the check presumably depends on
On Sunday 31 December 2006 16:19, Richard Kenner wrote:
> > If done in unsigned, this won't lead to any optimization, as unsigned
> > arithmetic doesn't have overflows. So, if you write "a - 10" where a
> > is unsigned, the compiler can't assume anything, whereas if a is
> > signed, the compiler ca
Richard Kenner wrote:
Essentially, there are three choices: with -fwrapv, you must preserve wrapping
semantics and do NONE of those optimizations; with -fno-wrapv, you can do ALL
of them; in the default case, a heuristic can be used that attempts to
balance optimization quality against breakage
> This won't break the code. But I'm saying that if the compiler assumes
> wrapping, even in some particular cases (e.g. code that *looks like*
> "overflow check"), it could miss some potential optimizations. That
> is, it is not possible to avoid breaking overflow checks *and*
> optimizing everyth
On 2006-12-31 11:01:45 -0500, Robert Dewar wrote:
> The issues are
>
> a) are these optimizations valuable? (and if so, in all cases,
>or only in practice for loop invariants?).
Even if they aren't valuable today, you don't know what will happen
in future code. So, there's another issue: is i
Vincent Lefevre wrote:
On 2006-12-31 10:08:32 -0500, Richard Kenner wrote:
Well, that's not equivalent. For instance, MPFR has many conditions
that evaluate to TRUE or FALSE on some/many implementations (mainly
because the type sizes depend on the implementation), even without
the assumption that an overflow cannot occur.
Gerald Pfeifer wrote:
On Sun, 31 Dec 2006, Robert Dewar wrote:
If you do it in signed expecting wrapping, then the optimization
destroys your code. Yes, it is technically your fault, but this
business of telling users
"sorry, your code is non-standard, gcc won't handle it as you
expect, go fix your code"
On 2006-12-31 10:19:59 -0500, Richard Kenner wrote:
> > If done in unsigned, this won't lead to any optimization, as unsigned
> > arithmetic doesn't have overflows. So, if you write "a - 10" where a
> > is unsigned, the compiler can't assume anything, whereas if a is
> > signed, the compiler can as
Vincent Lefevre wrote:
No, this isn't what I meant. The C standard doesn't assume wrapping,
so I don't either. If the compiler doesn't either, then it can do
some optimizations. Let's take a very simple example:
We perfectly understand that if the compiler does not assume
wrapping, but instead
On 2006-12-31 10:08:32 -0500, Richard Kenner wrote:
> > Well, that's not equivalent. For instance, MPFR has many conditions
> > that evaluate to TRUE or FALSE on some/many implementations (mainly
> > because the type sizes depend on the implementation), even without
> > the assumption that an overf
On 2006-12-31 10:07:44 -0500, Robert Dewar wrote:
> Vincent Lefevre wrote:
> >If done in unsigned, this won't lead to any optimization, as unsigned
> >arithmetic doesn't have overflows. So, if you write "a - 10" where a
> >is unsigned, the compiler can't assume anything, whereas if a is
> >signed,
> Well, that's not equivalent. For instance, MPFR has many conditions
> that evaluate to TRUE or FALSE on some/many implementations (mainly
> because the type sizes depend on the implementation), even without
> the assumption that an overflow cannot occur.
Can you give an example of such a condition
> As I said earlier in this thread, people seem to think that the
> standards committee invented something new here in making overflow
> undefined, but I don't think that's the case.
I agree with that too.
However, it is also the case that between K&Rv1 and the ANSI C standard,
there was a langu
> Doing that in unsigned arithmetic is much more readable anyway.
If you're concerned about readability, you leave it as the two tests and
let the compiler worry about the optimal way to implement it.
> So I doubt that programmers would do that in signed arithmetic.
I kind of doubt that as well.
> If done in unsigned, this won't lead to any optimization, as unsigned
> arithmetic doesn't have overflows. So, if you write "a - 10" where a
> is unsigned, the compiler can't assume anything, whereas if a is
> signed, the compiler can assume that a >= INT_MIN + 10, reducing
> the range for a, and
On 2006-12-31 09:22:22 -0500, Robert Dewar wrote:
> Vincent Lefevre wrote:
> >>My point was that if you see this in a source program, it is in
> >>fact a possible candidate for code that can be destroyed by
> >>the optimization.
> >
> >Well, only for non-portable code (i.e. code based on wrap). I
Vincent Lefevre wrote:
If done in unsigned, this won't lead to any optimization, as unsigned
arithmetic doesn't have overflows. So, if you write "a - 10" where a
is unsigned, the compiler can't assume anything, whereas if a is
signed, the compiler can assume that a >= INT_MIN + 10, reducing
the
On 2006-12-31 09:08:21 -0500, Richard Kenner wrote:
> > In fact the wrap around range test is a standard idiom for "hand
> > optimization" of range tests.
>
> It's also one that GCC uses internally, but you do it in *unsigned*
> to avoid the undefined overflow.
If done in unsigned, this won't lead to any optimization
On 12/31/06, Daniel Berlin <[EMAIL PROTECTED]> wrote:
On 12/31/06, Paul Eggert <[EMAIL PROTECTED]> wrote:
> "Steven Bosscher" <[EMAIL PROTECTED]> writes:
>
> > On 12/31/06, Paul Eggert <[EMAIL PROTECTED]> wrote:
> >> Also, as I understand it this change shouldn't affect gcc's
> >> SPEC benchmark scores, since they're typically done with -O3 or better.
On 12/31/06, Paul Eggert <[EMAIL PROTECTED]> wrote:
"Steven Bosscher" <[EMAIL PROTECTED]> writes:
> On 12/31/06, Paul Eggert <[EMAIL PROTECTED]> wrote:
>> Also, as I understand it this change shouldn't affect gcc's
>> SPEC benchmark scores, since they're typically done with -O3
>> or better.
>
>
On 2006-12-31 09:03:17 -0500, Richard Kenner wrote:
> > And I doubt that GCC (or any compiler) could reliably detect code
> > that checks for overflow.
>
> It doesn't need to "detect" all such code: all it needs to do is
> ensure that it doesn't BREAK such code. And that's a far easier
> condition
> I would think that it would be a common consensus position that whatever
> the outcome of this debate, the result must be that the language we are
> supposed to write in is well defined.
Right. But "supposed to write in" is not the same as "what we do to avoid
breaking legacy code"!
I see it a
Richard Kenner wrote:
And the idea that people were not used to thinking seriously about
language semantics is very odd, this book was published in 1978,
ten years after the algol-68 report, a year after the fortran
77 standard, long after the COBOL 74 standard, and a year before the
PL/1 standa
Vincent Lefevre wrote:
My point was that if you see this in a source program, it is in
fact a possible candidate for code that can be destroyed by
the optimization.
Well, only for non-portable code (i.e. code based on wrap). I also
suppose that this kind of code is used only to check for overflow
On 2006-12-31 08:06:56 -0500, Robert Dewar wrote:
> Vincent Lefevre wrote:
> >On 2006-12-30 20:07:09 -0500, Robert Dewar wrote:
> >> In my view, this comparison optimization should not have been put in
> >> without justification given that it clearly does affect the semantics
> >> of real code. Ind
> In fact the wrap around range test is a standard idiom for "hand
> optimization" of range tests.
It's also one that GCC uses internally, but you do it in *unsigned* to
avoid the undefined overflow.
Richard Kenner wrote:
On the other hand, C does not have a way to tell the compiler:
"this is my loop variable, it must not be modified inside the loop"
neither you can say:
"this is the upper bound of the loop, it must not be modified"
either.
No, but the compiler can almost a
On 2006-12-31 08:25:23 -0500, Richard Kenner wrote:
> > > > I suppose there is
> > > >
> > > > *hv = (HOST_WIDE_INT) -(unsigned HOST_WIDE_INT) h1;
> > > >
> > > > to make it safe.
> > >
> > > Can't that conversion overflow?
> >
> > Not on a two's complement machine,
>
> Then I'm confused
> And the idea that people were not used to thinking seriously about
> language semantics is very odd, this book was published in 1978,
> ten years after the algol-68 report, a year after the fortran
> 77 standard, long after the COBOL 74 standard, and a year before the
> PL/1 standard. It's not t
> And I doubt that GCC (or any compiler) could reliably detect code
> that checks for overflow.
It doesn't need to "detect" all such code: all it needs to do is
ensure that it doesn't BREAK such code. And that's a far easier
condition: you just have to avoid folding a condition into TRUE or FALSE
Richard Kenner wrote:
The burden of proof ought to be on the guys proposing -O2
optimizations that break longstanding code, not on the skeptics.
There's also a burden of proof that proposed optimizations will actually
"break longstanding code". So far, all of the examples of code shown
that as
> The burden of proof ought to be on the guys proposing -O2
> optimizations that break longstanding code, not on the skeptics.
There's also a burden of proof that proposed optimizations will actually
"break longstanding code". So far, all of the examples of code shown
that assumes wrapv semantics
> On the other hand, C does not have a way to tell the compiler:
>
> "this is my loop variable, it must not be modified inside the loop"
>
> neither you can say:
>
> "this is the upper bound of the loop, it must not be modified"
>
> either.
No, but the compiler can almost always trivially
On 2006-12-30 19:29:58 -0500, Robert Dewar wrote:
> Richard Kenner wrote:
> >I can't speak for any other GCC developer, but I personally am
> >quite comfortable viewing any code that assumes wrapping semantics
> >as broken and needing fixing with the exception of these cases of
> >checking for over
On 12/31/06, Duncan Sands <[EMAIL PROTECTED]> wrote:
> > for this specific function (vrp_int_const_binop), I'm issuing a
> > warning inside the else-if branch that tests for the overflowed
> > result. I'm unclear why that is a false positive since the result is
> > known to overflow. Could you elaborate?
On 2006-12-30 20:07:09 -0500, Robert Dewar wrote:
> In my view, this comparison optimization should not have been put in
> without justification given that it clearly does affect the semantics
> of real code. Indeed if you really see code like
>
>if (a - 10 < 20)
>
> in place of
>
>i
Hello,
> >I have been looking into infer_loop_bounds_from_signedness() called
> >from infer_loop_bounds_from_undefined().
> >At some places, nowrap_type_p() is used but this function operates
> >only on types, so there will be too many false positives there; yet we
> >will miss warning through that
On 31 Dec 2006 00:40:39 +0100, Gabriel Dos Reis
<[EMAIL PROTECTED]> wrote:
"Richard Guenther" <[EMAIL PROTECTED]> writes:
| On 31 Dec 2006 00:10:23 +0100, Gabriel Dos Reis
| <[EMAIL PROTECTED]> wrote:
| > "Richard Guenther" <[EMAIL PROTECTED]> writes:
| >
| > | On 30 Dec 2006 23:55:46 +0100, Gabriel Dos Reis
Robert Dewar wrote:
Andrew Pinski wrote:
-fwrapv-in-all-cases-except-loop-bounds
Again, please don't make this the default for Fortran, as integer
overflow has been undefined since at least 1977, so I don't think
it is a good idea for GCC in general anyway, as evidenced by Fortran.
-- Pinski
Well
Vincent Lefevre wrote:
On 2006-12-30 20:07:09 -0500, Robert Dewar wrote:
In my view, this comparison optimization should not have been put in
without justification given that it clearly does affect the semantics
of real code. Indeed if you really see code like
if (a - 10 < 20)
in place
> > for this specific function (vrp_int_const_binop), I'm issuing a
> > warning inside the else-if branch that tests for the overflowed
> > result. I'm unclear why that is a false positive since the result is
> > known to overflow. Could you elaborate?
>
> Well, we use that function to do arithmetic
On 12/31/06, Paul Eggert <[EMAIL PROTECTED]> wrote:
Also, as I understand it this change shouldn't affect gcc's
SPEC benchmark scores, since they're typically done with -O3
or better.
It's not all about benchmark scores. I think most users compile at
-O2 and they also won't understand why they
> | though I vaguely
> | recall some complaints that you couldn't build v7 Unix if your compiler
> | generated integer overflow traps.
>
> this matches what I've been told recently by some people who worked at bell
> labs, in the unix room.
I have
Richard Kenner wrote:
Wait, though: K&Rv2 is post-C89.
Not completely: it's published in 1988, but the cover says "based on
draft-proposed ANSI C".
Naturally K&Rv2 documents this, but if you want to know about
traditional practice the relevant wording should come from K&Rv1,
not v2.
I don't
> > > I suppose there is
> > >
> > > *hv = (HOST_WIDE_INT) -(unsigned HOST_WIDE_INT) h1;
> > >
> > > to make it safe.
> >
> > Can't that conversion overflow?
>
> Not on a two's complement machine,
Then I'm confused about C's arithmetic rules. Suppose h1 is 1. It's cast
to unsigned, so
> Wait, though: K&Rv2 is post-C89.
Not completely: it's published in 1988, but the cover says "based on
draft-proposed ANSI C".
> Naturally K&Rv2 documents this, but if you want to know about
> traditional practice the relevant wording should come from K&Rv1,
> not v2.
>
> I don't know what K&R
"Richard Guenther" <[EMAIL PROTECTED]> writes:
[...]
| Yes, I have some patches in the queue to clean this up (and add some
| more stuff to VRP).
Great!
-- Gaby
On 31 Dec 2006 12:42:57 +0100, Gabriel Dos Reis
<[EMAIL PROTECTED]> wrote:
"Richard Guenther" <[EMAIL PROTECTED]> writes:
| On 12/31/06, Richard Kenner <[EMAIL PROTECTED]> wrote:
| > > What would you suggest this function to do, based on your comments?
| >
| > I'm not familiar enough with VRP to
"Richard Guenther" <[EMAIL PROTECTED]> writes:
| On 12/31/06, Richard Kenner <[EMAIL PROTECTED]> wrote:
| > > What would you suggest this function to do, based on your comments?
| >
| > I'm not familiar enough with VRP to answer at that level, but at a higher
| > level, what I'd advocate is that t
On 12/31/06, Richard Kenner <[EMAIL PROTECTED]> wrote:
> What would you suggest this function to do, based on your comments?
I'm not familiar enough with VRP to answer at that level, but at a higher
level, what I'd advocate is that the *generator* of information would track
things both ways, ass
Bernd Schmidt <[EMAIL PROTECTED]> writes:
> I must admit I don't know what an integer with padding bits would look
> like. Can someone check what the C standard has to say about the bit-not
> operator?
All arithmetic operations operate only on the value bits and ignore the
padding bits.
Andrea
"Steven Bosscher" <[EMAIL PROTECTED]> writes:
> On 12/31/06, Paul Eggert <[EMAIL PROTECTED]> wrote:
>> Also, as I understand it this change shouldn't affect gcc's
>> SPEC benchmark scores, since they're typically done with -O3
>> or better.
>
> It's not all about benchmark scores.
But so far, benchmark scores are the only scores given by the people who oppose having -O2 imply -fwrapv.
Joe Buck <[EMAIL PROTECTED]> writes:
[...]
| though I vaguely
| recall some complaints that you couldn't build v7 Unix if your compiler
| generated integer overflow traps.
this matches what I've been told recently by some people who worked at bell
Paul Eggert <[EMAIL PROTECTED]> writes:
| [EMAIL PROTECTED] (Richard Kenner) writes:
|
| > I found my copy of K&R (Second Edition)
|
| Robert Dewar <[EMAIL PROTECTED]> writes:
|
| > so in fact the new C standard has changed
| > nothing from a definitional point of view,
|
| Wait, though: K&Rv2 is post-C89.
Ian Lance Taylor <[EMAIL PROTECTED]> writes:
| Gabriel Dos Reis <[EMAIL PROTECTED]> writes:
|
| > for this specific function (vrp_int_const_binop), I'm issuing a
| > warning inside the else-if branch that tests for the overflowed
| > result. I'm unclear why that is a false positive since the re