On Thu, Apr 27, 2023 at 07:18:52AM +0000, Richard Biener wrote:
> Humm.  Is it worth the trouble?  I think if we make use of this it needs

I think so.  Without that, frange is half blind; using at least the most
common libm functions in floating point code is extremely common, and
without knowing anything about what those functions can or can't return,
frange will be mostly VARYING.  And simply assuming all libm
implementations are perfectly 0.5ulp precise for all inputs would be very
risky when we know that is clearly not the case.

Of course, by improving frange further, we run more and more into the
already reported (and so far only slightly worked around) bug where
statements that generate floating point exceptions are optimized away,
and we need to decide what to do for that case.

> to be with -funsafe-math-optimizations (or a new switch?).  I'll note

Why?  If we know or can reasonably assume that some function is always
precise at the boundary values (say sqrt always in [-0.,+Inf] U NAN,
sin/cos always in [-1.,1.] U NAN, etc.), then it isn't an unsafe math
optimization to assume that is the case.  If we know it is a few ulps
away from that, we can just widen the range; if we don't know anything
or the function implementation is uselessly buggy, we can punt.
Whether something is a known math library function or just some floating
point arithmetic which we already handle in 13.1 shouldn't make much
difference.
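
To make the ulp widening concrete, here is a minimal C sketch of the idea
(not the frange API itself; the 2 ulps of slack below is just a made-up
number for illustration):

  #include <math.h>

  /* Widen [*lo, *hi] outward by ULPS units in the last place, to cover
     a libm implementation that can be a few ulps off at the boundaries.  */
  static void
  widen_bounds (double *lo, double *hi, int ulps)
  {
    for (int i = 0; i < ulps; i++)
      {
        *lo = nextafter (*lo, -HUGE_VAL);
        *hi = nextafter (*hi, HUGE_VAL);
      }
  }

  /* For a sin/cos-like function whose exact range is [-1., 1.]:
       double lo = -1., hi = 1.;
       widen_bounds (&lo, &hi, 2);
     lo and hi are now slightly below -1. and above 1.; NAN still has to
     be tracked separately in the range.  */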

> Should we, when simplifying say
> 
>   x = sin (y);
>   if (x <= 1.)
> 
> simplify it to
> 
>   x = sin (y);
>   x = min (x, 1.);
> 
> for extra safety?

Why?  If we don't know anything about y, x could be NAN, so we can't fold
it, but if we know it will not be NAN, it is always true and we are back
to the exceptions case (plus errno, but that makes the function
non-const, doesn't it?).
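
Just to spell out the NAN part: any ordered comparison with NAN is false,
so x <= 1. cannot be folded to true unless x is known not to be NAN.
A tiny C illustration (NAN here just stands in for a possible sin result):

  #include <math.h>
  #include <stdio.h>

  int
  main (void)
  {
    double x = NAN;             /* a possible result of sin (y) */
    printf ("%d\n", x <= 1.);   /* prints 0: the comparison is false */
    printf ("%d\n", !(x > 1.)); /* prints 1: the negated form is true */
    return 0;
  }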

> That said - what kind of code do we expect to optimize when producing
> ranges for math function [operands]?  Isn't it just defensive programming
> that we'd "undo"?  Are there any missed-optimization PRs around this?

I strongly doubt real-world code has such defensive programming checks.
The intent isn't to optimize those away, but to generally propagate range
information, such that we, say, know that the sqrt result isn't negative
(except for possible -0. or negative NaN), that when you add
sin(x)^2+cos(y)^2 it will never be > 2., etc.
It can then e.g. help with expansion of other possibly error-generating
functions, e.g. where cdce transforms library function calls into an
inline fast hw instruction path vs. a slow libm call for the error cases;
if we can prove those error cases will never happen or will always
happen, we can create smaller/faster code.
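
As a concrete (if artificial) C example of the kind of propagation meant
here: with sin/cos results known to be in [-1., 1.] (or NAN), frange can
see that the guard below can never be taken for non-NAN inputs, and it is
the same kind of knowledge that would let a cdce-style expansion drop the
slow libm path when the error cases are provably dead:

  #include <math.h>

  double
  f (double x, double y)
  {
    double s = sin (x), c = cos (y);
    double t = s * s + c * c;   /* in [0., 2.] unless s or c is NAN */
    if (t > 2.)                 /* never true for non-NAN s and c */
      return -1.;
    return t;
  }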

        Jakub
