On Fri, Dec 11, 2020 at 03:54:44PM +0800, Xionghu Luo wrote:
> +cc.
> 
> 
> On 2020/12/11 14:25, Xionghu Luo via Gcc wrote:
> >Thanks,
> >
> >On 2020/12/10 17:12, Richard Biener wrote:
> >>>2) From PR90070:
> >>>
> >>>    double temp1 = (double)r->red;
> >>>    double temp2 = (double)aggregate.red;
> >>>    double temp3 = temp2 + (temp1 * 5.0);
> >>temp1 * 5 could be not representable in float but the
> >>result of the add could so the transform could result
> >>in -+Inf where the original computation was fine (but
> >>still very large result).

Since both "red"s here are unsigned 16-bit integers, the result fits in
an unsigned 19-bit integer, so both SP float and integer calculations
would work just fine here.

> >>Usually in such cases one could say we should implement some
> >>diagnostic hints to the user that he might consider refactoring
> >>his code to use float computations because we cannot really say
> >>whether it's safe (we do at the moment not implement value-range
> >>propagation for floating point types).
> >>
> >
> >    foo (double x, float y, float z)
> >   {
> >       return ( fabs (x) * y - z ) ;
> >   }

Since x is double, you have to do this all in double precision, even with
-ffast-math!  The subtraction could do catastrophic cancellation.

> >But the add/sub could also produces INF similarly,
> >
> >   foo (double x, float y, float z)
> >   {
> >      return ( -fabs (x) + y + z ) ;
> >   }

Here you can lose on the order of just a single bit, so it is okay to do
all this in single precision with -ffast-math.

> >Note that the add/sub sequence is different for (3) and (4) since
> >-funsafe-math-optimizations is implicitly true.  "fp-contract=fast" in
> >(1) and (2) could avoid Inf as fmads could handle float overflow (verified
> >it on Power, not sure other targets support this), but without float
> >value-range info, it is unsafe to change computation from double to
> >float even for only add/sub expressions.

Yeah exactly.


Segher
