https://gcc.gnu.org/bugzilla/show_bug.cgi?id=124288

--- Comment #7 from rguenther at suse dot de <rguenther at suse dot de> ---
On Mon, 2 Mar 2026, jakub at gcc dot gnu.org wrote:

> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=124288
> 
> --- Comment #6 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
> (In reply to Richard Biener from comment #2)
> > it seems the code computes fltmax in odd ways and we run into saturation,
> > possibly getting undefined float to unsigned int converts (overflow)?
> > Possibly the test should use __FLT_MAX__ and friends instead of that
> > weird computation.
> 
> __FLT_MAX__ is something completely different.
> The test attempts to find the largest representable floating-point value less
> than or equal to max (and the smallest representable floating-point value
> greater than or equal to min).
> The problem was that I was relying on
> (fltmax = vf2 - vf) == vf2
> where fltmax and vf2 are volatile vars, to actually store the subtraction and
> then read it back from the var, but that is not how we actually gimplify it;
> we gimplify it as
> float tmp = vf2 - vf; fltmax = tmp; tmp == vf2
> and in that form it has the undesirable excess-precision behavior.
> 
> So, I'd go with

I verified this works - care to push it?
