https://gcc.gnu.org/bugzilla/show_bug.cgi?id=79036

Martin Sebor <msebor at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|UNCONFIRMED                 |NEW
   Last reconfirmed|                            |2017-01-09
          Component|tree-optimization           |testsuite
           Assignee|unassigned at gcc dot gnu.org      |msebor at gcc dot gnu.org
     Ever confirmed|0                           |1

--- Comment #1 from Martin Sebor <msebor at gcc dot gnu.org> ---
I can confirm the test failure on powerpc64 but not on x86_64.

The failure is in a test case that tries to verify that formatting the long
double value 0.1L results in between 3 and 8 bytes.

The gcc-ssa-sprintf pass determines the range of bytes by formatting GCC's
internal representation of 0.1L with mpfr_snprintf twice, once with rounding
down and once with rounding up, and using the two results as the bounds of
the range.

Due to the difference in precision between different targets, GCC's internal
representation of 0.1L is 1.000000...1e-1 on x86_64 but 9.999999...6e-2 on
powerpc64, which formats as "0.1" and "0.100001" on x86_64 (rounded down or up,
respectively), but as "0.0999999" and "0.1" on powerpc64 (again, rounded down
or up, respectively).  This is also how Glibc formats the values.
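The effect of the two rounding directions on the formatted length can be
modeled without MPFR. The sketch below uses Python's decimal module to mimic
"%g"-style formatting with 6 significant digits under directed rounding; the
powerpc64 stand-in value is hypothetical (the real double-double
representation has many more digits), chosen only to sit just below 0.1 the
way the powerpc64 value does. It shows why the lower-precision x86_64 value
brackets to 3..8 bytes while the powerpc64 value's rounded-down form,
"0.0999999", is 9 bytes, exceeding the test's assumed upper bound.

```python
from decimal import Decimal, ROUND_FLOOR, ROUND_CEILING

def fmt_g6(value, rounding, quantum):
    """Mimic "%g" with 6 significant digits under a directed rounding
    mode: round to the given quantum, then strip trailing zeros and any
    dangling decimal point, as "%g" does."""
    s = str(value.quantize(Decimal(quantum), rounding=rounding))
    return s.rstrip("0").rstrip(".")

# x86_64-like case: the nearest binary value lies just above 0.1.
# Decimal(0.1) is the exact value of the IEEE binary64 double nearest
# 0.1; the 80-bit long double differs in magnitude but not direction.
x86 = Decimal(0.1)
lo = fmt_g6(x86, ROUND_FLOOR, "0.000001")    # "0.1"      -> 3 bytes
hi = fmt_g6(x86, ROUND_CEILING, "0.000001")  # "0.100001" -> 8 bytes
print(lo, hi, len(lo), len(hi))

# powerpc64-like case: a hypothetical stand-in for the double-double
# 0.1L, which lies just below 0.1.
ppc = Decimal("0.09999999999999999996")
lo = fmt_g6(ppc, ROUND_FLOOR, "0.0000001")    # "0.0999999" -> 9 bytes
hi = fmt_g6(ppc, ROUND_CEILING, "0.0000001")  # "0.1"       -> 3 bytes
print(lo, hi, len(lo), len(hi))
```

The rounded-down powerpc64 result is one byte longer than the test's
expected maximum of 8, which is exactly the mismatch the test trips over.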

So the bug is in the test suite: it assumes the x86_64 format with its lower
precision and is not prepared to handle the powerpc64 format with its greater
precision.  (I'm also not sure whether this qualifies as a regression, but
since I have a simple testsuite-only fix, that probably doesn't matter.)
