https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111655
Bug ID: 111655
Summary: wrong code generated for __builtin_signbit on x86-64 -O2
Product: gcc
Version: 13.2.1
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: tree-optimization
Assignee: unassigned at gcc dot gnu.org
Reporter: eggert at cs dot ucla.edu
Target Milestone: ---
I ran into this bug when testing Gnulib code on Fedora 38 x86-64, which uses
gcc (GCC) 13.2.1 20230728 (Red Hat 13.2.1-1). The problem is a regression from
gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44), which does the right thing.
Here is a stripped-down version of the bug. Compile and run the following code
with "gcc -O2 t.i; ./a.out".
int
main ()
{
  double x = 0.0 / 0.0;
  if (!__builtin_signbit (x))
    x = -x;
  return !__builtin_signbit (x);
}
Although a.out's exit status should be 0, it is 1. If I compile without -O2 the
bug goes away.
Here's the key part of the generated assembly language:
main:
        pxor    %xmm0, %xmm0
        divsd   %xmm0, %xmm0
        xorpd   .LC1(%rip), %xmm0
        movmskpd        %xmm0, %eax
        testb   $1, %al
        sete    %al
        movzbl  %al, %eax
        ret
.LC1:
        .long   0
        .long   -2147483648
On the x86-64, the "divsd %xmm0, %xmm0" instruction that implements 0.0 / 0.0
generates a NaN with the sign bit set. I determined this by testing on a Xeon
W-1350, although I don't see where the NaN's sign bit is documented by Intel in
this situation.
It appears that GCC's optimization incorrectly assumes that 0.0 / 0.0 generates
a NaN with the sign bit clear, which causes the "if (!__builtin_signbit (x)) x
= -x;" to be compiled as if it were merely "x = -x;", which is obviously
incorrect.