Peter Eisentraut <[EMAIL PROTECTED]> writes:
> If the test doesn't use any library function's run-time behavior, you can
> usually do something like

> int main(void) {
>     int a[(2.0 + 2.0 == 4.0) ? 1 : -1];
>     return 0;
> }

> This will fail to compile if the floating-point arithmetic is broken.

However, unless gcc itself is compiled with -ffast-math, such an
approach won't show up the bug: the constant expression is folded by
gcc's own code at compile time, so the result depends on how gcc was
built, not on the code it generates for the test program.

I had success with this test:

#include <stdio.h>

/* global variable, so the compiler cannot fold the division at compile time */
double d18000 = 18000.0;

int main(void) {
  int d = d18000 / 3600;   /* exact quotient is 5.0; truncation should give 5 */
  printf("18000.0 / 3600 = %d\n", d);
  return 0;
}

Using Red Hat 7.2's compiler:

[tgl@rh1 tgl]$ gcc -v
Reading specs from /usr/lib/gcc-lib/i386-redhat-linux/2.96/specs
gcc version 2.96 20000731 (Red Hat Linux 7.1 2.96-98)

I get:

[tgl@rh1 tgl]$ gcc bug.c
[tgl@rh1 tgl]$ ./a.out
18000.0 / 3600 = 5                      -- right
[tgl@rh1 tgl]$ gcc -ffast-math bug.c
[tgl@rh1 tgl]$ ./a.out
18000.0 / 3600 = 4                      -- wrong!

You need the dummy global variable to keep the compiler from simplifying
the division at compile time; if both operands are compile-time
constants, the division is folded during compilation and you always get
5, so the bug never shows.  With the test as exhibited, the -O level
seems not to matter.
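
For comparison, here is a minimal sketch (my illustration, not part of
the original test) of the folded variant: with a literal constant the
compiler performs the division itself, so this version prints 5 whether
or not -ffast-math is given.

#include <stdio.h>

int main(void) {
  /* 18000.0 / 3600 is a constant expression; the compiler folds it at
     build time, so the generated code never performs the division and
     the answer comes out 5 regardless of -ffast-math */
  int d = 18000.0 / 3600;
  printf("18000.0 / 3600 = %d\n", d);
  return 0;
}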

                        regards, tom lane
