When compiling with -O or greater optimization, and x > INT_MAX, code like this assigns the wrong value to y (the top 32 bits are all 1s):

    uint64_t y = (uint64_t)round(x);

But this code assigns the right value to z:

    double dz = round(x);
    uint64_t z = dz;

It almost seems as if gcc -O in some cases compiles using a built-in declaration of round() that returns a 32-bit int.
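Consistent with that hypothesis, the wrong value printed for y below, 18446744071562067968, is 0xFFFFFFFF80000000: exactly the 32-bit pattern 0x80000000 sign-extended to 64 bits. A minimal sketch of that arithmetic (an illustration, not part of the reproducer; it assumes an out-of-range double-to-int conversion on i386 yields the pattern 0x80000000):

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        /* Assumption: round(2147483648.0) is treated as returning a
           32-bit int; 2^31 does not fit in int, and the i386
           conversion produces the pattern 0x80000000, i.e. INT32_MIN. */
        int32_t truncated = INT32_MIN;

        /* Converting the negative int to uint64_t sign-extends,
           reproducing the value observed for y in the report. */
        uint64_t y = (uint64_t)truncated;

        printf("y: %" PRIu64 "\n", y);  /* prints 18446744071562067968 */
        return 0;
    }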
(1) GCC VERSION

gavia% gcc -v
Using built-in specs.
Target: i386-redhat-linux
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --enable-shared --enable-threads=posix --enable-checking=release --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-libgcj-multifile --enable-languages=c,c++,objc,obj-c++,java,fortran,ada --enable-java-awt=gtk --disable-dssi --with-java-home=/usr/lib/jvm/java-1.4.2-gcj-1.4.2.0/jre --with-cpu=generic --host=i386-redhat-linux
Thread model: posix
gcc version 4.1.1 20060525 (Red Hat 4.1.1-1)

This is on Fedora Core 5. The same problem occurs with gcc 4.0.2 on FC4.

(2) COMMAND LINE, COMPILER OUTPUT, AND PROGRAM OUTPUT

gavia% gcc -Wall -std=c99 -O -save-temps -o x-opt x.c -lm
gavia% ./x-opt
x: 2147483648
y: 18446744071562067968
z: 2147483648

(3) C SOURCE

    #include <stdio.h>
    #include <math.h>
    #include <inttypes.h>

    int main(int c, char **v)
    {
        uint64_t x = 2147483648ULL;  /* INT_MAX+1 */
        printf("x: %llu\n", x);

        uint64_t y = (uint64_t)round(x);
        printf("y: %llu\n", y);

        double dz = round(x);
        uint64_t z = dz;
        printf("z: %llu\n", z);

        return 0;
    }

(4) PREPROCESSED FILE

Included as an attachment.

--
Summary: with -O, casting result of round(x) to uint64_t produces wrong values for x > INT_MAX
Product: gcc
Version: 4.1.1
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: c
AssignedTo: unassigned at gcc dot gnu dot org
ReportedBy: maxp at alum dot mit dot edu
GCC build triplet: i386-redhat-linux
GCC host triplet: i386-redhat-linux
GCC target triplet: i386-redhat-linux

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=28473
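A possible interim workaround sketch, assuming the problem is limited to the implicit narrowing of round()'s return value (the flag -fno-builtin-round may also sidestep a bad built-in, though that is untested here): route the result through an explicit double, as the z path above already does, or use llround(), which returns long long directly.

    #include <math.h>
    #include <stdio.h>
    #include <inttypes.h>

    int main(void)
    {
        uint64_t x = 2147483648ULL;

        /* Workaround 1: store the result in a double first, then
           convert -- the z path from the report, which is correct. */
        double d = round(x);
        uint64_t a = (uint64_t)d;

        /* Workaround 2 (assumption: llround() is not mishandled the
           same way): it returns long long, so no 32-bit narrowing. */
        uint64_t b = (uint64_t)llround(x);

        printf("a: %" PRIu64 "\nb: %" PRIu64 "\n", a, b);
        return 0;
    }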