On 02/09/2012 03:55 PM, James Courtier-Dutton wrote:

> Results for x86_64
> gcc -g -O0 -c -o sincos1.o sincos1.c
> gcc -static -g -o sincos1 sincos1.o -lm
> 
> ./sincos1
> sin = -8.52200849767188795e-01    (uses xmm register instructions)
> sinl = 0.46261304076460176          (uses fprem and fsin)
> sincos = 4.62613040764601746e-01 (uses fprem and fsin)
> sincosl = 0.46261304076460176       (uses fprem and fsin)
> 
> Only sin() gets an accurate answer.
> 
> Results when compiled for 32bit x86.
> gcc -m32 -g -O0 -c -o sincos1.o sincos1.c
> gcc -m32 -static -g -o sincos1 sincos1.o -lm
> 
> ./sincos1
> sin = 4.62613040764601746e-01
> sinl = 0.46261304076460176
> sincos = 4.62613040764601746e-01
> sincosl = 0.46261304076460176
> 
> Which are all inaccurate.
> 
> So, we have a case where the same source compiled for 32-bit can give
> different floating point results than when it is compiled for 64-bit.
> 
> From what I can tell, the xmm register instructions on x86_64 are
> using 128bit precision which probably explains why the result is more
> accurate.

That's not the reason.  The reason is that x86_64 is using the
IBM Accurate Mathematical Library.  Have a look.  It's in s_sin.c.

Andrew.
