On Feb 9, 2009, at 6:30 PM, Michael Ash wrote:

Not really sure what you mean by this. It's true that a constant such
as 11.2f is a less precise representation of 11.2 than the non-float
version. But aside from actually defining the constants (and note that
all exact integers under 2^23 and many common fractional values will
be *exactly* represented and thus suffer no precision loss) there's no
trouble to be had.


Actually, there is trouble, for small values of "trouble". Try compiling and running this code (or stepping through it in GDB):

#include <stdio.h>

int main(int argc, char **argv)
{
        double bigDouble = 20.0356412345678901234567890123456789012345678901234567890;
        double test1 = bigDouble / 2.345678;    /* double constant */
        double test2 = bigDouble / 2.345678f;   /* float constant */
        double test3 = bigDouble / 2.345678;    /* double constant again */

        printf("test1 = %.16f\ntest2 = %.16f\ntest3 = %.16f\n", test1, test2, test3);
        return 0;
}

When I try this, test1 comes out to about 8.5415138968638882, whereas test2 comes out to 8.541513565318839. And just to eliminate one other possibility, test3 is identical to test1. Apparently test2 lost precision by being divided by a float constant. I was unable to Google an authoritative explanation, but the likely cause is that 2.345678f is rounded to the nearest float before the usual arithmetic conversions promote it to double for the division, so the divisor itself already carries only about seven significant decimal digits.

I discovered this a while back while working on a brand-new 32/64-bit hybrid project (since scrapped) that did a lot of floating-point math. Originally I did all the math using float constants, then abandoned that, along with -fsingle-precision-constant, in favor of double constants when I realized some precision was being lost...

Nick Zitzmann
<http://www.chronosnet.com/>

_______________________________________________

Cocoa-dev mailing list (Cocoa-dev@lists.apple.com)
