On 04 Dec 08, at 00:08, Bridger Maxwell wrote:
Short version of my question: I believe I am having rounding errors because I am working with really, really small values. Would it help if I multiplied these values by a scalar (say, 1,000), did math with them, and then divided them by the scalar? I remember learning how IEEE floating point numbers are stored, but I can't remember enough about it to know if this would have any effect on precision. If not, what is a good way to get better precision? I am already using doubles instead of floats.

No. Scaling can't add precision: a double carries the same 53 significand bits at every magnitude, so the relative error of your values is unchanged. If anything, multiplying and later dividing by a constant that isn't a power of two adds two extra rounding steps, which will reduce your precision slightly.
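
Here's a minimal C sketch (my own illustration, not anything from your code) of why the scaling trick doesn't help. The intended difference is exactly 1e-15; the direct subtraction is already off because 1 + 1e-15 can't be stored exactly, and the scaled version comes out a little worse, not better:

    #include <stdio.h>

    int main(void) {
        double x = 1.0 + 1e-15;   /* tiny quantity riding on a large one */

        double direct = x - 1.0;
        double scaled = (x * 1000.0 - 1000.0) / 1000.0;

        /* direct: ~1.1102230246251565e-15
           scaled: ~1.1368683772161603e-15
           Both inherit the error of rounding 1 + 1e-15 to the nearest
           double; the scaled path adds further rounding in the
           multiply and divide on top of that. */
        printf("direct: %.17g\n", direct);
        printf("scaled: %.17g\n", scaled);
        return 0;
    }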

If you're having precision issues with doubles, you are probably manipulating your intermediate values in a way that destroys precision ("loss of significance"). A common culprit is subtracting two nearly equal values: the leading digits cancel, leaving only the few trailing digits that differ, plus whatever rounding error they already carried. Without knowing what your math looks like, it's hard to guess what might be at fault, but rearranging your floating-point expressions to avoid this sort of thing may improve your results.
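
The classic fix is algebraic rearrangement. For instance (again my own sketch, not your code), sqrt(x + 1) - sqrt(x) for large x subtracts two nearly equal values, but multiplying by the conjugate turns the subtraction into an addition, which loses nothing:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double x = 1e12;

        /* Naive: both square roots are ~1e6, so their leading digits
           cancel and only about 6 correct digits survive
           (~5.000038e-07 instead of ~5e-07). */
        double naive = sqrt(x + 1.0) - sqrt(x);

        /* Rearranged via the conjugate: 1/(sqrt(x+1) + sqrt(x)) is
           mathematically identical, but it adds instead of
           subtracting, so it keeps essentially full double precision
           (~4.9999999999999988e-07). */
        double stable = 1.0 / (sqrt(x + 1.0) + sqrt(x));

        printf("naive:  %.17g\n", naive);
        printf("stable: %.17g\n", stable);
        return 0;
    }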

Goldberg's "What Every Computer Scientist Should Know About Floating-Point Arithmetic" is a worthwhile read, and covers this issue (as well as many other pitfalls) in great detail:

http://www.engr.pitt.edu/hunsaker/3097/floatingpoint.pdf