Chris Angelico <ros...@gmail.com>:

> On Wed, Jul 20, 2016 at 11:54 PM, Marko Rauhamaa <ma...@pacujo.net> wrote:
>> 2. Floating-point numbers are *imperfect approximations* of real
>>    numbers. Even when real numbers are derived exactly,
>>    floating-point operations may introduce "lossy compression
>>    artifacts" that have to be compensated for in application
>>    programs.
>
> This is the kind of black FUD that has to be fought off. What
> "compression artifacts" are introduced? The *only* lossiness in IEEE
> binary floating-point arithmetic is rounding.
You are joining me in spreading the FUD. Yes, the immediate lossiness is
rounding, but the effects of that rounding can accumulate into atrocious
errors in numeric calculations.

> Unless you are working with numbers that require more precision than
> you have available, the result should be perfectly accurate.

Whoa, hold it there! Catastrophic cancellation (<URL:
https://en.wikipedia.org/wiki/Loss_of_significance>) is not a myth:

   >>> 0.2 / (0.2 - 0.1)
   2.0
   >>> 0.2 / ((2e15 + 0.2) - (2e15 + 0.1))
   0.8

You can fall victim to the phenomenon when you collect statistics over a
long time. The cumulative sum of a measurement can grow very large, which
causes the naïve per-second rate calculation to become increasingly
bogus; a sketch follows below my signature.

Marko
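A minimal sketch of that pitfall, with made-up numbers: a 0.1-unit sample
is added once per second to a cumulative counter that has already grown to
2e15. Near 2e15 adjacent doubles are 0.25 apart, so each 0.1 increment is
rounded away entirely and the naïve rate recovered from the counter
collapses to zero:

   >>> total = 2e15                 # cumulative counter after a long uptime
   >>> for _ in range(10):          # ten seconds of 0.1-unit samples
   ...     total += 0.1
   ...
   >>> (total - 2e15) / 10          # naive per-second rate
   0.0

Start the same loop from a small counter, say 1000.0, and it yields
essentially the true 0.1; the arithmetic is identical, only the magnitude
of the running total differs.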