"Carsten Haese" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]
| On Tue, 2007-01-09 at 11:38 +0000, Nick Maclaren wrote:
| > As Dan Bishop says, probably not.  The introduction to the decimal
| > module makes exaggerated claims of accuracy, amounting to propaganda.
| > It is numerically no better than binary, and has some advantages
| > and some disadvantages.
|
| Please elaborate. Which exaggerated claims are made, and how is decimal
| no better than binary?
As to the latter question: calculating with decimals instead of binary floats eliminates the conversion errors introduced when one has *exact* decimal inputs, such as in financial calculations (which were the motivating use case for the decimal module). But it does not eliminate errors inherent in approximating reals with (a limited set of) rationals. Nor does it eliminate errors inherent in approximation algorithms (such as using a finite number of terms of an infinite series).

Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list
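[A small illustrative sketch of the two points above, added for this archive; it is not part of the original post. It contrasts the conversion error that binary floats introduce for exact decimal inputs with the approximation error that remains even in decimal arithmetic.]

```python
from decimal import Decimal, getcontext

# Binary floats cannot represent 0.1 exactly, so the conversion error
# surfaces as soon as exact decimal inputs are accumulated.
binary_sum = sum([0.1] * 10)              # not exactly 1.0
decimal_sum = sum([Decimal("0.1")] * 10)  # exactly Decimal('1.0')

print(binary_sum == 1.0)                  # False
print(decimal_sum == Decimal("1.0"))      # True

# But decimal arithmetic still approximates reals with a limited set
# of rationals: 1/3 is rounded to a finite number of decimal digits,
# just as a binary float rounds it to a finite number of bits.
getcontext().prec = 28
third = Decimal(1) / Decimal(3)
print(third * 3 == Decimal(1))            # False
```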