On 2009-06-25 13:41, Michael Torrie wrote:

If you want accurate math, check out other types such as those in the
decimal module:

import decimal
a = decimal.Decimal('3.2')
print(a * 3)
9.6

I wish people would stop representing decimal floating point arithmetic as "more accurate" than binary floating point arithmetic. It isn't. Decimal floating point arithmetic does have an extremely useful niche: where the inputs have finite decimal representations and either the only operations are addition, subtraction and multiplication (e.g. many accounting problems) OR there are conventional rounding modes to follow (e.g. most of the other accounting problems).
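To make that niche concrete, here is a minimal sketch (Python 3, default decimal context) of the case where decimal really is exact: finite decimal inputs combined with addition.

```python
from decimal import Decimal

# Binary floats cannot represent 0.1 or 0.2 exactly, so the sum drifts:
print(0.1 + 0.2)                           # 0.30000000000000004

# Decimal carries finite decimal inputs exactly through +, -, and *:
print(Decimal('0.10') + Decimal('0.20'))   # 0.30
```

The first result is the familiar binary rounding artifact; the second is exact because both operands and their sum have finite decimal representations.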

In the former case, you can claim that decimal floating point is more accurate *for those problems*. But as soon as you have a division operation, decimal floating point has the same accuracy problems as binary floating point.
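A quick illustration of that point (Python 3, default decimal context of 28 significant digits): 1/3 has no finite decimal expansion, so decimal must round it just as binary floating point must.

```python
from decimal import Decimal

# Division forces rounding in decimal arithmetic too: the quotient is
# truncated to the context precision (28 significant digits by default).
third = Decimal(1) / Decimal(3)
print(third)        # 0.3333333333333333333333333333
print(third * 3)    # 0.9999999999999999999999999999, not 1
```

The same rounding error that people complain about in binary floats shows up here; only the base has changed.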

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco

