Oh, and a further thought...
On Sat, 16 Jul 2016 04:53 pm, Random832 wrote:

> Eliminate both of them. Move to a single abstract numeric type* a la
> Scheme, with an "inexact" attribute (inexact numbers may or may not be
> represented by a float, or by the same bigint/decimal/rational types as
> exact ones with a flag set to mark them as inexact.)

But that's *wrong*. Numbers are never inexact. (You can have interval
arithmetic using "fuzzy numbers", but they're ALWAYS inexact.) It is
calculations which are exact or inexact, not numbers. There's no a priori
reason to expect that 0.499999 is "inexact" while 0.5 is "exact"; you need
to know the calculation that generated it:

py> from decimal import *
py> getcontext().prec = 6
py> Decimal("9.000002")/6  # inexact
Decimal('1.50000')
py> Decimal(1)/2 - Decimal('1e-6')  # exact
Decimal('0.499999')

It seems to me that unless you're prepared to actually do some sort of
error tracking, just having an "inexact/exact" flag is pretty useless. If I
perform a series of calculations, and get 1.0 as the result, and the
inexact flag is set, that doesn't mean that 1.0 is the wrong answer -- it
may be that the errors have cancelled and 1.0 is the exact answer. Or it
may be that the error bound is 1.0 ± 10000, and the calculation has
diverged so far from the correct result that it is useless.

-- 
Steven
“Cheer up,” they said, “things could be worse.” So I cheered up, and sure
enough, things got worse.

-- 
https://mail.python.org/mailman/listinfo/python-list
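
For what it's worth, the decimal module already records exactness per
*calculation* rather than per number: each context carries a sticky Inexact
flag that is raised whenever an operation has to round. Here is a minimal
sketch of the same example with the flag made explicit (standard library
only; the variable names are just for illustration):

from decimal import Decimal, getcontext, Inexact

ctx = getcontext()
ctx.prec = 6

ctx.clear_flags()
r1 = Decimal("9.000002") / 6            # has to round -> inexact operation
print(r1, bool(ctx.flags[Inexact]))     # 1.50000 True

ctx.clear_flags()
r2 = Decimal(1) / 2 - Decimal("1e-6")   # fits in 6 digits -> exact operation
print(r2, bool(ctx.flags[Inexact]))     # 0.499999 False

# The flag is sticky per context, not attached to the result: once any step
# rounds, it stays set even if a later step cancels the error exactly.
ctx.clear_flags()
x = Decimal(2).sqrt()                   # 1.41421, inexact
print(x - x, bool(ctx.flags[Inexact]))  # 0E-5 True (result is exactly zero)

Which is also why the flag on its own tells you so little: the last snippet
reports "inexact" even though the final result is exactly zero.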