Robert Kern wrote:
> In the former case, you can claim that decimal floating point is more
> accurate *for those problems*. But as soon as you have a division
> operation, decimal floating point has the same accuracy problems as
> binary floating point.
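A quick sketch with the stdlib decimal module shows why: any fraction
whose denominator isn't a power of ten has to be rounded, so division
reintroduces the same kind of error binary floats have with 1/10.

    from decimal import Decimal, getcontext

    getcontext().prec = 28            # the default precision, for clarity
    third = Decimal(1) / Decimal(3)   # 1/3 is non-terminating in base 10
    print(third)                      # 0.3333333333333333333333333333
    print(third * 3)                  # 0.9999999999999999999999999999, not 1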
True. Poor choice of words on my part. No matter what representation one
chooses for numbers, we should remember that digits != precision. That's
why significant digits were drilled into our heads in physics! It's also
the reason IEEE binary floating point actually works out for most of the
things we need floating point for.
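To put a rough number on it: a C double carries about 15-17 significant
decimal digits, no matter how many digits you ask the formatter to print.

    x = 0.1
    print("%.30f" % x)        # 0.100000000000000005551115123126
    print(0.1 + 0.2 == 0.3)   # False: both sides round, just differently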