Hello A.Z

As far as I know, the difference in output that you observed between Java and C/C++ is not caused by a difference in floating-point computation, but by a difference in the way numbers are rounded when they are printed. Java prints enough significant digits for the decimal string to uniquely identify the double value, while C's "printf" (with the default "%f" conversion) rounds to six digits after the decimal point before printing. In my opinion, the latter is more dangerous because it hides what really happens in the floating-point computation.
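To illustrate (a minimal Java sketch; the class name PrintDemo is mine, not from the original thread), the very same double value can look exact or inexact depending only on how it is formatted:

    public class PrintDemo {
        public static void main(String[] args) {
            double p = 0.1 * 0.1; // "10% of 10%"

            // Java's default conversion prints the shortest decimal string
            // that still maps back to the same double value:
            System.out.println(p);          // 0.010000000000000002

            // A C-style "%f" conversion rounds to 6 digits after the
            // decimal point before printing, which hides the error:
            System.out.printf("%f%n", p);   // 0.010000
        }
    }

Both lines print the same bits; only the rounding policy of the formatter differs.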

The following statement is not entirely true when using finite floating point precision:

(…snip…) It is a mathematical fact that, for
consistent, necessary and even fast term, 10% of 10% must
always precisely be 1%, and by no means anything else.

The statement above can be true only in base 10 (or certain other bases). We could also say "It is a mathematical fact that 1/3 of 1/3 must always precisely be 1/9 and nothing else", yet that cannot be represented fully accurately in base 10; it can be represented fully accurately in base 3, however. There will always be examples that work in one base and not in another, and natural laws have no preference for base 10. I understand that base 10 is special for financial applications, but for many other applications (scientific, engineering, rendering…) base 2 is as good as any other base. I would even argue that base 10 can be dangerous because it gives a false sense of accuracy: it creates the illusion that rounding errors do not happen when testing with a few sample values in base 10 (like 10% of 10%), while in reality rounding errors continue to exist in the general case.
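To make this concrete, here is a small Java sketch (the class name BaseDemo is mine; BigDecimal's constructor that takes a double exposes the exact binary value actually stored in the double):

    import java.math.BigDecimal;

    public class BaseDemo {
        public static void main(String[] args) {
            // Neither 0.1 nor 0.01 has a finite base-2 representation,
            // so "10% of 10%" picks up a rounding error:
            System.out.println(0.1 * 0.1 == 0.01);       // false

            // The exact binary values actually stored:
            System.out.println(new BigDecimal(0.1));     // 0.1000000000000000055511151231257827...
            System.out.println(new BigDecimal(0.1 * 0.1));
            System.out.println(new BigDecimal(0.01));

            // Base 10 has the same kind of limit in other places:
            // 1/3 has no finite base-10 representation at all.
        }
    }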

    Martin
