>>>>> "Gabriel Genellina" <[EMAIL PROTECTED]> (GG) wrote:
>GG> On Tue, 04 Mar 2008 11:46:48 -0200, NickC <[EMAIL PROTECTED]> wrote:
>>> A mildly interesting Py3k experiment:
>>>
>>> Python 3.0a3+ (py3k:61229, Mar 4 2008, 21:38:15)
>>> [GCC 4.1.3 20070929 (prerelease) (Ubuntu 4.1.2-16ubuntu2)] on linux2
>>> Type "help", "copyright", "credits" or "license" for more information.
>>>>>> from fractions import Fraction
>>>>>> from decimal import Decimal
>>>>>> def check_accuracy(num_type, max_val=1000):
>>> ...     wrong = 0
>>> ...     for x in range(1, max_val):
>>> ...         for y in range(1, max_val):
>>> ...             wrong += (x / num_type(y)) * y != x
>>> ...     return wrong
>>> ...
>>>>>> check_accuracy(float)
>>> 101502
>>>>>> check_accuracy(Decimal)
>>> 310013
>>>>>> check_accuracy(Fraction)
>>> 0
>>>
>>>
>>> The conclusions I came to based on running that experiment are:
>>> - Decimal actually appears to suffer more rounding problems than float
>>>   for rational arithmetic

>GG> Mmm, but I doubt that counting how many times the results are equal is
>GG> the right way to evaluate "accuracy".
>GG> A stopped clock shows the right time twice a day; a clock that loses one
>GG> minute per day shows the right time once every two years. Clearly the
>GG> stopped clock is much better!

But even when the answer is incorrect (in the float calculation), the error
is bounded: IEEE 754 prescribes that each basic operation be correctly
rounded, so the error per operation is at most 1 LSB, IIRC. In that case the
number of errors is the proper measure.
--
Piet van Oostrum <[EMAIL PROTECTED]>
URL: http://pietvanoostrum.com [PGP 8DAE142BE17999C4]
Private email: [EMAIL PROTECTED]
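To put a number on "the error is bounded", here is a small sketch (not from
the original thread; it assumes Python 3.9+ for math.ulp, and the helper name
worst_float_error is made up for illustration). It runs the same round trip as
check_accuracy with float and, whenever the result misses, records how far off
it is in units of ulps (units in the last place) of the exact integer answer:

import math

def worst_float_error(max_val=1000):
    # For every (x, y) pair where (x / y) * y != x, record the error
    # in ulps of the exact integer answer x.
    worst = 0.0
    for x in range(1, max_val):
        for y in range(1, max_val):
            result = (x / y) * y          # x / y is float division in Python 3
            if result != x:
                worst = max(worst, abs(result - x) / math.ulp(float(x)))
    return worst

print(worst_float_error())   # at most about two ulps on an IEEE 754 double system

With two correctly rounded operations the result can be off by at most roughly
two ulps of x, so whenever float misses it misses by a tiny, bounded amount.
That is why counting mismatches is not comparing a stopped clock with a slow
one.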