Tim Peters <t...@python.org> added the comment:

Not that it matters: "ulp" is a measure of absolute error, but the
script is computing some notion of relative error and _calling_ that
"ulp". It can understate the true ulp error by up to a factor of 2
(the "wobble" of base 2 fp).

Staying away from denorms, this is an easy way to get one ulp with
respect to a specific 754 double:

    def ulp(x):
        import math
        mant, exp = math.frexp(x)
        return math.ldexp(0.5, exp - 52)

Then, e.g.,

    >>> x
    1.9999999999999991
    >>> y
    1.9999999999999996
    >>> y - x
    4.440892098500626e-16
    >>> oneulp = ulp(x)
    >>> oneulp  # the same as sys.float_info.epsilon for this x
    2.220446049250313e-16
    >>> (y - x) / oneulp
    2.0

which is the true absolute error of y wrt x.

    >>> x + 2 * oneulp == y
    True

But:

    >>> (y - x) / x
    2.220446049250314e-16
    >>> _ / oneulp
    1.0000000000000004

understates the true ulp error by nearly a factor of 2, while the
mathematically (but not numerically) equivalent spelling used in the
script:

    >>> (y/x - 1.0) / oneulp
    1.0

understates it by exactly a factor of 2.

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue34376>
_______________________________________
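For reference, the interactive session above can be condensed into a
self-contained script (a sketch, using the same `ulp` helper and the same
`x` and `y` values shown in the session):

```python
import math

def ulp(x):
    # One ulp of a normal 754 double: frexp gives x = mant * 2**exp
    # with 0.5 <= mant < 1, so the spacing of doubles near x is
    # 2**(exp - 53) == ldexp(0.5, exp - 52).
    mant, exp = math.frexp(x)
    return math.ldexp(0.5, exp - 52)

x = 1.9999999999999991
y = 1.9999999999999996
oneulp = ulp(x)

# The true absolute error of y wrt x is exactly 2 ulps.
assert (y - x) / oneulp == 2.0
assert x + 2 * oneulp == y

# The relative-error spelling used in the script reports only 1 ulp,
# understating the true error by a factor of 2.
assert (y / x - 1.0) / oneulp == 1.0
```

On Python 3.9 and later the stdlib provides `math.ulp()`, which agrees with
this helper for normal doubles (and also handles denorms correctly).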
Not that it matters: "ulp" is a measure of absolute error, but the script is computing some notion of relative error and _calling_ that "ulp". It can understate the true ulp error by up to a factor of 2 (the "wobble" of base 2 fp). Staying away from denorms, this is an easy way to get one ulp with respect to a specific 754 double: def ulp(x): import math mant, exp = math.frexp(x) return math.ldexp(0.5, exp - 52) Then, e.g., >>> x 1.9999999999999991 >>> y 1.9999999999999996 >>> y - x 4.440892098500626e-16 >>> oneulp = ulp(x) >>> oneulp # the same as sys.float_info.epsilon for this x 2.220446049250313e-16 >>> (y - x) / oneulp 2.0 which is the true absolute error of y wrt x. >>> x + 2 * oneulp == y True But: >>> (y - x) / x 2.220446049250314e-16 >>> _ / oneulp 1.0000000000000004 understates the true ulp error by nearly a factor of 2, while the mathematically (but not numerically) equivalent spelling used in the script: >>> (y/x - 1.0) / oneulp 1.0 understates it by exactly a factor of 2. ---------- _______________________________________ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue34376> _______________________________________ _______________________________________________ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com