On Fri, 15 Apr 2016 12:05:18 +0100, John Pote wrote:
> On 15/04/2016 03:38, Christian Gollwitzer wrote:
>> On 15.04.16 at 02:36, Dennis Lee Bieber wrote:
>>>>> I should also have said that the square root of integer squares with
>>>>> between 15 and 30 decimal digits will only be correct if the square
>>>>> numbers themselves are exactly representable in 53 bits. So we can
>>>>> expect failures for squares with 16 or more digits.
>>>>
>>>> However, if a number with 31 or less digits is known to be the square
>>>> of an integer, the IEEE754 sqrt function will (I believe) give the
>>>> correct result.
>>>
>>> How could it EXCEPT by having ~15 significant digits and an
>>> exponent --
>>> since that is all the data that is provided by a double precision
>>> floating point. That is, for example,
>>>
>>>>>> 1000000000000000.0 * 1000000000000000.0
>>> 1e+30
>>>>>> import math
>>>>>> math.sqrt(1e30)
>>> 1000000000000000.0
>>>
>>> only has ONE significant digit -- even though it has thirty 0s before
>>> the decimal point.
>>
> As I was taught in school and university, the number of significant
> digits is the number of digits written after the point, be that
> decimal, binary or any other base.
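On the sqrt claims quoted above, a quick empirical probe (my own
sketch, not anything from the earlier posts; the digit ranges and the
10000-trial count are arbitrary choices) is to square random n-digit
integers, push the exact square through float() and math.sqrt(), and
count how often rounding the result back fails to recover n:

import math
import random

# For each root size, square random integers, convert the exact
# square to a double (which rounds once the square needs more than
# 53 bits), take math.sqrt, and check whether rounding recovers n.
for digits in (8, 15, 16, 17):
    failures = 0
    for _ in range(10000):
        n = random.randrange(10 ** (digits - 1), 10 ** digits)
        if round(math.sqrt(float(n * n))) != n:
            failures += 1
    print("%2d-digit roots: %5d failures in 10000" % (digits, failures))

Roots of 15 or fewer digits fit in 53 bits themselves, so no failures
are expected there; 17-digit roots do not even fit exactly in a
double, so failures should dominate.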
As for how significant digits were taught: then I would complain to
your education regulator. Significant digits are the first X digits in
the number (with rounding) after converting it to standard form.
Examples to 3 significant digits:

Number      Standard form   3 significant digits
10          1.0e1           1.00e1 = 10.0
100         1.0e2           1.00e2 = 100
3.14159     3.14159e0       3.14e0 = 3.14
0.01777     1.777e-2        1.78e-2 = 0.0178
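The same rounding can be checked directly in Python (a minimal sketch;
the values are just the ones from the table) using the 'g' format
code, which rounds via the same standard-form notation:

# Round each value to 3 significant digits with the 'g' format.
# Note that 'g' drops trailing zeros, so 10.0 prints as plain 10.
for x in (10, 100, 3.14159, 0.01777):
    print("%-10s -> %s" % (x, "%.3g" % x))

which prints 10, 100, 3.14 and 0.0178 respectively.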