On 15/04/2016 03:38, Christian Gollwitzer wrote:
Am 15.04.16 um 02:36 schrieb Dennis Lee Bieber:
I should also have said that the square root of integer squares with
between 15 and 30 decimal digits will only be correct if the square
numbers themselves are exactly representable in 53 bits.  So we can
expect failures for squares with 16 or more digits.

However, if a number with 31 or fewer digits is known to be the square of
an integer, the IEEE754 sqrt function will (I believe) give the correct
result.
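That claim can be spot-checked in Python (a sketch, assuming CPython's math.sqrt maps to a correctly rounded IEEE-754 square root, as it does on common platforms):

```python
import math
import random

# Spot-check: for a 15-digit integer root n, n*n has about 30 digits and
# is generally NOT exactly representable in 53 bits, yet math.sqrt still
# recovers n exactly.
for _ in range(10_000):
    n = random.randrange(10**14, 10**15)   # 15-digit root
    assert math.sqrt(n * n) == n           # exact recovery
print("sqrt recovered every 15-digit root exactly")
```

The comparison works because n itself is below 2**53 and therefore exactly representable as a float, even though n*n is not.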

How could it, EXCEPT by having ~15 significant digits and an exponent -- since that is all the data a double-precision floating point provides? That is, for example,

>>> 1000000000000000.0 * 1000000000000000.0
1e+30
>>> import math
>>> math.sqrt(1e30)
1000000000000000.0


only has ONE significant digit -- even though it has thirty 0s before the
decimal point.

As I was taught in school and university, the number of significant digits is the number of digits written after the point, be that decimal, binary or any other base. But away from the world of mathematics and strict engineering we often refer to the number of significant digits as the number of digits that have significance in the value of the quantity being represented, e.g. a temperature or a voltage. For example we might say that a 10-bit analogue-to-digital converter has, in binary, 10 significant digits and, in decimal, approximately 3.

What can be confusing here is that if readings from such a converter are written down as whole numbers with a decimal point
    1.0, 2.0 ..... 1023.0
then the readings are /shown/ with 1 significant digit.
We could equally write them
    1.00, 2.00 ..... 1023.00
and they are now shown to 2 significant digits. Of course these trailing 0s are of no practical 'significance', as the ADC cannot affect them; indeed it would not even output them. So context is everything. Writers and readers need to take care that the context is clear: are binary or decimal digits being referred to, and are we using 'significant' digits in the strict mathematical/engineering sense or in a more colloquial sense?
Then again we might write 1023.0 in an exponent form
0·1023 × 10^4    (0 point 1023 times 10 to the power 4)
in which case we would say the reading is shown to 4 significant digits
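Python's printf-style format specifiers make the same distinction visible (a small illustration; the ADC reading is hypothetical):

```python
x = 1023.0  # hypothetical ADC reading

print("%.1f" % x)   # 1023.0     -- shown with 1 digit after the point
print("%.2f" % x)   # 1023.00    -- shown with 2 digits after the point
print("%.3e" % x)   # 1.023e+03  -- exponent form, 4 significant digits
```

The value is identical each time; only the number of digits /shown/ changes, which is exactly the contextual trap described above.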

No, you need to count the significant digits in binary. 1e30 is not an even number in binary floating point.
How so? Whether an integer is odd or even has nothing to do with its representation: binary or decimal, floating point or integer. If you mean the mantissa is odd, that may well be true. But the exponent may also shift the mantissa left by one place or more, in which case the value the floating point number holds must be even (it is multiplied by 2^n where n >= 1).
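For 1e30 in particular, the stored significand is shifted well to the left, so the stored value is certainly even. A quick check, using math.frexp to pull the double apart:

```python
import math

v = 1e30
m, e = math.frexp(v)        # v == m * 2**e, with 0.5 <= m < 1
sig = int(m * 2**53)        # the 53-bit integer significand (exact: power-of-2 scaling)
shift = e - 53              # the power of two multiplying the significand
print(shift)                # 47, so v = sig * 2**47
assert v == sig * 2**shift
assert int(v) % 2 == 0      # the stored value is even, whatever the significand's parity
```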
My Python 2.7.9 does this
>>> "%50.10f"%1e30
'        1000000000000000019884624838656.0000000000'
>>>
This indicates that 1e30 is not exactly representable in Python's floating point representation.
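The same thing can be seen without string formatting, since Python can hold the double's exact value as an integer or a Decimal (a quick check):

```python
from decimal import Decimal

# 1e30 is the double NEAREST to 10**30, not 10**30 itself.
print(int(1e30))            # 1000000000000000019884624838656
print(int(1e30) == 10**30)  # False
print(Decimal(1e30))        # the exact stored value, with no rounding
```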

John
Apfelkiste:AsynCA chris$ bc
bc 1.06
Copyright 1991-1994, 1997, 1998, 2000 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
obase=2
10^30
11001001111100101100100111001101000001000110011101001110110111101010\
01000000000000000000000000000000
sqrt(10^30)
11100011010111111010100100110001101000000000000000

Still, the floating point arithmetic should round to the correct answer if both are converted to decimal.
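Indeed: the bc transcript shows 10^30 needs 100 bits, far beyond the 53-bit significand, yet the rounded double still yields the exact root. A quick check in Python:

```python
import math

print((10**30).bit_length())    # 100 bits, versus 53 in a double's significand
print(math.sqrt(1e30) == 1e15)  # True: the result rounds back to the exact root
```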

    Christian



--
https://mail.python.org/mailman/listinfo/python-list
