On 9 May 2005 11:06:22 -0700, "Dan Bishop" <[EMAIL PROTECTED]> wrote:
>Skip Montanaro wrote:
>> I understand why the repr() of float("95.895") is
>> "95.894999999999996".
>> What I don't understand is why if I multiply the best approximation to
>> 95.895 that the machine has by 10000 I magically seem to get the lost
>> precision back.
>
>The floating-point representation of 95.895 is exactly
>6748010722917089 * 2**-46.
>
>> Why isn't the last result "958949.99999999996"? IOW, how'd I get
>> back the lost bits?
>
>You were just lucky.
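Dan's figures are easy to check from the interpreter (a quick sketch,
assuming the usual IEEE-754 binary64 doubles CPython uses on essentially
every platform; the constants are the ones from Dan's post):

    x = float("95.895")

    # The nearest double to 95.895 is exactly Dan's 6748010722917089 * 2**-46,
    # which sits about 4e-15 below 95.895 itself.
    print(x == 6748010722917089 * 2.0 ** -46)   # True

    # Python 2.3's repr() printed 17 significant digits, hence the
    # "95.894999999999996" Skip saw; modern Pythons print the shortest
    # form, 95.895, but the stored value is the same.
    print("%.17g" % x)                           # 95.894999999999996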
[Skip Montanaro]
Thanks for the response (and to Tim as well).

Dan> The floating-point representation of 95.895 is exactly
Dan> 6748010722917089 * 2**-46.

I seem to recall seeing some way to extract/calculate fp representation from
Python but can't find it now. I didn't see anything obvious in the
distribution.
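One stdlib route, possibly the one Skip was half-remembering, is
math.frexp(), which splits a float into a mantissa in [0.5, 1) and a
power-of-two exponent; scaling the mantissa up to the 53-bit integer
significand recovers Dan's exact form (a sketch, not necessarily what
Tim had suggested):

    import math

    x = 95.895
    m, e = math.frexp(x)            # x == m * 2**e, with 0.5 <= m < 1
    print(m, e)                     # roughly 0.7491796875, and 7

    # Scale the mantissa to the full 53-bit integer significand, so that
    # x == significand * 2**(e - 53).
    significand = int(m * 2 ** 53)
    print(significand, e - 53)      # 6748010722917089 -46

    # Python 2.6+ also exposes the same information via float.hex():
    print((95.895).hex())           # 0x1.7f947ae147ae1p+6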
Skip Montanaro wrote:
I understand why the repr() of float("95.895") is "95.894999999999996".
What I don't understand is why if I multiply the best approximation to
95.895 that the machine has by 10000 I magically seem to get the lost
precision back. To wit:

% python
Python 2.3.4 (#12, Jul 2 2004, 09:48:10)
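As for how the lost bits seem to come back: Dan's "you were just lucky"
can be made concrete. The exact product of the stored value and 10000
happens to land within half a machine spacing of 958950, so the float
multiply rounds to exactly 958950.0. A sketch using the decimal module
(Python 2.7 or later, where Decimal converts a float exactly):

    from decimal import Decimal

    x = 95.895                      # stored as 6748010722917089 * 2**-46

    print(Decimal(x))               # 95.89499999999999602096... (error visible)
    print(Decimal(x) * 10000)       # 958949.999999999960209606... (true
                                    # product, shown to the 28-digit context)

    # Doubles near 958950 are spaced 2**-33 apart (about 1.16e-10).  The
    # true product falls only about 4e-11 short of 958950, well within
    # half a spacing, so round-to-nearest lands on exactly 958950.0 and
    # the representation error appears to vanish.
    print(x * 10000 == 958950.0)    # True
    print(Decimal(x * 10000))       # 958950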