On 2/17/23 5:27 AM, Stephen Tucker wrote:
> Thanks, one and all, for your responses.

> This is a hugely controversial claim, I know, but I would consider this
> behaviour to be a serious deficiency in the IEEE standard.
>
> Consider an integer N consisting of a finitely-long string of digits in
> base 10.
>
> Consider the infinitely-precise cube root of N (yes I know that it could
> never be computed unless N is the cube of an integer, but this is a
> mathematical argument, not a computational one), also in base 10. Let's
> call it RootN.
>
> Now consider appending three zeroes to the right-hand end of N (let's call
> it NZZZ) and NZZZ's infinitely-precise cube root (RootNZZZ).

The key factor here is that IEEE binary floating point stores numbers in BINARY, not DECIMAL, so multiplying by 1000 changes the representation of the number, and thus the possible rounding errors.
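A quick sketch of the effect in Python (the value of N below is just an arbitrary example; the exact digits printed depend on the platform's binary64 arithmetic):

    # Cube roots via binary floating point. Multiplying N by 1000 changes
    # the binary significand, so the two results need not agree
    # digit-for-digit.
    N = 123456789
    root_n = N ** (1.0 / 3.0)
    root_nzzz = (N * 1000) ** (1.0 / 3.0)

    print(root_n)
    print(root_nzzz)
    # Mathematically root_nzzz == 10 * root_n, but in binary floating
    # point the comparison below may well print False.
    print(root_nzzz == 10 * root_n)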

Store your numbers in IEEE DECIMAL floating point, and the variations from multiplying by powers of 10 go away.
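Python's decimal module implements that kind of decimal floating point, so a rough illustration (the chosen value and precision are arbitrary) might be:

    from decimal import Decimal, getcontext

    getcontext().prec = 28

    n = Decimal("1.23456789")
    shifted = n.scaleb(3)          # multiply by 10**3, exactly

    # In decimal floating point the coefficient digits are unchanged;
    # only the exponent moves, so the scaling introduces no rounding.
    print(n.as_tuple())
    print(shifted.as_tuple())

Because the scaling itself is exact, it cannot by itself introduce the kind of last-digit differences discussed above.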


> The *only* difference between RootN and RootNZZZ is that the decimal point
> in RootNZZZ is one place further to the right than the decimal point in
> RootN.

No. Since the floating point number is stored as a fraction times a power of 2, multiplying by 1000 changes the fraction as well as the power of 2.
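That is easy to see by inspecting the stored representation; a small sketch (the sample value is arbitrary):

    import math

    x = 1.23456789
    y = x * 1000

    # frexp splits a float into fraction * 2**exponent. Both parts differ
    # between x and y; the fraction is not simply carried over.
    print(math.frexp(x))
    print(math.frexp(y))

    # The hex form shows the binary significands directly.
    print(x.hex())
    print(y.hex())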


> None of the digits in RootNZZZ's string should be different from the
> corresponding digits in RootN.

Only if the storage format was DECIMAL.


> I rest my case.

> Perhaps this observation should be brought to the attention of the IEEE. I
> would like to know their response to it.

That is why they developed the Decimal floating-point format: to handle those sorts of problems.

Such problems just aren't common enough for many implementations to have adopted it.


> Stephen Tucker.

--
Richard Damon
