Re: on floating-point numbers

2021-09-05 Thread Peter J. Holzer
On 2021-09-04 09:48:40 -0300, Hope Rouselle wrote:
> Christian Gollwitzer  writes:
> > On 02.09.21 at 15:51, Hope Rouselle wrote:
> >> ls = [7.23, 8.41, 6.15, 2.31, 7.73, 7.77]
> >> sum(ls)
> >> 39.594
> >> 
> >> ls = [8.41, 6.15, 2.31, 7.73, 7.77, 7.23]
> >> sum(ls)
> >> 39.61
> >> All I did was to take the first number, 7.23, and move it to the last
> >> position in the list.  (So we have a violation of the commutativity of
> >> addition.)
> >
> > I believe it is not commutativity, but associativity, that is
> > violated.

I agree.


> Shall we take this seriously?  (I will disagree, but that doesn't mean I
> am not grateful for your post.  Quite the contrary.)  In general it
> violates associativity too, but the example above couldn't be referring
> to associativity, because the second sum above could not be obtained
> from associativity alone.  Commutativity is required, applied to five
> pairs of numbers.  How can I go from
> 
>   7.23 + 8.41 + 6.15 + 2.31 + 7.73 + 7.77
> 
> to 
> 
>   8.41 + 6.15 + 2.31 + 7.73 + 7.77 + 7.23?

Simple:

>>> 7.23 + 8.41 + 6.15 + 2.31 + 7.73 + 7.77
39.594
>>> 7.23 + (8.41 + 6.15 + 2.31 + 7.73 + 7.77)
39.61

Due to commutativity, this is the same as

>>> (8.41 + 6.15 + 2.31 + 7.73 + 7.77) + 7.23
39.61

So commutativity is preserved but associativity is lost. (Of course a
single example doesn't prove that this is always the case, but it
follows from the guarantees of IEEE-754 arithmetic that it is.)
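This is easy to demonstrate, and math.fsum() shows the order-independent
alternative (a quick sketch; the exact digits printed depend on the
platform's float repr):

```python
import math

ls = [7.23, 8.41, 6.15, 2.31, 7.73, 7.77]
rotated = ls[1:] + ls[:1]   # move the first element to the end

# Left-to-right summation: the grouping differs between the two
# orderings, so the rounding errors differ, and the results need
# not be equal.
print(sum(ls), sum(rotated))

# math.fsum() tracks the exact sum internally and rounds only once,
# so it gives the same answer for any permutation of the list.
print(math.fsum(ls), math.fsum(rotated))
```

Both fsum() results are identical, and both plain sum() results differ
from the exact value 39.6 only in the last few bits.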

hp

-- 
   _  | Peter J. Holzer    | Story must make more sense than reality.
|_|_) |                    |
| |   | h...@hjp.at        |    -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |       challenge!"


signature.asc
Description: PGP signature
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: on floating-point numbers

2021-09-05 Thread Peter J. Holzer
On 2021-09-05 03:38:55 +1200, Greg Ewing wrote:
> If 7.23 were exactly representable, you would have got
> 723/1000.
> 
> Contrast this with something that *is* exactly representable:
> 
> >>> 7.875.as_integer_ratio()
> (63, 8)
> 
> and observe that 7875/1000 == 63/8:
> 
> >>> from fractions import Fraction
> >>> Fraction(7875,1000)
> Fraction(63, 8)
> 
> In general, to find out whether a decimal number is exactly
> representable in binary, represent it as a ratio of integers
> where the denominator is a power of 10, reduce that to lowest
> terms,

... and check if the denominator is a power of two. If it isn't (e.g.
1000 == 2**3 * 5**3) then the number is not exactly representable as a
binary floating point number.

More generally, if the prime factorization of the denominator only
contains prime factors which are also prime factors of your base, then
the number can be exactly represented (unless either the numerator or
the denominator gets too big). So, for base 10 (2*5), all numbers which
have only powers of 2 and 5 in the denominator (e.g. 1/10 == 1/(2*5),
1/8192 == 1/2**13, 1/1024000 == 1/(2**13 * 5**3)) can be represented
exactly, but those with other prime factors (e.g. 1/3, 1/7,
1/24576 == 1/(2**13 * 3), 1/1024001 == 1/(11 * 127 * 733)) cannot.
Similarly, for base 12 (2*2*3), numbers with only 2 and 3 in the
denominator can be represented, and for base 60 (2*2*3*5), numbers
with 2, 3 and 5.
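That test can be written down directly with the standard library (a
sketch; `exact_in_base` is a name made up here, not an existing API):

```python
from fractions import Fraction
from math import gcd

def exact_in_base(num, den, base):
    """True if num/den has a finite expansion in the given base, i.e.
    if, after reducing to lowest terms, every prime factor of the
    denominator also divides the base."""
    d = Fraction(num, den).denominator   # reduce to lowest terms
    g = gcd(d, base)
    while g > 1:                         # strip shared prime factors
        while d % g == 0:
            d //= g
        g = gcd(d, base)
    return d == 1                        # anything left can't divide base

print(exact_in_base(7875, 1000, 2))   # 7.875 == 63/8   -> True
print(exact_in_base(723, 100, 2))     # 7.23            -> False
print(exact_in_base(1, 3, 12))        # 1/3 in base 12  -> True
```

The last line matches the base-12 remark above: 3 divides 12, so 1/3 is
the finite "duodecimal" 0.4.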

hp



Re: on floating-point numbers

2021-09-05 Thread Peter J. Holzer
On 2021-09-04 10:01:23 -0400, Richard Damon wrote:
> On 9/4/21 9:40 AM, Hope Rouselle wrote:
> > Hm, I think I see what you're saying.  You're saying multiplication and
> > division in IEEE 754 is perfectly safe --- so long as the numbers you
> > start with are accurately representable in IEEE 754 and assuming no
> > overflow or underflow would occur.  (Addition and subtraction are not
> > safe.)
> > 
> 
> Addition and Subtraction are just as safe, as long as you stay within
> the precision limits.

That depends a lot on what you call "safe".

a * b / a will always be very close to b (unless there's an over- or
underflow), but a + b - a can be quite different from b.

In general when analyzing a numerical algorithm you have to pay a lot
more attention to addition and subtraction than to multiplication and
division.
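A two-line sketch of the difference (any sufficiently large `a` and
small `b` will do):

```python
a, b = 1e16, 1.0

# Multiplication and division keep the relative error tiny:
print(a * b / a)     # 1.0

# Addition loses b entirely: 1e16 + 1.0 rounds back to 1e16,
# because 1.0 is below the spacing (ulp) of doubles near 1e16.
print(a + b - a)     # 0.0
```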

hp



Re: on floating-point numbers

2021-09-05 Thread Richard Damon


> On Sep 5, 2021, at 6:22 PM, Peter J. Holzer  wrote:
> 
> On 2021-09-04 10:01:23 -0400, Richard Damon wrote:
>> On 9/4/21 9:40 AM, Hope Rouselle wrote:
>>> Hm, I think I see what you're saying.  You're saying multiplication and
>>> division in IEEE 754 is perfectly safe --- so long as the numbers you
>>> start with are accurately representable in IEEE 754 and assuming no
>>> overflow or underflow would occur.  (Addition and subtraction are not
>>> safe.)
>>> 
>> 
>> Addition and Subtraction are just as safe, as long as you stay within
>> the precision limits.
> 
> That depends a lot on what you call "safe".
> 
> a * b / a will always be very close to b (unless there's an over- or
> underflow), but a + b - a can be quite different from b.
> 
> In general when analyzing a numerical algorithm you have to pay a lot
> more attention to addition and subtraction than to multiplication and
> division.
> 
> hp
> 
> -- 
Yes, it depends on your definition of safe. If ‘close’ is good enough,
then multiplication is probably safer, as its problems show up only in
more extreme cases. If EXACT is the question, addition tends to be
better. To have any chance of exactness, the numbers need to be of
fairly low ‘precision’, which means avoiding arbitrary decimals. Past
that, as long as the numbers are of roughly the same magnitude, and are
the sort of numbers you are apt to just write down, you can add a lot
of them before enough bits accumulate to cause a problem. With
multiplication, every multiply roughly adds together the operands’
counts of significant bits, so you quickly run out, and a single divide
can end the process at once.

Remember, the question came up because the sum wasn’t associative
because of fractional bits. That points to thinking of exact
operations, and addition does better at that.
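The bit-counting argument can be illustrated with Fraction as an exact
reference (a sketch; the constant is chosen to make the rounding
visible):

```python
from fractions import Fraction

a = 1.0 + 2.0**-30    # exactly representable, 31 significant bits

# Adding numbers of similar magnitude stays exact: the result still
# fits in the 53-bit significand of a double.
print(a + a == Fraction(a) + Fraction(a))   # True

# Multiplying roughly adds the operands' significant-bit counts:
# (1 + 2**-30)**2 needs 61 bits, so the float product is rounded
# and no longer equals the exact rational product.
print(a * a == Fraction(a) * Fraction(a))   # False
```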