On 2018-08-28 15:11, Frank Millman wrote:
Hi all

I know about this gotcha -

>>> x = 1.1 + 2.2
>>> x
3.3000000000000003

According to the docs, the reason is that "numbers like 1.1 and 2.2 do not
have exact representations in binary floating point."

So when I do this -

>>> y = 3.3
>>> y
3.3

what exactly is happening? What is 'y' at this point?

Or if I do this -

>>> z = (1.1 + 2.2) * 10 / 10
>>> z
3.3

What makes it different from the first example?

At this point y is simply the 64-bit binary double closest to 3.3, and repr() prints the shortest decimal string that reads back as that same double, which happens to be "3.3". Beyond that, there's a bit of rounding going on in the last few digits of each arithmetic result. For example:

>>> 1.1
1.1
>>> _ - 1
0.10000000000000009
>>> 0.1
0.1
>>> _ * 10 - 1
0.0
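To make that concrete, here is a small sketch (standard library only) that exposes the exact doubles being stored. Decimal(float) converts the stored binary value exactly, so it shows what hides behind the short repr():

```python
from decimal import Decimal

# Decimal(float) converts the stored binary double exactly,
# revealing the value behind the short repr().
print(Decimal(3.3))        # slightly less than 3.3
print(Decimal(1.1 + 2.2))  # slightly more than 3.3 -- a different double

# repr() shows the shortest decimal string that round-trips to the
# same double, so y = 3.3 displays as 3.3 even though it is inexact.
print(3.3 == 1.1 + 2.2)              # False: two different doubles
print((1.1 + 2.2) * 10)              # the multiply rounds to exactly 33.0
print((1.1 + 2.2) * 10 / 10 == 3.3)  # True: 33.0 / 10 rounds to the
                                     # double nearest 3.3 again
```

That last pair is why z prints as 3.3: multiplying by 10 rounds the intermediate result to exactly 33.0, and dividing by 10 then lands on the same double that the literal 3.3 produces.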