>> In [21]: a = 10.0
>
>> In [22]: b = 10.0 / 3.0
>
>> In [24]: divmod(a, b)[0]
>> Out[24]: 2.0
>
>> In [25]: math.floor(a / b) - 1.0
>> Out[25]: 2.0
>
> Wow. To me this stuff is just black magic, with a bit of voodoo
> added for good measure... Maybe some day I'll understand it.
I think this example is not too difficult to understand. I'll use integer
constants to denote exact real numbers and exact real operations, and the
decimal point to denote floating-point numbers.

IIUC, the source of the problem is that 10.0/3.0 > 10/3. The real number
10/3 is not exactly representable, so it has to be rounded up or down;
here the closest representable value is larger than the exact value.
Therefore (10.0/3.0)*3 > 10, so 10.0/3.0 doesn't fit three times into
10.0, but only two times; the quotient is therefore 2.0. The remainder is
really close to 10.0/3.0, though:

py> divmod(a,b)
(2.0, 3.333333333333333)
py> divmod(a,b)[1]-b
-4.4408920985006262e-16

So that explains why you get 2.0 as the quotient.

Now, if you do math.floor(a / b), we first need to look at a/b. Again,
10.0/(10.0/3.0) is not exactly representable; the exact result is just a
hair below 3. Funnily enough, the closest representable value is 3.0, so
the quotient gets rounded up again:

py> a/b
3.0

math.floor doesn't change that value, so it stays at 3.0; QED.

Regards,
Martin
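
P.S. If you want to see the exact stored values, the fractions module can
make the comparisons explicit, since Fraction(some_float) gives the exact
binary value of the float (on a recent Python, where Fraction() accepts a
float directly). A sketch, assuming a and b are still bound as above; the
results follow from the argument just given:

py> from fractions import Fraction
py> Fraction(b) > Fraction(10, 3)   # the stored b overshoots the real 10/3
True
py> 3 * Fraction(b) > 10            # so three copies of b exceed 10
True
py> Fraction(a) / Fraction(b) < 3   # the exact quotient is just under 3...
True
py> a / b                           # ...but the nearest float to it is 3.0
3.0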