On Mon, 6 Dec 2004 10:30:06 -0500, Tim Peters <[EMAIL PROTECTED]> wrote:

>[Bengt Richter]
>> Peculiar boundary cases:
>> 
>> >>> 2.0**31-1.0
>> 2147483647.0
>> >>> int(2147483647.0)
>> 2147483647L
>> >>> int(2147483647L )
>> 2147483647
>> >>>
>> >>> -2.0**31
>> -2147483648.0
>> >>> int(-2147483648.0)
>> -2147483648L
>> >>> int(-2147483648L )
>> -2147483648
>> 
>> some kind of off-by-one error?
>
>It would help if you were explicit about what you think "the error"
>is.  I see a correct result in all cases there.
>
>Is it just that sometimes
>
>    int(a_float)
>
>returns a Python long when
>
>    int(a_long_with_the_same_value_as_that_float)
>
>returns a Python int?  If so, that's not a bug -- there's no promise
>anywhere, e.g., that Python will return an int whenever it's
>physically possible to do so.
Ok, I understand the expediency of that policy, but what is now the meaning
of int, in that case? Is it now just a vestigial artifact on the way to
transparent unification of int and long to a single integer type?

Promises or not, ISTM that if int->float succeeds in preserving all significant
bits, then a following float->int should also succeed without converting to long.
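That round-trip intuition can be checked directly. A sketch in modern Python
(where int and long are unified, so only exactness is at issue), assuming a
C double with a 53-bit significand and a 64-bit build for the `big` case:

```python
# A double has 53 significand bits, so any integer of magnitude up to
# 2**53 round-trips through float exactly.
small = 2**31 - 1
assert float(small) == small         # all bits preserved
assert int(float(small)) == small    # float->int recovers the value

# 2**63 - 1 (LONG_MAX on typical 64-bit builds) does NOT fit in a double:
big = 2**63 - 1
print(float(big) == big)    # False -- float(big) rounds up to 2.0**63
print(int(float(big)))      # 9223372036854775808, i.e. big + 1
```

So the "preserves all significant bits" premise holds for 32-bit-sized ints
but already fails at the extremes of a 64-bit long.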

>
>Python used to return a (short) int in all cases above, but that led
>to problems on some oddball systems.  See the comments for float_int()
>in floatobject.c for more detail.  Slowing float_int() to avoid those
>problems while returning a short int whenever physically possible is a
>tradeoff I would oppose.

The Python 2.3.2 source snippet in floatobject.c:
--------------
static PyObject *
float_int(PyObject *v)
{
        double x = PyFloat_AsDouble(v);
        double wholepart;       /* integral portion of x, rounded toward 0 */

        (void)modf(x, &wholepart);
        /* Try to get out cheap if this fits in a Python int.  The attempt
         * to cast to long must be protected, as C doesn't define what
         * happens if the double is too big to fit in a long.  Some rare
         * systems raise an exception then (RISCOS was mentioned as one,
         * and someone using a non-default option on Sun also bumped into
         * that).  Note that checking for >= and <= LONG_{MIN,MAX} would
         * still be vulnerable:  if a long has more bits of precision than
         * a double, casting MIN/MAX to double may yield an approximation,
         * and if that's rounded up, then, e.g., wholepart=LONG_MAX+1 would
         * yield true from the C expression wholepart<=LONG_MAX, despite
         * that wholepart is actually greater than LONG_MAX.
         */
        if (LONG_MIN < wholepart && wholepart < LONG_MAX) {
                const long aslong = (long)wholepart;
                return PyInt_FromLong(aslong);
        }
        return PyLong_FromDouble(wholepart);
}
--------------
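The hazard that comment describes can be illustrated from Python, since
Python's int/float comparisons are exact while C's `wholepart <= LONG_MAX`
first converts LONG_MAX to double. A sketch, assuming a 64-bit C long
(LONG_MAX == 2**63 - 1):

```python
# In C, `wholepart <= LONG_MAX` promotes LONG_MAX to double, and that
# conversion rounds UP, since 2**63 - 1 has no exact double representation:
LONG_MAX = 2**63 - 1
as_double = float(LONG_MAX)        # rounds to 2.0**63, i.e. LONG_MAX + 1

wholepart = 2.0**63                # strictly greater than LONG_MAX
print(wholepart <= as_double)      # True  -- what the C test would see
print(int(wholepart) <= LONG_MAX)  # False -- the mathematically exact answer
```

That is why the code uses strict `<` against LONG_MIN/LONG_MAX: it gives up
a couple of boundary values to stay safe on every platform.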

But this is apparently accessed through a table of pointers, so would you oppose
an auto-configuration that tested once whether
int(float(sys.maxint))==sys.maxint and int(float(-sys.maxint-1))==-sys.maxint-1
(assuming that's sufficient, of which I'm not 100% sure ;-) and, if so, switched
the pointer to a version that tested
if (LONG_MIN <= wholepart && wholepart <= LONG_MAX)
instead of the safe-for-some-obscure-system version?
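The proposed probe is only a few lines. A sketch, with `maxint` passed in as a
stand-in for sys.maxint (the real check would live in C at interpreter
startup, not in Python):

```python
def int_float_roundtrip_is_exact(maxint):
    """Probe whether a double can hold the extreme C-long values exactly,
    as the proposed auto-configuration would test.
    `maxint` stands in for sys.maxint (C's LONG_MAX)."""
    minint = -maxint - 1
    return (int(float(maxint)) == maxint and
            int(float(minint)) == minint)

print(int_float_roundtrip_is_exact(2**31 - 1))  # True: 32-bit longs fit a double
print(int_float_roundtrip_is_exact(2**63 - 1))  # False: 2**63-1 rounds up
```

Whether passing the probe is sufficient on every exotic platform is exactly
the open question raised above.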

Of course, if int isn't all that meaningful any more, I guess the problem can
be moved to the ctypes module, if that gets included amongst the batteries ;-)

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list