On 4 Jul, 16:47, "bart.c" <ba...@freeuk.com> wrote:
> I suspect also the Lua JIT compiler optimises some of the dynamicism out of
> the language (where it can see, for example, that something is always going
> to be a number, and Lua only has one numeric type with a fixed range), so
> that must be a big help.
Python could do the same: replace int and float with a "long double". It is 80 bits wide and has a 64-bit mantissa, so in theory it can do the job of all the floating-point types and of integers up to 64 bits (signed and unsigned). A long double can 'duck type' all the floating-point and integer types we use. There is really no need for more than one number type; for an interpreted language it's just a speed killer. Other number types belong in e.g. the ctypes, array, struct and NumPy modules.

Speed-wise, the long double (80 bit) is the native floating-point type on x86 FPUs. There is no memory penalty either: wrapping an int as a PyObject takes more space.

For a dynamic language it can be quite clever to have just one 'native' number type, observing that the mantissa of a floating-point number is an unsigned integer. That takes a lot of the dynamism out of the equation. Maybe you'd like to have integer and floating-point types in the 'language', but that does not mean there should be two types in the 'implementation' (i.e. internally in the VM). The implementation could duck type both with a sufficiently long floating-point type, and the user would not notice it in the syntax.

MATLAB does the same as Lua: native numbers are always double, and you have to create the other types explicitly. Previously they did not even exist. Scientists have been doing numerical maths with MATLAB for decades. MATLAB never prevented me from working with integers mathematically, even though I only worked with double. If I hadn't known, I would not have noticed.

    a = 1;         % a is a double
    a = 1 + 1;     % a is a double, and exactly 2
    a = int32(1);  % now a is a 32-bit integer

Sturla
--
http://mail.python.org/mailman/listinfo/python-list