Paul Rubin wrote:
> "Diez B. Roggisch" <[EMAIL PROTECTED]> writes:
>
>>AFAIK some LISPs do a similar trick to carry int values on
>>cons-cells. And by this they reduce integer precision to 28 bit or
>>something. Surely _not_ going to pass a regression test suite :)
>
> Lisps often use just one tag bit, to distinguish between an immediate
> object and a heap object. With int/long unification, Python shouldn't
> be able to tell the difference between an immediate int and a heap int.
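For the curious, the tag-bit trick discussed above can be sketched roughly like this. This is a hypothetical illustration, not any particular Lisp's layout: it assumes a 32-bit machine word, 3 tag bits, and one sign bit, which is how you end up with only 28 payload bits for an "immediate" integer.

```python
# Hypothetical sketch of tagged immediate integers (assumed layout:
# 32-bit word, 3 low tag bits, 1 sign bit -> 28 payload bits).
TAG_BITS = 3
WORD_BITS = 32
PAYLOAD_BITS = WORD_BITS - TAG_BITS - 1   # 28
INT_TAG = 0b001                            # made-up tag marking "immediate int"

def box(n):
    """Pack a small int and its tag into one machine word."""
    if not -(2 ** PAYLOAD_BITS) <= n < 2 ** PAYLOAD_BITS:
        # Anything bigger would have to become a heap object instead.
        raise OverflowError("doesn't fit in an immediate integer")
    return ((n & (2 ** (PAYLOAD_BITS + 1) - 1)) << TAG_BITS) | INT_TAG

def unbox(word):
    """Recover the signed payload from a tagged word."""
    payload = word >> TAG_BITS
    if payload >= 2 ** PAYLOAD_BITS:       # sign-extend negative values
        payload -= 2 ** (PAYLOAD_BITS + 1)
    return payload

print(unbox(box(1000)))   # 1000
print(unbox(box(-7)))     # -7
# box(2 ** 30) raises OverflowError -- 2**30 needs more than 28 bits
```

The point of the scheme is that checking the tag is a cheap bit test, so small-integer arithmetic never touches the heap; the cost is the reduced precision the original posting complains about.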
That particular implementation used 3 or 4 tag bits. Of course you are right that nowadays Python won't notice the difference, as larger numbers get implicitly converted to a suitable representation. But then the efficiency goes away...

Basically I think that trying to come up with all sorts of optimizations for rather marginal problems (number crunching should be done -- if it is a Python domain at all -- using Numarray) simply distracts and complicates the code base. Speeding up dictionary lookups, OTOH, would have a tremendous impact (and if I'm not mistaken was one of the reasons for the 30% speed increase between 2.2 and 2.3).

DIEZ
-- 
http://mail.python.org/mailman/listinfo/python-list