Carl Friedrich Bolz wrote:
> Rumors have it that the secret goal is being faster-than-C, which is
> nonsense, isn't it?
Maybe not. If one can call functions from a system dll (a la ctypes; another poster already mentioned there has been some investigation in this area), one can skip a layer of the hierarchy (remove the C-coded middleman!), which could result in faster code. I'm not involved in PyPy myself, but this seems a logical possibility.

To go a step further: if the compiler somehow knew the shortest machine-code sequence that would produce the desired effect, there would be no reason to limit oneself to the relatively inefficient standard code sequences inside system dll's. Just design specific optimized dll's on the fly :-)

(Now going into turbo overdrive.) One could have a central computer checking which data transformations (at a polymorphic level) a specific program is accomplishing, and 'reengineer or restructure' the code inductively to check whether some other coder had already 'said the same thing' in 'better Python code'. So one would get a warning when reinventing the wheel, even if one had invented a square one :-), or when one had distributed functionality in an inefficient way.

Next, after standardizing the input code this way, one could have a list of these 'frequently used standard sequences' memoized at the central location in order to speed up the compilation phase. Of course the central interpreter would be sensitive to local code history, which would ease the code recognition process. This would work the way human attention works, in that we recognize the word 'wheel' sooner if we have just seen a picture of a car.

The only problem with this approach is that it looks like a straight path to Borghood ...

Anton

'resistance is futile, all your codes are belong to us!'

--
http://mail.python.org/mailman/listinfo/python-list
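[The ctypes idea mentioned above can be sketched as follows. This is a minimal illustration, assuming a POSIX-ish system where `ctypes.util.find_library("c")` locates the C runtime; the library name and lookup differ per platform.]

```python
import ctypes
import ctypes.util

# Load the C runtime directly -- no hand-written C extension module
# sits between Python and the shared library ("remove the middleman").
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Declare strlen's signature so ctypes marshals the argument and
# return value correctly instead of guessing.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

# Call straight into the system library.
n = libc.strlen(b"hello")
print(n)
```

[Whether this is actually faster than a compiled extension depends on the per-call marshalling overhead, which for ctypes is often substantial; the win the post speculates about would come from a JIT removing that overhead, not from ctypes itself.]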