On 9 Dec, 22:14, "Jack" <[EMAIL PROTECTED]> wrote:
> I understand that the standard Python distribution is considered
> the C-Python. However, the current C-Python is really a combination
> of C and Python implementations. There are about 2000 Python files
> included in the Windows version of the Python distribution. I'm not
> sure how much of C-Python is implemented in C, but I think the more
> modules implemented in C, the better performance and lower memory
> footprint it will get.
Donald Knuth, one of the fathers of modern computer science, is famous
for stating that "premature optimization is the root of all evil." A
typical computer program tends to have a few bottlenecks that account
for more than 90% of the elapsed run time. Directing your optimizations
anywhere else is futile.

Writing a program in C will not improve the speed of your hardware. If
the bottleneck is a hard disk or a network connection, using C will not
change that. Disk I/O is a typical example: it is not the language that
determines how fast Python or C can read from a disk. It is the disk
itself.

I had a data visualization program that was slowed down by the need to
move hundreds of megabytes of vertex data to video RAM. It would
obviously not have helped to make the handful of OpenGL calls from C
instead of Python. The problem was the amount of data and the speed of
the hardware (RAM or bus). The fact that I used Python instead of C
actually made the problem easier to solve.

We have seen several examples that 'dynamic' and 'interpreted'
languages can be quite efficient. There is an implementation of Common
Lisp - CMUCL - that can compete with Fortran in efficiency for
numerical computing. There are also versions of Lisp that can compete
with the latest JIT-compiled Java, e.g. SBCL and Allegro. As it
happens, SBCL and CMUCL are themselves mostly implemented in Lisp.

The issue of speed for a language like Python has a lot to do with the
quality of the implementation. What really makes CMUCL shine is a
compiler that emits efficient native code on the fly. If it is possible
to make a very fast Lisp, it should be possible to make a very fast
Python as well. I remember people complaining 10 years ago that 'Lisp
is so slow'. A huge effort went into making Lisp efficient enough for
AI. I hope Python will some day gain a little from that effort as well.

We already have a Python library that allows us to perform a wide range
of numerical tasks at 'native speed': NumPy (http://www.scipy.org). How
such array libraries can be used to get excellent speedups is explained
here: http://home.online.no/~pjacklam/matlab/doc/mtt/index.html

We obviously need more effort to make Python more efficient for
CPU-bound tasks: particularly JIT compilation as in Java, native
compilation as in Lisp, or data specialization as in Psyco. But writing
larger parts of the standard library in C is not the solution.
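To make the NumPy point concrete, here is a minimal sketch of what
'native speed' means in practice. The array size (one million
elements) is an arbitrary choice for illustration, and the actual
timings will of course depend on your machine:

import time
import numpy

n = 1000000

# Pure Python: square every element in a loop that runs in the
# bytecode interpreter.
data = [float(i) for i in xrange(n)]
t0 = time.time()
squares = [x * x for x in data]
t1 = time.time()
print "Python loop:      %.3f s" % (t1 - t0)

# NumPy: the same operation as a single vectorized expression;
# the per-element loop runs in compiled C inside NumPy.
a = numpy.arange(n, dtype=float)
t0 = time.time()
b = a * a
t1 = time.time()
print "NumPy vectorized: %.3f s" % (t1 - t0)

On typical hardware the vectorized version is faster by one to two
orders of magnitude, precisely because the inner loop is executed by
compiled C code rather than the interpreter - which is the same reason
the CPU-bound parts of a program benefit from a good compiler, while
the I/O-bound parts do not.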