Vincent Manis <vma...@telus.net> writes:

> On 2009-11-11, at 14:31, Alain Ketterlin wrote:
>
> I'm having some trouble understanding this thread. My comments aren't
> directed at Terry's or Alain's comments, but at the thread overall.
>
> 1. The statement `Python is slow' doesn't make any sense to me. Python
> is a programming language; it is implementations that have speed or
> lack thereof.

This is generally true, but Python *the language* is specified in a way
that makes executing Python programs quickly very, very difficult. I'm
tempted to say it's impossible, but great strides have been made
recently with JITs, so we'll see.

> 2. A skilled programmer could build an implementation that compiled
> Python code into Common Lisp or Scheme code, and then used a
> high-performance Common Lisp compiler such as SBCL, or a
> high-performance Scheme compiler such as Chez Scheme, to produce
> quite fast code ...

A skilled programmer has done this for Common Lisp. The CLPython
implementation converts Python source code to Common Lisp code at read
time, which is then compiled. With SBCL you get native machine code for
every Python expression.

http://github.com/franzinc/cl-python/
http://common-lisp.net/project/clpython/

If you want to know why Python *the language* is slow, look at the Lisp
code CLPython generates and at the code implementing the run time.
Simple operations end up being very expensive. Does the object on the
left side of a comparison implement compare? No, then does the right
side implement it? No, then try something else ... I'm sure someone can
come up with a faster Python implementation, but it will have to be
very clever.
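To make that concrete, here is a simplified model (my own sketch, not
CLPython's actual code) of what a single "a < b" has to do. The real
rules are hairier still: a right-hand operand whose type is a subclass
of the left's gets tried first, and Python 2 additionally falls back
to __cmp__.

    def less_than(a, b):
        # Two dynamic method lookups plus NotImplemented checks for
        # every single comparison -- this is why "simple" operations
        # end up expensive.
        lt = getattr(type(a), '__lt__', None)
        if lt is not None:
            result = lt(a, b)           # ask the left operand first
            if result is not NotImplemented:
                return result
        gt = getattr(type(b), '__gt__', None)
        if gt is not None:
            result = gt(b, a)           # then the reflected method
            if result is not NotImplemented:
                return result
        raise TypeError("unorderable operands")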
> This whole approach would be a bad idea, because the compile times
> would be dreadful, but I use this example as an existence proof that
> Python implementations can generate reasonably efficient executable
> programs.

The compile times are fine, not dreadful. Give it a try.

> 3. It is certainly true that CPython doesn't scale up to environments
> where there are a significant number of processors with shared
> memory.

Even on one processor, CPython has problems. I last seriously used
CPython to analyze OCRed books. The code read in the OCR results for
one book at a time, which included the position of every word on every
page. My books were long (2000 pages) and dense, and I was constantly
fighting address space limitations and CPython slowness related to
memory usage. I had to resort to packing and unpacking data into
Python integers in order to fit all the OCR data into RAM, roughly as
sketched below.
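The actual code is long gone, so this is just the idea, with made-up
field names and widths: pack a word's position fields into one Python
int instead of keeping a tuple of several small ints, which saves
several objects' worth of overhead per word.

    # Hypothetical field widths for one OCRed word's position.
    PAGE_BITS, X_BITS, Y_BITS = 16, 16, 16

    def pack(page, x, y):
        # One int carries all three fields.
        return (page << (X_BITS + Y_BITS)) | (x << Y_BITS) | y

    def unpack(word):
        # Reverse the shifts and masks to recover the fields.
        y = word & ((1 << Y_BITS) - 1)
        x = (word >> Y_BITS) & ((1 << X_BITS) - 1)
        page = word >> (X_BITS + Y_BITS)
        return page, x, y

bob
-- 
http://mail.python.org/mailman/listinfo/python-list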