On Apr 17, 7:37 pm, Paul Rubin <http://[EMAIL PROTECTED]> wrote:
> Therefore the likelihood of a C or asm program
> being 20x faster including disk i/o is dim.  But realistically,
> counting just CPU time, you might get a 20x speedup with assembler if
> you're really determined, using x86 SSE (128-bit vector) instructions,
> cache prefetching, etc.

I think the prevalent attitude is that although Python and other
"interpreted" languages are slow, there are places where that slowness
doesn't matter, so using them is OK.

The point I am trying to make with my example is that, by leveraging the
care and optimization work that others have put into the Python standard
library over the years, a rather casual programmer can often match or
better the performance of "average" C code: that is, C code written by a
good C programmer while no one was looking over his shoulder, with no
pressure motivating him to spend a lot of time optimizing the C to the
hilt or switching to asm.

With the reading and writing of the data (which actually works out to
about 23MB, marshalled) now down to 1 second each, I'm content. In the
beginning, the I/O time overshadowed the sort time by a good bit, and it
was the sort time that I wanted to highlight.

BTW, no one has responded to my challenge to best the original sample
Python code with C, C++, or asm.
--
http://mail.python.org/mailman/listinfo/python-list
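P.S. The original sample code isn't reproduced in this message, but for
anyone following along, a minimal sketch of the general technique under
discussion (marshal for fast C-level binary I/O, the built-in list sort
for the CPU-bound part) might look like the following. The filename,
list size, and random data are placeholders, not the actual benchmark:

```python
import marshal
import random
import time

# Placeholder data set; the real benchmark used a much larger one
# (about 23MB marshalled).
data = [random.random() for _ in range(100000)]

# marshal writes Python objects in a compact binary form, with the
# serialization loop running in C rather than interpreted bytecode.
with open("data.marshal", "wb") as f:
    marshal.dump(data, f)

with open("data.marshal", "rb") as f:
    loaded = marshal.load(f)

# list.sort() is Timsort, also implemented in C.
t0 = time.time()
loaded.sort()
elapsed = time.time() - t0
print("sorted %d items in %.3f seconds" % (len(loaded), elapsed))
```

The point is that both the I/O and the sort spend nearly all their time
in optimized C inside the interpreter, which is why casually written
Python can be competitive with casually written C here.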