On May 2, 11:08 pm, "Diez B. Roggisch" <[EMAIL PROTECTED]> wrote:
> And AFAIK the general overhead of laziness versus eager evaluation does
> not pay off - Haskell is a tad slower than e.g. an ML dialect AFAIK.

In the numerical Python community there is already a prototype compiler
called 'numexpr' which can efficiently evaluate expressions like
y = a*b + c*d. But as long as there is no way to overload an assignment,
it cannot be seamlessly integrated into an array framework. One has to
e.g. type up Python expressions as strings and call eval() on the
string, instead of working directly with Python expressions.

In numerical work we all know how Fortran compares with C++. Fortran
knows about arrays and can generate efficient code; C++ doesn't, and has
to resort to temporaries returned from overloaded operators. The only
case where C++ can compare to Fortran is with libraries like Blitz++,
where for small fixed-size arrays the temporary objects and loops can be
removed using template meta-programming and optimizing compilers.

NumPy has to generate a lot of temporary arrays and traverse memory more
than necessary. This is a tremendous slowdown when the arrays are too
large to fit in the CPU cache. Numexpr deals with this, but Python
cannot integrate it seamlessly.

I think it is really a matter of what you are trying to do. Sometimes
lazy evaluation pays off, sometimes it doesn't. But overloaded
assignment operators have more uses than just lazy evaluation; they can
be used and abused in numerous ways. For example, one can have classes
where every assignment results in the creation of a copy, which may seem
to totally change the semantics of Python code (except that it doesn't;
it's just an illusion).

Sturla Molden
--
http://mail.python.org/mailman/listinfo/python-list
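To make the point about assignment concrete, here is a minimal sketch
(all names hypothetical, not from numexpr itself) of a deferred
expression class. Overloaded operators like __mul__ and __add__ can
lazily record an expression tree for y = a*b + c*d, but since plain name
assignment cannot be overloaded in Python, nothing can trigger
compilation at the "=" sign; an explicit eval step is still needed:

```python
class Expr:
    """Lazily records arithmetic instead of computing it."""

    def __init__(self, op, *args):
        self.op, self.args = op, args

    # Operator overloading builds an expression tree...
    def __mul__(self, other):
        return Expr('*', self, other)

    def __add__(self, other):
        return Expr('+', self, other)

    # ...which must be evaluated explicitly, because `y = expr`
    # merely rebinds the name y to the tree.
    def eval(self):
        vals = [a.eval() if isinstance(a, Expr) else a
                for a in self.args]
        if self.op == 'leaf':
            return vals[0]
        if self.op == '*':
            return vals[0] * vals[1]
        if self.op == '+':
            return vals[0] + vals[1]


def leaf(x):
    return Expr('leaf', x)


a, b, c, d = leaf(2.0), leaf(3.0), leaf(4.0), leaf(5.0)
y = a*b + c*d                # y is an unevaluated Expr, not 26.0
print(isinstance(y, Expr))   # True -- assignment did not force evaluation
print(y.eval())              # 26.0
```

A compiler like numexpr could in principle fuse the whole tree into one
cache-friendly loop, but without an overloadable assignment the user has
to call something like eval() by hand, which is exactly the seam that
cannot be hidden.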