> >> And if you really need the efficiency of "well-tuned raw C", it's one
> >> function call away in your Cython code.
> >
> > What do you mean by that?
> >
> > I know nothing about how Cython compares to C in performance, so I said
> > "well-tuned" because it must be possible to write C that is faster than
> > Cython, though it may take some effort.
>
> So, you write the hand-optimised function in plain C, declare it in Cython
> and call it. That's what I meant. Since Cython compiles to C code, linking
> against a C module is straightforward. And this still keeps you from having
> to write all the Python API glue code in plain C.
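Right, and for anyone following along, here's a minimal sketch of that
pattern. (fastmath.h, fast_dot() and wrapper.pyx are made-up names;
substitute whatever hand-tuned C you actually have.)

    # wrapper.pyx -- hypothetical module wrapping a hand-written C function.
    # fastmath.h / fast_dot() are stand-ins for your own optimised C code.
    cdef extern from "fastmath.h":
        double fast_dot(double *a, double *b, int n)

    def dot(double[::1] a, double[::1] b):
        # Typed memoryviews give C-contiguous buffers, so their
        # addresses can be handed straight to the C function.
        # (Assumes non-empty, equal-length inputs; a real wrapper
        # would check that.)
        return fast_dot(&a[0], &b[0], <int>a.shape[0])

Compile the .pyx alongside fastmath.c and callers never see any of the
CPython API glue.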
Python was originally intended to just link C modules, right? (If "link" is
even the right word.) What are Python's roots? What are its principles, its
fundamentals? (And what will Python 8 look like!?)

We can even get multi-threading running outside the Global Interpreter Lock,
if each thread only acquires it to access shared objects... make that managed
objects. (There's a sketch of that pattern below.) One big decision is
whether you run a separate Python interpreter at every remote location,
versus using Python as just a relay mechanism, with C doing the rest.

Correct me if I'm wrong, but productivity bottlenecks and performance
bottlenecks make a trade-off. For my part, sometimes I hit __getattr__
bottlenecks, other times object-instantiation bottlenecks; there was an
iterator one in there too. But no, I didn't have competing C code to compare
against. (The second sketch below shows one way to time the instantiation
case.)

If I understand the OP correctly, C libraries will be running at all
locations, which means they'll need separate compilations per platform. In
fact, come to think of it, I'm having trouble with the concept of
cross-platform distributed. Does the OP mean that lots of programmers are
sharing data? Will all the platforms be performing all of the tasks, or do
they specialize? (Not necessarily / no / yes is fine.)

Lastly, if you can get a boost by buying more servers, then there's a
resource-bottleneck breakpoint to consider too.
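On the GIL point above: Cython lets a loop that touches only C-level data
release the GIL entirely. A rough sketch (parallel_sum.pyx is a made-up
name, and you'd need OpenMP enabled at compile time for prange to actually
run in parallel):

    # parallel_sum.pyx -- illustrative only.
    from cython.parallel import prange

    def sum_squares(double[::1] a):
        cdef double total = 0.0
        cdef Py_ssize_t i
        # prange(..., nogil=True) drops the GIL for the loop body;
        # that's legal because the body touches only C-level data.
        for i in prange(a.shape[0], nogil=True):
            total += a[i] * a[i]   # Cython treats this as a reduction
        return total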
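And on the instantiation bottleneck: a quick way to see the cost is plain
timeit. Point and SlotPoint here are just illustrative classes, with
__slots__ shown as one common mitigation:

    import timeit

    class Point:                      # ordinary class: per-instance __dict__
        def __init__(self, x, y):
            self.x, self.y = x, y

    class SlotPoint:
        __slots__ = ("x", "y")        # no __dict__: cheaper to instantiate
        def __init__(self, x, y):
            self.x, self.y = x, y

    print(timeit.timeit("Point(1, 2)",
                        setup="from __main__ import Point"))
    print(timeit.timeit("SlotPoint(1, 2)",
                        setup="from __main__ import SlotPoint"))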