MPI is more comparable to Mathematica's underlying MathLink C libraries. We 
also have an optional mpi4py spkg that maps the C API to a Python API, 
essentially one-to-one. It's not pretty, but if you know MPI then you can 
use it immediately. There is a mini-introduction and a sample program at 
http://www.sagemath.org/doc/numerical_sage/parallel_computation.html
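
To give a taste of the one-to-one mapping, here is a minimal mpi4py 
send/receive. This is only a sketch; it assumes the mpi4py spkg is 
installed, and the file name and rank count are made up for illustration:

    # hello_mpi.py: run with e.g. `mpiexec -n 2 python hello_mpi.py`
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    if rank == 0:
        # send a Python object to rank 1; mpi4py pickles it automatically
        comm.send({'msg': 'hello from rank 0'}, dest=1, tag=11)
    elif rank == 1:
        data = comm.recv(source=0, tag=11)
        print('rank 1 received:', data)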

The @parallel decorator could be extended to optionally use MPI as a 
parallel processing backend. Right now @parallel(p_iter) supports 'fork', 
'multiprocessing' (the Python multiprocessing module) and 'reference' 
(serial execution). An MPI backend would pickle the function and its 
arguments, MPI broadcast them to the cluster, execute them as a 
bag-of-tasks, and finally collect the answers from the nodes. The only 
constraint would be that, obviously, you can't access global variables on 
the compute nodes. As far as I understand it from the link you posted, this 
is essentially what Mathematica's ParallelTools do.
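
To make the mechanics concrete, here is a hypothetical sketch of such a 
backend. Nothing below exists in Sage; mpi_bag_of_tasks and square are 
made-up names, and mpi4py's bcast/gather do the pickling under the hood:

    from mpi4py import MPI

    def square(n):    # task function; must be importable on every rank
        return n * n

    def mpi_bag_of_tasks(f, inputs):
        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()
        # broadcast the (function, inputs) pair; mpi4py pickles it for us
        f, args = comm.bcast((f, list(inputs)), root=0)
        # each rank takes a strided share of the bag of tasks; globals
        # on the master are never shipped to the compute nodes
        local = [(a, f(a)) for a in args[rank::size]]
        # gather the (input, result) pairs back on rank 0
        chunks = comm.gather(local, root=0)
        if rank == 0:
            return [pair for chunk in chunks for pair in chunk]

    if __name__ == '__main__':
        results = mpi_bag_of_tasks(square, range(16))
        if results is not None:    # only rank 0 gets the gathered results
            print(sorted(results))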

But again, that only works for embarrassingly parallel workloads. If you 
need a lot of cluster communication then there is no way around learning 
MPI. For example, MPI allows you to (a short mpi4py sketch of the first 
two items follows this list)
 * send/receive without blocking (a background thread deals with the 
cluster communication); 
 * perform reductions (e.g. each node has a number and you want to add 
them up) in O(log(n)) steps instead of O(n);
 * exploit fast network hardware (Cray's SeaStar interconnect has more 
bandwidth than AMD's HyperTransport link between the CPU and the local 
RAM!).
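
Here is a small mpi4py illustration of non-blocking messages and a 
reduction; the ring pattern and values are made up for illustration. Run 
with e.g. `mpiexec -n 4 python ring.py`:

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # non-blocking send/receive around a ring: both calls return
    # immediately and the MPI library moves the data in the background
    right, left = (rank + 1) % size, (rank - 1) % size
    req_send = comm.isend(rank, dest=right, tag=0)
    req_recv = comm.irecv(source=left, tag=0)
    # ... overlap useful computation here ...
    neighbor = req_recv.wait()
    req_send.wait()
    print('rank %d received token from rank %d' % (rank, neighbor))

    # reduction: every rank contributes one number and MPI combines
    # them in O(log(n)) communication steps, not O(n)
    total = comm.allreduce(rank, op=MPI.SUM)
    if rank == 0:
        print('sum of all ranks:', total)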

Volker
