I'm using Rmpi (a pretty thin wrapper around MPI for R) on Debian Lenny
(amd64).  My setup has a central calculator and a bunch of slaves to
which work is distributed.

The slaves wait like this:
        mpi.send(as.double(0), doubleType, root, requestCode, comm=comm)
        request <- request+1
        cases <- mpi.recv(cases, integerType, root, mpi.any.tag(), comm=comm)

I.e., they do a simple send and then a receive.

It's possible there's no one to talk to, so a slave could be stuck in
either mpi.send or mpi.recv.
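
In case it's relevant: OpenMPI's blocking calls typically busy-poll
while waiting, so an idle slave can sit at 100% CPU.  If that turns out
to be the problem here, one workaround is to probe for a message and
sleep between checks instead of blocking in mpi.recv.  This is only a
sketch, assuming Rmpi's mpi.iprobe() and the same comm/root/tag
variables as in the snippet above:

        # Poll for a pending message from root, yielding the CPU between
        # probes, then do the actual receive once something has arrived.
        while (!mpi.iprobe(source = root, tag = mpi.any.tag(), comm = comm)) {
            Sys.sleep(0.01)   # sleep 10 ms between probes
        }
        cases <- mpi.recv(cases, integerType, root, mpi.any.tag(), comm = comm)

The trade-off is a small added latency (up to one sleep interval) per
message, in exchange for a near-idle CPU while waiting.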

Should either of those operations chew up CPU?  At this point, I'm just
trying to figure out where to look for the source of the problem.

Running openmpi-bin 1.2.7~rc2-2

Ross
