Jeff Squyres wrote:
On May 24, 2011, at 4:42 AM, francoise.r...@obs.ujf-grenoble.fr wrote:
CALL MPI_COMM_SIZE( id%COMM, id%NPROCS, IERR )
IF ( id%PAR .eq. 0 ) THEN
   IF ( id%MYID .eq. MASTER ) THEN
      color = MPI_UNDEFINED
   ELSE
      color = 0
   END IF
   CALL MPI_COMM_SPLIT( id%COMM, color, 0, id%COMM_NODES, IERR )
   id%NSLAVES = id%NPROCS - 1
ELSE
   CALL MPI_COMM_DUP( id%COMM, id%COMM_NODES, IERR )
   id%NSLAVES = id%NPROCS
END IF
IF ( id%PAR .ne. 0 .or. id%MYID .NE. MASTER ) THEN
   CALL MPI_COMM_DUP( id%COMM_NODES, id%COMM_LOAD, IERR )
END IF
Actually, we are looking at the first case, that is id%PAR = 0. The MPI_COMM_SPLIT routine is called
by all the processes and creates a new communicator named "id%COMM_NODES". This
communicator contains all the slaves but not the master, since the master passes color = MPI_UNDEFINED
and is therefore left out of the new communicator. The first MPI_COMM_DUP is not executed;
the second one is executed by all the slave processes (id%MYID .NE. MASTER), because it operates
on "id%COMM_NODES" and so involves all the processes of that communicator.
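To make the membership argument concrete, here is a small sketch (in Python, not MPI) of which ranks end up in id%COMM_NODES under the two settings of id%PAR. The function name and the assumption MASTER = 0 are illustrative, not part of the MUMPS code.

```python
# Model of the color logic in the MPI_COMM_SPLIT call above,
# assuming MASTER = 0 (an assumption; MUMPS defines MASTER itself).
MPI_UNDEFINED = -32766  # a rank passing this color joins no new communicator


def comm_nodes_members(nprocs, par, master=0):
    """Return the list of ranks that belong to id%COMM_NODES."""
    if par == 0:
        # Master passes color = MPI_UNDEFINED and is excluded;
        # every slave passes color = 0 and joins the new communicator.
        return [rank for rank in range(nprocs) if rank != master]
    # id%PAR /= 0: MPI_COMM_DUP duplicates id%COMM, so all ranks are kept.
    return list(range(nprocs))


print(comm_nodes_members(4, par=0))  # slaves only: [1, 2, 3]
print(comm_nodes_members(4, par=1))  # all ranks:   [0, 1, 2, 3]
```

The second MPI_COMM_DUP is then collective over exactly these members, which is why it must be guarded so the excluded master (whose id%COMM_NODES is MPI_COMM_NULL when id%PAR = 0) does not call it.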
Hmm.
Are you sure that id%myid is relative to id%comm? I don't see its assignment
in your code snippet.
Yes, id%myid is relative to id%comm. It is assigned just before in the
code, by all the processes, with the following call:
CALL MPI_COMM_RANK(id%COMM, id%MYID, IERR)