Thank you very much for the examples.

I tried the following program, based on the guidance here and additional 
information I found through Google:

-------------------------------
PROGRAM mpitest1
IMPLICIT none

CHARACTER*6 :: dir
CHARACTER*1 :: crank

! MPI parameters
INCLUDE 'mpif.h'
INTEGER :: status(MPI_STATUS_SIZE)
INTEGER :: child_comm, ierr, info, irank, isize, nkeys
INTEGER :: errorcode_array(1)

! MPI Initialisation
CALL MPI_INIT(ierr)
CALL MPI_COMM_RANK(MPI_COMM_WORLD,irank,ierr) ! Gets my rank
CALL MPI_COMM_SIZE(MPI_COMM_WORLD,isize,ierr) ! Gets the com size
CALL MPI_INFO_CREATE(info, ierr) ! Prepare MPI INFO field

! Prepare a directory for each process based on the process rank
! (i.e. test-0, test-1, test-2, ...)
WRITE(crank,'(I1)') irank ! Rank as a character (assumes single-digit ranks)
dir="test-" // crank
CALL SYSTEM("mkdir " // dir)

CALL SYSTEM("cp mpitest-2.ex " // dir // "/") ! Copy mpitest-2 to the directory 
for each processor (checked - this works fine)

! Set the working directory for the external simulation (to keep it simple,
! we just use the directory created by process 0, which is called test-0)
CALL MPI_INFO_SET(info, "wdir", "test-0", ierr)

CALL MPI_Info_get_nkeys(info,nkeys,ierr) ! Sanity check: count the info keys

print *, nkeys

CALL MPI_COMM_SPAWN("./mpitest-2.ex", MPI_ARGV_NULL, 1, info, 0, &
                    MPI_COMM_SELF, child_comm, errorcode_array, ierr)

CALL MPI_INFO_FREE(info, ierr) ! Release the info object
CALL MPI_FINALIZE(ierr)

END PROGRAM mpitest1
-------------------------------
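
As an aside, my understanding is that the spawned side can retrieve the 
intercommunicator back to its parent with MPI_COMM_GET_PARENT. A minimal 
child following that pattern would look something like the sketch below 
(this is just the general pattern as I understand it, not the actual 
mpitest-2 source, which I haven't included):

-------------------------------
PROGRAM mpitest2
IMPLICIT none

INCLUDE 'mpif.h'
INTEGER :: parent_comm, ierr

CALL MPI_INIT(ierr)

! The intercommunicator to the parent; MPI_COMM_NULL means this
! process was not started via MPI_COMM_SPAWN
CALL MPI_COMM_GET_PARENT(parent_comm, ierr)

IF (parent_comm == MPI_COMM_NULL) THEN
   print *, "not started by a parent"
ELSE
   print *, "spawned child running"
END IF

CALL MPI_FINALIZE(ierr)

END PROGRAM mpitest2
-------------------------------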


The parent program gets as far as printing nkeys, which correctly comes out 
as "1". It then crashes on the spawn call, though:

------------------------------
Fatal error in MPI_Comm_spawn: Other MPI error, error stack:
MPI_Comm_spawn(130).........: MPI_Comm_spawn(cmd="./mpitest-2.ex", 
argv=0x75e920, maxprocs=1, info=0x9c000000, root=0, MPI_COMM_SELF, 
intercomm=0x7fffe29510e4, errors=0x6b20d0) failed
MPID_Comm_spawn_multiple(71): Function MPID_Comm_spawn_multiple not 
implemented
Fatal error in MPI_Comm_spawn: Other MPI error, error stack:
MPI_Comm_spawn(130).........: MPI_Comm_spawn(cmd="./mpitest-2.ex", 
argv=0xd44920, maxprocs=1, info=0x9c000000, root=0, MPI_COMM_SELF, 
intercomm=0x7fffb38d08e4, errors=0x6b20d0) failed
MPID_Comm_spawn_multiple(71): Function MPID_Comm_spawn_multiple not 
implemented
rank 1 in job 190041  december_56088   caused collective abort of all ranks
  exit status of rank 1: return code 1
------------------------------

The error says something about MPID_Comm_spawn_multiple not being
implemented, but I don't know what that is; I'm only spawning one
instance, not multiple instances. The parameters MPI_COMM_SELF and
child_comm I've used blindly, based on what I've found through Google so
far; I don't really understand them and can't find clear documentation.
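
For what it's worth, here is how I currently read the arguments to the 
spawn call (my own annotation of the same call from the program above; I 
may have some of this wrong):

-------------------------------
CALL MPI_COMM_SPAWN( &
     "./mpitest-2.ex", & ! command: the executable to launch
     MPI_ARGV_NULL,    & ! argv: no command-line arguments for the children
     1,                & ! maxprocs: number of child processes to start
     info,             & ! info: hints, e.g. the "wdir" key set above
     0,                & ! root: the rank (in the communicator below) whose
                         !       command/argv/maxprocs/info values are used
     MPI_COMM_SELF,    & ! comm: the call is collective over this
                         !       communicator; with MPI_COMM_SELF each
                         !       process spawns its own child independently
     child_comm,       & ! intercomm: new intercommunicator connecting the
                         !            parent(s) to the spawned children
     errorcode_array,  & ! array_of_errcodes: one error code per child
     ierr)
-------------------------------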

I should also say that I'm back in the testing environment for now (i.e. 
mvapich2), although it will eventually be run under Open MPI (or optionally 
mvapich1, but not mvapich2).

As ever, I'm very grateful for all the help so far, and am hoping this can 
eventually be solved, step-by-step.

From: r...@open-mpi.org
Date: Sun, 7 Mar 2010 01:00:40 -0700
To: us...@open-mpi.org
Subject: Re: [OMPI users] running external program on same processor (Fortran)



Attached are some simple examples (in C) that collectively do most of what 
you are trying to do.
You have some args wrong in your call. See slave_spawn.c for how to use 
info_keys.
HTH
Ralph
                                          