An additional question: did you re-compile / re-link your
application with Open MPI? If you're running an MPI application
compiled / linked against another MPI implementation, it may not see
the Open MPI-specific startup information about how to start up the
parallel processes (e.g., their ranks in MPI_COMM_WORLD, etc.), and
each process may therefore assume it is its own singleton
MPI_COMM_WORLD (i.e., 4 different COMM_WORLDs, each with a single
process: rank 0).
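A quick way to check is a minimal test program that prints each
process's rank and the size of MPI_COMM_WORLD (just a sketch; the file
and program names below are placeholders):

    /* rank_check.c: print this process's rank and the size of
     * MPI_COMM_WORLD. If every copy started by "mpirun -np 4" prints
     * "rank 0 of 1", the processes are not seeing the Open MPI startup
     * information and are each running as a singleton. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        printf("rank %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }

Compile it with Open MPI's wrapper compiler and launch it with Open
MPI's mpirun, e.g.:

    mpicc rank_check.c -o rank_check
    mpirun -np 4 ./rank_check

With a correct Open MPI build you should see ranks 0 through 3 out of
4; four lines of "rank 0 of 1" means each process is running as a
singleton, i.e., the application is not picking up Open MPI at run
time. "which mpicc mpirun" can confirm whether the Open MPI wrappers
are the ones on your PATH.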
On Oct 31, 2006, at 2:51 PM, Ralph H Castain wrote:
Just out of curiosity – what environment (i.e., allocator and
launcher) are you running in? POE?
I’m not sure the POE support is all that good, which is why I ask.
Ralph
On 10/31/06 12:37 PM, "Nader Ahmadi" <a_na...@hotmail.com> wrote:
Hello,
I am a new Open MPI user. We are planning to move from IBM AIX POE
to Open MPI.
I had no problem installing, configuring, and compiling my application
using Open MPI 1.1.2
(thank you for making it so easy).
"ompi_info -all" runs fine (see attached ompi_info.txt file), and my
application runs with no problem,
except that it only creates rank 0. For example, if I run
>> mpirun -np 4 my-prog arg1 arg2
I expect mpirun to start 4 processes on the local host with ranks 0,
1, 2, and 3.
Instead, I see 4 processes started, all with rank 0 (see attached
mpirun_log.txt file).
The behavior is the same regardless of whether I use the local host,
a host file, or an app file.
In all cases the correct number of processes starts on the local or
remote nodes as specified, but every process reports rank 0.
Note: I have no problem running this application on IBM AIX using
POE.
I know this must be a new user's problem.
Any comments?
Thanks,
Nader
--
Jeff Squyres
Server Virtualization Business Unit
Cisco Systems