Great. That suggests mpirun is OMPI, as you suggested it was, and that the different process ranks are being set up correctly by mpirun.

But can you also confirm that you're using the OMPI mpicc or mpif90? Invoke mpicc with its full path name, perhaps -- that is, /usr/users/pankatz/OPENmpi/bin/mpicc -- or stick a

#ifdef OPEN_MPI
   printf("using Open MPI\n");
#else
   printf("not using Open MPI\n");
#endif

into your source code.
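A quick way to check the same thing from the command line (a sketch; `--showme` is an Open MPI-specific wrapper flag, while MPICH-derived wrappers such as mvapich's use `-show` instead, so the flag itself distinguishes the two families):

```shell
# Show which mpicc and mpirun come first in PATH (no output for a
# command means it is not on PATH at all):
type -p mpicc mpirun || true

# Open MPI's wrapper compilers accept --showme, which prints the
# underlying compiler command line without compiling anything;
# an MPICH/mvapich-style wrapper rejects it:
mpicc --showme || echo "mpicc missing or not an Open MPI wrapper"
```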

Pankatz, Klaus wrote:

Alright, I ran mpirun -np 4 env, and I see OMPI_COMM_WORLD_RANK 0 to 3. So far so good.
OMPI_COMM_WORLD_SIZE=4 every time; I think that's correct.
OMPI_MCA_mpi_yield_when_idle=0 every time
OMPI_MCA_orte_app_num=0 every time
On 23.04.2010, at 14:54, Terry Dontje wrote:

OK, can you do an "mpirun -np 4 env"? You should see OMPI_COMM_WORLD_RANK range from 0 through 3. I am curious whether you even see OMPI_* env vars at all, and if you do, whether this one is 0 for all procs.
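That check can be filtered down to just the relevant variables (a sketch; it assumes Open MPI's mpirun is the one on PATH -- the OMPI_* variables are exported into each process's environment by Open MPI's launcher):

```shell
# Each of the 4 launched processes prints its environment; keep only
# the Open MPI world rank/size variables and sort for readability:
mpirun -np 4 env | grep -E '^OMPI_COMM_WORLD_(RANK|SIZE)=' | sort
```

Under a non-Open-MPI launcher the grep simply matches nothing, which is itself diagnostic.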

Pankatz, Klaus wrote:

Yeah, I'm sure that I use the right mpirun.
which mpirun leads to /usr/users/pankatz/OPENmpi/bin/mpirun, which is the right one.
________________________________________
From: users-boun...@open-mpi.org [users-boun...@open-mpi.org] on behalf of Terry Dontje [terry.don...@oracle.com]
Sent: Friday, 23 April 2010 14:29
To: Open MPI Users
Subject: Re: [OMPI users] mpirun -np 4 hello_world; on an eight-processor shared memory machine produces wrong output

This looks like you are using an mpirun or mpiexec from mvapich to run an 
executable compiled with OMPI.  Can you make sure that you are using the right 
mpirun?

--td

Pankatz, Klaus wrote:

Yes, I did that.

It is basically the same problem with a Fortran version of this little program; for that I used openMPI's mpif90 command.
________________________________________
From: users-boun...@open-mpi.org [users-boun...@open-mpi.org] on behalf of Reuti [re...@staff.uni-marburg.de]
Sent: Friday, 23 April 2010 14:15
To: Open MPI Users
Subject: Re: [OMPI users] mpirun -np 4 hello_world; on an eight-processor shared memory machine produces wrong output

Hi,

On 23.04.2010, at 14:06, Pankatz, Klaus wrote:



Hi all,

there's a problem with openMPI on my machine. When I simply try to run this little hello_world program on multiple processors, the output isn't as expected:
*****
C code:
#include <mpi.h>
#include <stdio.h>
#include <unistd.h>
int main(int argc, char **argv)
{
    int size, rank;
    char hostname[50];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); // Who am I?
    MPI_Comm_size(MPI_COMM_WORLD, &size); // How many processes?
    gethostname(hostname, 50);
    printf("Hello World! I'm number %2d of %2d running on host %s\n",
           rank, size, hostname);
    MPI_Finalize();
    return 0;
}
****

Command: mpirun -np 4 a.out



the mpirun (better: use mpiexec) is the one from Open MPI, and did you also use its mpicc to compile the program?

-- Reuti




Output:
Hello World! I'm number  0 of  1 running on host marvin
Hello World! I'm number  0 of  1 running on host marvin
Hello World! I'm number  0 of  1 running on host marvin
Hello World! I'm number  0 of  1 running on host marvin

It should be more or less:
Hello World! I'm number  1 of  4 running on host marvin
Hello World! I'm number  2 of  4 running on host marvin
....
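When every rank reports "0 of 1", the usual cause is that the executable was linked against one MPI implementation but launched by another's mpirun, so MPI_Init never sees the launcher's rank information. One hedged way to check which MPI library the binary actually resolves at load time (ldd is standard on Linux; the exact library name depends on the build):

```shell
# List the shared libraries a.out resolves and keep the MPI-related
# ones; the paths should point into the Open MPI install prefix
# (e.g. /usr/users/pankatz/OPENmpi/lib), not some other MPI's:
ldd ./a.out | grep -i 'mpi' || true
```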

OpenMPI version 1.4.1, compiled with Lahey Fortran 95 (lf95).
OpenMPI was compiled "out of the box", changing only to the Lahey compiler with a setenv $FC lf95

The System: Linux marvin 2.6.27.6-1 #1 SMP Sat Nov 15 20:19:04 CET 2008 x86_64 
GNU/Linux

Compiler: Lahey/Fujitsu Linux64 Fortran Compiler Release L8.10a
