I searched the list archive for an answer and couldn't find one, so I apologize in advance
if this question is redundant.
I am trying to use Open MPI over OpenIB on a cluster that uses SLURM as the resource manager.
The program is just a simple hello world:
#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
  int myid, numprocs;

  MPI_Init(&argc, &argv);
  MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
  MPI_Comm_rank(MPI_COMM_WORLD, &myid);

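  /* every rank should report its own rank and the size of MPI_COMM_WORLD */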
  printf("%d of %d: Hello world!\n", myid, numprocs);

  MPI_Finalize();
  return 0;
}
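
For completeness, the binary is rebuilt against whichever MPI is under test, using its
mpicc wrapper (I am assuming nothing beyond the standard wrapper compiler is needed):

  mpicc -o hello hello.c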

When I run it with MVAPICH, I get:

odev2@mhaskell:  srun -N4 -n4 --nodelist=odev[0-3,6] `pwd`/hello
1 of 4:  Hello world!
3 of 4:  Hello world!
2 of 4:  Hello world!
0 of 4:  Hello world!

When I run it with Open MPI (openmpi-1.0.2a4), I get:

odev2@mhaskell:  srun -N4 -n4 --nodelist=odev[0-3,6] `pwd`/hello
0 of 1:  Hello world!
0 of 1:  Hello world!
0 of 1:  Hello world!
0 of 1:  Hello world!

I do know that the four processes run on different nodes, but each one reports rank 0 in a
world of size 1, so it looks as if every process is initializing as a singleton instead of
joining a single four-process MPI_COMM_WORLD. The ompi_info output below does show the
slurm ras, pls, and sds components, so they were at least built. Is this a configuration
problem in the way I built Open MPI, or is launching directly with srun not supported, so
that I should be using mpirun inside the SLURM allocation instead?
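
In case it helps with diagnosis, here is a small variant of the program (only
MPI_Get_processor_name added; the rest is the same as above) that should show both which
host each rank lands on and what size it thinks MPI_COMM_WORLD is:

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
  int myid, numprocs, namelen;
  char procname[MPI_MAX_PROCESSOR_NAME];

  MPI_Init(&argc, &argv);
  MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
  MPI_Comm_rank(MPI_COMM_WORLD, &myid);

  /* report the host name so per-node placement can be confirmed
     alongside the rank/size that each process sees */
  MPI_Get_processor_name(procname, &namelen);
  printf("%d of %d on %s: Hello world!\n", myid, numprocs, procname);

  MPI_Finalize();
  return 0;
}

My expectation is that under a working launch this would print "of 4" with four different
host names, while the failing launch would print four different host names but "0 of 1"
on every line, matching the singleton behavior above.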

Thanks for any help in getting this right.
Mike Haskell
mhask...@llnl.gov

Here is the ompi_info output:
odev2@mhaskell:  ompi_info
              Open MPI: 1.0.2a4r8860
 Open MPI SVN revision: r8860
              Open RTE: 1.0.2a4r8860
 Open RTE SVN revision: r8860
                  OPAL: 1.0.2a4r8860
     OPAL SVN revision: r8860
                Prefix: /home/mhaskell/testdir/x86_64
Configured architecture: x86_64-unknown-linux-gnu
         Configured by: mhaskell
         Configured on: Fri Feb 10 16:08:48 PST 2006
        Configure host: odev2
              Built by: mhaskell
              Built on: Fri Feb 10 16:28:17 PST 2006
            Built host: odev2
            C bindings: yes
          C++ bindings: yes
    Fortran77 bindings: yes (all)
    Fortran90 bindings: no
            C compiler: gcc
   C compiler absolute: /usr/bin/gcc
          C++ compiler: g++
 C++ compiler absolute: /usr/bin/g++
    Fortran77 compiler: g77
Fortran77 compiler abs: /usr/bin/g77
    Fortran90 compiler: none
Fortran90 compiler abs: none
           C profiling: yes
         C++ profiling: yes
   Fortran77 profiling: yes
   Fortran90 profiling: no
        C++ exceptions: no
        Thread support: posix (mpi: no, progress: no)
Internal debug support: no
   MPI parameter check: runtime
Memory profiling support: no
Memory debugging support: no
       libltdl support: 1
             MCA memory: malloc_hooks (MCA v1.0, API v1.0, Component v1.0.2)
         MCA paffinity: linux (MCA v1.0, API v1.0, Component v1.0.2)
         MCA maffinity: first_use (MCA v1.0, API v1.0, Component v1.0.2)
         MCA maffinity: libnuma (MCA v1.0, API v1.0, Component v1.0.2)
             MCA timer: linux (MCA v1.0, API v1.0, Component v1.0.2)
         MCA allocator: basic (MCA v1.0, API v1.0, Component v1.0)
         MCA allocator: bucket (MCA v1.0, API v1.0, Component v1.0)
              MCA coll: basic (MCA v1.0, API v1.0, Component v1.0.2)
              MCA coll: self (MCA v1.0, API v1.0, Component v1.0.2)
              MCA coll: sm (MCA v1.0, API v1.0, Component v1.0.2)
                MCA io: romio (MCA v1.0, API v1.0, Component v1.0.2)
             MCA mpool: openib (MCA v1.0, API v1.0, Component v1.0.2)
             MCA mpool: sm (MCA v1.0, API v1.0, Component v1.0.2)
               MCA pml: ob1 (MCA v1.0, API v1.0, Component v1.0.2)
               MCA pml: teg (MCA v1.0, API v1.0, Component v1.0.2)
               MCA ptl: self (MCA v1.0, API v1.0, Component v1.0.2)
               MCA ptl: sm (MCA v1.0, API v1.0, Component v1.0.2)
               MCA ptl: tcp (MCA v1.0, API v1.0, Component v1.0.2)
               MCA btl: openib (MCA v1.0, API v1.0, Component v1.0.2)
               MCA btl: self (MCA v1.0, API v1.0, Component v1.0.2)
               MCA btl: sm (MCA v1.0, API v1.0, Component v1.0.2)
               MCA btl: tcp (MCA v1.0, API v1.0, Component v1.0)
              MCA topo: unity (MCA v1.0, API v1.0, Component v1.0.2)
               MCA gpr: null (MCA v1.0, API v1.0, Component v1.0.2)
               MCA gpr: proxy (MCA v1.0, API v1.0, Component v1.0.2)
               MCA gpr: replica (MCA v1.0, API v1.0, Component v1.0.2)
               MCA iof: proxy (MCA v1.0, API v1.0, Component v1.0.2)
               MCA iof: svc (MCA v1.0, API v1.0, Component v1.0.2)
                MCA ns: proxy (MCA v1.0, API v1.0, Component v1.0.2)
                MCA ns: replica (MCA v1.0, API v1.0, Component v1.0.2)
               MCA oob: tcp (MCA v1.0, API v1.0, Component v1.0)
               MCA ras: dash_host (MCA v1.0, API v1.0, Component v1.0.2)
               MCA ras: hostfile (MCA v1.0, API v1.0, Component v1.0.2)
               MCA ras: localhost (MCA v1.0, API v1.0, Component v1.0.2)
               MCA ras: slurm (MCA v1.0, API v1.0, Component v1.0.2)
               MCA rds: hostfile (MCA v1.0, API v1.0, Component v1.0.2)
               MCA rds: resfile (MCA v1.0, API v1.0, Component v1.0.2)
             MCA rmaps: round_robin (MCA v1.0, API v1.0, Component v1.0.2)
              MCA rmgr: proxy (MCA v1.0, API v1.0, Component v1.0.2)
              MCA rmgr: urm (MCA v1.0, API v1.0, Component v1.0.2)
               MCA rml: oob (MCA v1.0, API v1.0, Component v1.0.2)
               MCA pls: daemon (MCA v1.0, API v1.0, Component v1.0.2)
               MCA pls: fork (MCA v1.0, API v1.0, Component v1.0.2)
               MCA pls: proxy (MCA v1.0, API v1.0, Component v1.0.2)
               MCA pls: rsh (MCA v1.0, API v1.0, Component v1.0.2)
               MCA pls: slurm (MCA v1.0, API v1.0, Component v1.0.2)
               MCA sds: env (MCA v1.0, API v1.0, Component v1.0.2)
               MCA sds: pipe (MCA v1.0, API v1.0, Component v1.0.2)
               MCA sds: seed (MCA v1.0, API v1.0, Component v1.0.2)
               MCA sds: singleton (MCA v1.0, API v1.0, Component v1.0.2)
               MCA sds: slurm (MCA v1.0, API v1.0, Component v1.0.2)
odev2@mhaskell:


Attachment: config.log.gz