Hi,

We are using Open MPI 1.1.1 and need multi-threaded MPI support.  Open MPI
is configured with the options below (the full configure command is shown
right after the list; the /opt mount point is on the head node and is
visible to all nodes):
--prefix=/opt/openmpi
--with-tm=/opt/torque
--with-devel-headers
--with-threads=posix
--enable-mpi-threads
--enable-progress-threads
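
In other words, the full configure command we run (with the flags exactly
as listed above) is essentially:

  ./configure --prefix=/opt/openmpi --with-tm=/opt/torque \
              --with-devel-headers --with-threads=posix \
              --enable-mpi-threads --enable-progress-threads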

When I run a program (even a simple one) on more than just the head
node, it either 1) hangs in MPI_Init and does nothing, or 2) prints the
error message "[*:29246] mca_btl_sm_component_init: mkfifo failed
with errno=17" (where * is the machine name and errno 17 is EEXIST).
Everything builds fine (using mpic++) and runs fine (minus the
multithread support) if I configure and build Open MPI without the
multithread options.
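
For concreteness, the "simple" program I mean is essentially an
init/print/finalize test along these lines (built with mpic++; the exact
source isn't important, anything of this shape shows the problem):

  #include <mpi.h>
  #include <cstdio>

  int main(int argc, char** argv)
  {
      /* With the threaded build, this call hangs (or fails in the sm
         BTL with the mkfifo error) as soon as the job spans more than
         the head node. */
      MPI_Init(&argc, &argv);

      int rank = 0, size = 0;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);
      std::printf("Hello from rank %d of %d\n", rank, size);

      MPI_Finalize();
      return 0;
  }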

I've tried searching for help on this, but all I could find was one
message (for 1.0.2) asking for help, with no replies
(http://www.open-mpi.org/community/lists/users/2006/06/1350.php).  Does
anyone have any idea what might be going on, or any suggestions as to
what to try?  I've included my ompi_info output below just in case...

Thanks,
Matt


ompi_info output:
                Open MPI: 1.1.1
   Open MPI SVN revision: r11473
                Open RTE: 1.1.1
   Open RTE SVN revision: r11473
                    OPAL: 1.1.1
       OPAL SVN revision: r11473
                  Prefix: /opt/openmpi_nt
 Configured architecture: x86_64-unknown-linux-gnu
           Configured by: root
           Configured on: Mon Oct  9 11:26:09 EDT 2006
          Configure host: <host machine name>
                Built by: carnellr
                Built on: Mon Oct  9 11:34:31 EDT 2006
              Built host: <host machine name>
              C bindings: yes
            C++ bindings: yes
      Fortran77 bindings: yes (all)
      Fortran90 bindings: no
 Fortran90 bindings size: na
              C compiler: gcc
     C compiler absolute: /usr/lib64/ccache/bin/gcc
            C++ compiler: g++
   C++ compiler absolute: /usr/lib64/ccache/bin/g++
      Fortran77 compiler: gfortran
  Fortran77 compiler abs: /usr/bin/gfortran
      Fortran90 compiler: g77
  Fortran90 compiler abs: /usr/bin/g77
             C profiling: yes
           C++ profiling: yes
     Fortran77 profiling: yes
     Fortran90 profiling: no
          C++ exceptions: no
          Thread support: posix (mpi: no, progress: no)
  Internal debug support: no
     MPI parameter check: runtime
Memory profiling support: no
Memory debugging support: no
         libltdl support: yes
               MCA memory: ptmalloc2 (MCA v1.0, API v1.0, Component v1.1.1)
           MCA paffinity: linux (MCA v1.0, API v1.0, Component v1.1.1)
            MCA maffinity: first_use (MCA v1.0, API v1.0, Component v1.1.1)
               MCA timer: linux (MCA v1.0, API v1.0, Component v1.1.1)
           MCA allocator: basic (MCA v1.0, API v1.0, Component v1.0)
           MCA allocator: bucket (MCA v1.0, API v1.0, Component v1.0)
                MCA coll: basic (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA coll: hierarch (MCA v1.0, API v1.0, Component v1.1.1)
                MCA coll: self (MCA v1.0, API v1.0, Component v1.1.1)
                MCA coll: sm (MCA v1.0, API v1.0, Component v1.1.1)
                MCA coll: tuned (MCA v1.0, API v1.0, Component v1.1.1)
                  MCA io: romio (MCA v1.0, API v1.0, Component v1.1.1)
               MCA mpool: sm (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA pml: ob1 (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA bml: r2 (MCA v1.0, API v1.0, Component v1.1.1)
              MCA rcache: rb (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA btl: self (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA btl: sm (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA btl: tcp (MCA v1.0, API v1.0, Component v1.0)
                MCA topo: unity (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA osc: pt2pt (MCA v1.0, API v1.0, Component v1.0)
                 MCA gpr: null (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA gpr: proxy (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA gpr: replica (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA iof: proxy (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA iof: svc (MCA v1.0, API v1.0, Component v1.1.1)
                  MCA ns: proxy (MCA v1.0, API v1.0, Component v1.1.1)
                  MCA ns: replica (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA oob: tcp (MCA v1.0, API v1.0, Component v1.0)
                  MCA ras: dash_host (MCA v1.0, API v1.0, Component v1.1.1)
                  MCA ras: hostfile (MCA v1.0, API v1.0, Component v1.1.1)
                  MCA ras: localhost (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA ras: slurm (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA ras: tm (MCA v1.0, API v1.0, Component v1.1.1)
                  MCA rds: hostfile (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA rds: resfile (MCA v1.0, API v1.0, Component v1.1.1)
                MCA rmaps: round_robin (MCA v1.0, API v1.0, Component v1.1.1)
                MCA rmgr: proxy (MCA v1.0, API v1.0, Component v1.1.1)
                MCA rmgr: urm (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA rml: oob (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA pls: fork (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA pls: rsh (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA pls: slurm (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA pls: tm (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA sds: env (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA sds: pipe (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA sds: seed (MCA v1.0, API v1.0, Component v1.1.1)
                  MCA sds: singleton (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA sds: slurm (MCA v1.0, API v1.0, Component v1.1.1)

______________________________
Matt Cupp
Battelle Memorial Institute
Statistics and Information Analysis 
