Hello,
    I just compiled Open MPI and tried to run my code, which measures
bandwidth from one node to another. (The code compiles fine and runs
under other MPI implementations.)
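
The benchmark is essentially a ping-pong bandwidth test along these
lines (a simplified sketch, not my exact code; the message size and
iteration count are just placeholders):

/* Rank 0 sends a buffer to rank 1 and waits for it to come back,
 * then reports the effective bandwidth. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, i, iters = 100;
    int msg_size = 1 << 20;                 /* 1 MiB messages */
    char *buf;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    buf = malloc(msg_size);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, msg_size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, msg_size, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, msg_size, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, msg_size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("bandwidth: %.2f MB/s\n",
               2.0 * iters * msg_size / (t1 - t0) / 1e6);

    free(buf);
    MPI_Finalize();
    return 0;
}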

When I did, I got this:

uahrcw@c275-6:~/mpi-benchmarks> cat openmpitcp.o15380
c317-6
c317-5
[c317-5:24979] [0,0,2]-[0,0,0] mca_oob_tcp_peer_complete_connect:
connection failed (errno=110) - retrying (pid=24979)
[c317-5:24979] mca_oob_tcp_peer_timer_handler
[c317-5:24997] [0,1,1]-[0,0,0] mca_oob_tcp_peer_complete_connect:
connection failed (errno=110) - retrying (pid=24997)
[c317-5:24997] mca_oob_tcp_peer_timer_handler

[0,1,1][btl_tcp_endpoint.c:559:mca_btl_tcp_endpoint_complete_connect]
connect() failed with errno=110


As far as I can tell, errno 110 on Linux is ETIMEDOUT, so the connection
attempts between the nodes are timing out. I compiled Open MPI with
PBS Pro 5.4-4 support, and I'm guessing that has something to do with it.
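
For reference, I'm submitting the job through PBS roughly like this
(a simplified sketch; the executable name and resource request are
placeholders, not my exact script):

#!/bin/sh
#PBS -N openmpitcp
#PBS -l nodes=2:ppn=1
#PBS -j oe

cd $PBS_O_WORKDIR
# Print the nodes PBS assigned (the c317-* hostnames shown above)
cat $PBS_NODEFILE
# Open MPI was built with the tm components, so mpirun should pick up
# the PBS-assigned nodes without a machinefile
/opt/asn/apps/openmpi-1.0.1/bin/mpirun -np 2 ./bandwidth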

I've attached my config.log.

Any help with this would be appreciated.

uahrcw@c275-6:~/mpi-benchmarks> ompi_info
                Open MPI: 1.0.1r8453
   Open MPI SVN revision: r8453
                Open RTE: 1.0.1r8453
   Open RTE SVN revision: r8453
                    OPAL: 1.0.1r8453
       OPAL SVN revision: r8453
                  Prefix: /opt/asn/apps/openmpi-1.0.1
 Configured architecture: x86_64-unknown-linux-gnu
           Configured by: asnrcw
           Configured on: Fri Feb 24 15:19:37 CST 2006
          Configure host: c275-6
                Built by: asnrcw
                Built on: Fri Feb 24 15:40:09 CST 2006
              Built host: c275-6
              C bindings: yes
            C++ bindings: yes
      Fortran77 bindings: yes (all)
      Fortran90 bindings: no
              C compiler: gcc
     C compiler absolute: /usr/bin/gcc
            C++ compiler: g++
   C++ compiler absolute: /usr/bin/g++
      Fortran77 compiler: g77
  Fortran77 compiler abs: /usr/bin/g77
      Fortran90 compiler: ifort
  Fortran90 compiler abs: /opt/asn/intel/fce/9.0/bin/ifort
             C profiling: yes
           C++ profiling: yes
     Fortran77 profiling: yes
     Fortran90 profiling: no
          C++ exceptions: no
          Thread support: posix (mpi: no, progress: no)
  Internal debug support: no
     MPI parameter check: runtime
Memory profiling support: no
Memory debugging support: no
         libltdl support: 1
               MCA memory: malloc_hooks (MCA v1.0, API v1.0, Component v1.0.1)
           MCA paffinity: linux (MCA v1.0, API v1.0, Component v1.0.1)
           MCA maffinity: first_use (MCA v1.0, API v1.0, Component v1.0.1)
           MCA maffinity: libnuma (MCA v1.0, API v1.0, Component v1.0.1)
               MCA timer: linux (MCA v1.0, API v1.0, Component v1.0.1)
           MCA allocator: basic (MCA v1.0, API v1.0, Component v1.0)
           MCA allocator: bucket (MCA v1.0, API v1.0, Component v1.0)
                MCA coll: basic (MCA v1.0, API v1.0, Component v1.0.1)
                MCA coll: self (MCA v1.0, API v1.0, Component v1.0.1)
                MCA coll: sm (MCA v1.0, API v1.0, Component v1.0.1)
                  MCA io: romio (MCA v1.0, API v1.0, Component v1.0.1)
               MCA mpool: sm (MCA v1.0, API v1.0, Component v1.0.1)
                 MCA pml: ob1 (MCA v1.0, API v1.0, Component v1.0.1)
                 MCA pml: teg (MCA v1.0, API v1.0, Component v1.0.1)
                 MCA ptl: self (MCA v1.0, API v1.0, Component v1.0.1)
                 MCA ptl: sm (MCA v1.0, API v1.0, Component v1.0.1)
                 MCA ptl: tcp (MCA v1.0, API v1.0, Component v1.0.1)
                 MCA btl: self (MCA v1.0, API v1.0, Component v1.0.1)
                 MCA btl: sm (MCA v1.0, API v1.0, Component v1.0.1)
                 MCA btl: tcp (MCA v1.0, API v1.0, Component v1.0)
                MCA topo: unity (MCA v1.0, API v1.0, Component v1.0.1)
                 MCA gpr: null (MCA v1.0, API v1.0, Component v1.0.1)
                 MCA gpr: proxy (MCA v1.0, API v1.0, Component v1.0.1)
                 MCA gpr: replica (MCA v1.0, API v1.0, Component v1.0.1)
                 MCA iof: proxy (MCA v1.0, API v1.0, Component v1.0.1)
                 MCA iof: svc (MCA v1.0, API v1.0, Component v1.0.1)
                  MCA ns: proxy (MCA v1.0, API v1.0, Component v1.0.1)
                  MCA ns: replica (MCA v1.0, API v1.0, Component v1.0.1)
                 MCA oob: tcp (MCA v1.0, API v1.0, Component v1.0)
                 MCA ras: dash_host (MCA v1.0, API v1.0, Component v1.0.1)
                 MCA ras: hostfile (MCA v1.0, API v1.0, Component v1.0.1)
                 MCA ras: localhost (MCA v1.0, API v1.0, Component v1.0.1)
                 MCA ras: slurm (MCA v1.0, API v1.0, Component v1.0.1)
                 MCA ras: tm (MCA v1.0, API v1.0, Component v1.0.1)
                 MCA rds: hostfile (MCA v1.0, API v1.0, Component v1.0.1)
                 MCA rds: resfile (MCA v1.0, API v1.0, Component v1.0.1)
               MCA rmaps: round_robin (MCA v1.0, API v1.0, Component v1.0.1)
                MCA rmgr: proxy (MCA v1.0, API v1.0, Component v1.0.1)
                MCA rmgr: urm (MCA v1.0, API v1.0, Component v1.0.1)
                 MCA rml: oob (MCA v1.0, API v1.0, Component v1.0.1)
                 MCA pls: daemon (MCA v1.0, API v1.0, Component v1.0.1)
                 MCA pls: proxy (MCA v1.0, API v1.0, Component v1.0.1)
                 MCA pls: fork (MCA v1.0, API v1.0, Component v1.0.1)
                 MCA pls: rsh (MCA v1.0, API v1.0, Component v1.0.1)
                 MCA pls: slurm (MCA v1.0, API v1.0, Component v1.0.1)
                 MCA pls: tm (MCA v1.0, API v1.0, Component v1.0.1)
                 MCA sds: env (MCA v1.0, API v1.0, Component v1.0.1)
                 MCA sds: seed (MCA v1.0, API v1.0, Component v1.0.1)
                 MCA sds: singleton (MCA v1.0, API v1.0, Component v1.0.1)
                 MCA sds: slurm (MCA v1.0, API v1.0, Component v1.0.1)
                 MCA sds: pipe (MCA v1.0, API v1.0, Component v1.0.1)
uahrcw@c275-6:~/mpi-benchmarks>


-- 
Charles Wright, HPC Systems Administrator
Alabama Research and Education Network
Computer Sciences Corporation 

Attachment: config.log.bz2
Description: BZip2 compressed data
