Hi,

I have an application that uses the UnixODBC library
(http://www.unixodbc.org) and MPI. When I run a program linked against
UnixODBC, I immediately get an error, regardless of what the program
actually calls: Open MPI fails during MPI_Init, which is the first call in
the program.

I tried a simple experiment using the following trivial program, which is
enough to demonstrate the bug:


#include <mpi.h>
#include <iostream>

int main(int argc, char* argv[])
{
   std::cerr << "Initializing MPI" << std::endl;
   MPI_Init(&argc, &argv);
   std::cerr << "MPI Initialized" << std::endl;

   int rank;
   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
   std::cerr << "My rank is : " << rank << std::endl;

   std::cerr << "Shutting down MPI" << std::endl;
   MPI_Finalize();
}



If I compile this normally without UnixODBC, everything is fine:

[wsinno@cluster openmpi_bug]$ mpic++ main.cpp
[wsinno@cluster openmpi_bug]$ mpiexec -n 2 ./a.out
Initializing MPI
Initializing MPI
MPI Initialized
My rank is : 0
Shutting down MPI
MPI Initialized
My rank is : 1
Shutting down MPI



If I compile it again and link in UnixODBC, I get the following failure:

[wsinno@cluster openmpi_bug]$ mpic++ main.cpp -L UnixODBC/lib -lodbc
[wsinno@cluster openmpi_bug]$ mpiexec -n 2 ./a.out
Initializing MPI
[cluster.logicblox.local:02272] [NO-NAME] ORTE_ERROR_LOG: Not found in file
runtime/orte_init_stage1.c at line 214
--------------------------------------------------------------------------
It looks like orte_init failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during orte_init; some of which are due to configuration or
environment problems.  This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  orte_sds_base_select failed
  --> Returned value -13 instead of ORTE_SUCCESS

--------------------------------------------------------------------------
--------------------------------------------------------------------------
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems.  This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

  ompi_mpi_init: orte_init_stage1 failed
  --> Returned "Not found" (-13) instead of "Success" (0)
--------------------------------------------------------------------------
*** An error occurred in MPI_Init
*** before MPI was initialized
*** MPI_ERRORS_ARE_FATAL (goodbye)
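
The "Not found" return from orte_sds_base_select makes me suspect that Open
MPI's component loader cannot open its sds components once libodbc is in the
picture. My guess (and it is only a guess on my part) is that libodbc carries
its own copy of the libltdl symbols, which then shadow the copy Open MPI uses
internally. A quick way to check this, assuming the library path from the
compile line above, would be something along these lines:

#include <dlfcn.h>
#include <iostream>

int main()
{
   // Path taken from the -L flag used above; adjust if your install differs.
   void* handle = dlopen("UnixODBC/lib/libodbc.so", RTLD_NOW | RTLD_LOCAL);
   if (!handle)
   {
      std::cerr << "dlopen failed: " << dlerror() << std::endl;
      return 1;
   }

   // If libodbc exports libltdl entry points, they could shadow the ones
   // Open MPI's component loader relies on.
   if (dlsym(handle, "lt_dlopen"))
      std::cerr << "libodbc exports lt_dlopen - possible libltdl clash" << std::endl;
   else
      std::cerr << "libodbc does not export lt_dlopen" << std::endl;

   dlclose(handle);
   return 0;
}

(built with "g++ check.cpp -ldl"; if the first message prints, that would at
least be consistent with the MCA component lookup coming back "Not found")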



I have tried using iodbc (http://www.iodbc.org) instead, and that seems to
work fine. Attached are the config.log and ompi_info output.
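
If it does turn out to be a symbol clash, one workaround I may try (sketched
below, untested) is to stop linking libodbc at build time and instead load it
with dlopen() after MPI_Init, using RTLD_LOCAL so its symbols stay out of the
global namespace. The library name and the approach are assumptions on my
part, not something the Open MPI documentation prescribes:

#include <mpi.h>
#include <dlfcn.h>
#include <iostream>

int main(int argc, char* argv[])
{
   // Let Open MPI load its components before libodbc enters the process.
   MPI_Init(&argc, &argv);

   // RTLD_LOCAL keeps libodbc's symbols out of the global symbol table.
   void* odbc = dlopen("libodbc.so", RTLD_NOW | RTLD_LOCAL);
   if (!odbc)
      std::cerr << "could not load libodbc: " << dlerror() << std::endl;

   // ... resolve the ODBC entry points with dlsym() and use them here ...

   if (odbc)
      dlclose(odbc);

   MPI_Finalize();
   return 0;
}

(linked with "mpic++ main.cpp -ldl" only, without -lodbc)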

Wael.

                Open MPI: 1.1.1
   Open MPI SVN revision: r11473
                Open RTE: 1.1.1
   Open RTE SVN revision: r11473
                    OPAL: 1.1.1
       OPAL SVN revision: r11473
                  Prefix: /opt/openmpi
 Configured architecture: i686-pc-linux-gnu
           Configured by: root
           Configured on: Fri Sep  1 17:21:07 EDT 2006
          Configure host: cluster.logicblox.local
                Built by: root
                Built on: Fri Sep  1 17:34:48 EDT 2006
              Built host: cluster.logicblox.local
              C bindings: yes
            C++ bindings: yes
      Fortran77 bindings: yes (all)
      Fortran90 bindings: yes
 Fortran90 bindings size: small
              C compiler: gcc
     C compiler absolute: /usr/bin/gcc
            C++ compiler: g++
   C++ compiler absolute: /usr/bin/g++
      Fortran77 compiler: gfortran
  Fortran77 compiler abs: /usr/bin/gfortran
      Fortran90 compiler: gfortran
  Fortran90 compiler abs: /usr/bin/gfortran
             C profiling: yes
           C++ profiling: yes
     Fortran77 profiling: yes
     Fortran90 profiling: yes
          C++ exceptions: no
          Thread support: posix (mpi: no, progress: no)
  Internal debug support: no
     MPI parameter check: runtime
Memory profiling support: no
Memory debugging support: no
         libltdl support: yes
              MCA memory: ptmalloc2 (MCA v1.0, API v1.0, Component v1.1.1)
           MCA paffinity: linux (MCA v1.0, API v1.0, Component v1.1.1)
           MCA maffinity: first_use (MCA v1.0, API v1.0, Component v1.1.1)
               MCA timer: linux (MCA v1.0, API v1.0, Component v1.1.1)
           MCA allocator: basic (MCA v1.0, API v1.0, Component v1.0)
           MCA allocator: bucket (MCA v1.0, API v1.0, Component v1.0)
                MCA coll: basic (MCA v1.0, API v1.0, Component v1.1.1)
                MCA coll: hierarch (MCA v1.0, API v1.0, Component v1.1.1)
                MCA coll: self (MCA v1.0, API v1.0, Component v1.1.1)
                MCA coll: sm (MCA v1.0, API v1.0, Component v1.1.1)
                MCA coll: tuned (MCA v1.0, API v1.0, Component v1.1.1)
                  MCA io: romio (MCA v1.0, API v1.0, Component v1.1.1)
               MCA mpool: sm (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA pml: ob1 (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA bml: r2 (MCA v1.0, API v1.0, Component v1.1.1)
              MCA rcache: rb (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA btl: self (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA btl: sm (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA btl: tcp (MCA v1.0, API v1.0, Component v1.0)
                MCA topo: unity (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA osc: pt2pt (MCA v1.0, API v1.0, Component v1.0)
                 MCA gpr: null (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA gpr: proxy (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA gpr: replica (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA iof: proxy (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA iof: svc (MCA v1.0, API v1.0, Component v1.1.1)
                  MCA ns: proxy (MCA v1.0, API v1.0, Component v1.1.1)
                  MCA ns: replica (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA oob: tcp (MCA v1.0, API v1.0, Component v1.0)
                 MCA ras: dash_host (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA ras: hostfile (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA ras: localhost (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA ras: slurm (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA rds: hostfile (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA rds: resfile (MCA v1.0, API v1.0, Component v1.1.1)
               MCA rmaps: round_robin (MCA v1.0, API v1.0, Component v1.1.1)
                MCA rmgr: proxy (MCA v1.0, API v1.0, Component v1.1.1)
                MCA rmgr: urm (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA rml: oob (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA pls: fork (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA pls: rsh (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA pls: slurm (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA sds: env (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA sds: pipe (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA sds: seed (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA sds: singleton (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA sds: slurm (MCA v1.0, API v1.0, Component v1.1.1)
