Hi
I don't understand all of the error messages, but the "Symbol
`_ZN3MPI10COMM_WORLDE' has different size in shared object" warning usually
means the binaries were linked against a different Open MPI build than the
libmpi they find at run time, and your Open MPI version (1.2.5) is rather old.
This might also explain the discrepancies you found in the documentation.
If you can do so, I would suggest you update your Open MPI.
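
Note also that each process reports "I am 0 of 1": mpirun started two
independent singletons rather than one 2-process job, which points the same
way. For reference, here is a quick sanity check, as a sketch assuming the
bash setup from your .bash_profile and the Makefile shipped in Open MPI's
examples directory; it should show whether the examples were linked against
the same installation that mpirun comes from:

  # Which mpirun and wrapper compiler are first in your PATH?
  which mpirun
  which mpiCC

  # The version banner of that installation:
  ompi_info | head

  # Which libmpi the example binary actually loads at run time:
  ldd hello_cxx | grep -i mpi

  # Rebuild the examples with the wrappers from the same installation
  # (the examples directory ships a Makefile that uses mpicc/mpiCC):
  make clean && make

  # If this machine has no InfiniBand hardware, excluding the openib
  # transport should also silence the libibverbs warnings:
  mpirun --mca btl ^openib -n 2 hello_cxx

If `which mpirun` and `ldd` point at different installations, fixing the
order of entries in PATH/LD_LIBRARY_PATH (or rebuilding the examples)
should clear both the symbol-size warnings and the "0 of 1" behavior.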

Jody

On Fri, Apr 17, 2009 at 11:38 PM, Grady Laksmono
<gradyfau...@laksmono.com> wrote:
> Hi, here's what I have:
>
> hello_cxx example
> [hpc@localhost examples]$ mpirun -n 2 hello_cxx
> hello_cxx: Symbol `_ZN3MPI10COMM_WORLDE' has different size in shared
> object, consider re-linking
> hello_cxx: Symbol `_ZN3MPI10COMM_WORLDE' has different size in shared
> object, consider re-linking
> Hello, world!  I am 0 of 1
> libibverbs: Fatal: couldn't read uverbs ABI version.
> --------------------------------------------------------------------------
> [0,0,0]: OpenIB on host localhost.localdomain was unable to find any HCAs.
> Another transport will be used instead, although this may result in
> lower performance.
> --------------------------------------------------------------------------
> libibverbs: Fatal: couldn't read uverbs ABI version.
> --------------------------------------------------------------------------
> [0,0,0]: OpenIB on host localhost.localdomain was unable to find any HCAs.
> Another transport will be used instead, although this may result in
> lower performance.
> --------------------------------------------------------------------------
> Hello, world!  I am 0 of 1
>
> ring_cxx example
> [hpc@localhost examples]$ mpirun -n 2 ring_cxx
> ring_cxx: Symbol `_ZN3MPI10COMM_WORLDE' has different size in shared object,
> consider re-linking
> ring_cxx: Symbol `_ZN3MPI10COMM_WORLDE' has different size in shared object,
> consider re-linking
> libibverbs: Fatal: couldn't read uverbs ABI version.
> libibverbs: Fatal: couldn't read uverbs ABI version.
> --------------------------------------------------------------------------
> [0,0,0]: OpenIB on host localhost.localdomain was unable to find any HCAs.
> Another transport will be used instead, although this may result in
> lower performance.
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> [0,0,0]: OpenIB on host localhost.localdomain was unable to find any HCAs.
> Another transport will be used instead, although this may result in
> lower performance.
> --------------------------------------------------------------------------
> Process 0 sending 10 to 0, tag 201 (1 processes in ring)
> Process 0 sending 10 to 0, tag 201 (1 processes in ring)
> Process 0 sent to 0
> Process 0 sent to 0
> Process 0 decremented value: 9
> Process 0 decremented value: 8
> Process 0 decremented value: 7
> Process 0 decremented value: 6
> Process 0 decremented value: 5
> Process 0 decremented value: 4
> Process 0 decremented value: 3
> Process 0 decremented value: 2
> Process 0 decremented value: 1
> Process 0 decremented value: 0
> Process 0 exiting
> Process 0 decremented value: 9
> Process 0 decremented value: 8
> Process 0 decremented value: 7
> Process 0 decremented value: 6
> Process 0 decremented value: 5
> Process 0 decremented value: 4
> Process 0 decremented value: 3
> Process 0 decremented value: 2
> Process 0 decremented value: 1
> Process 0 decremented value: 0
> Process 0 exiting
>
> which is weird; I'm not sure what's wrong, but one thing I realized is
> that the documentation for running Open MPI seems to be outdated. Here are
> my $PATH and $LD_LIBRARY_PATH:
>
> [hpc@localhost ~]$ cat .bash_profile
> # .bash_profile
>
> # Get the aliases and functions
> if [ -f ~/.bashrc ]; then
>         . ~/.bashrc
> fi
>
> # User specific environment and startup programs
>
> PATH=$PATH:$HOME/bin:/usr/lib/openmpi/1.2.5-gcc/bin
> LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/openmpi/1.2.5-gcc/lib
>
> export PATH
> export LD_LIBRARY_PATH
> unset USERNAME
>
> It's different from what the documentation describes, because I couldn't
> find the files in /opt/openmpi.
> I hope someone can help?
>
> Thanks a lot!
>
> -- Grady
>
