Dear List,
I was trying to use the C++ bindings of Open MPI, but unfortunately I ran into a problem.
I'm trying to use MPI::COMM_WORLD, but I always get the following error message when I run the program (compiling works fine):
*** An error occurred in MPI_Comm_rank
*** on communicator MPI_COMM_WORLD
*** MPI_ERR_COMM: invalid communicator
*** MPI_ERRORS_ARE_FATAL (goodbye)
[0,0,0]-[0,1,0] mca_oob_tcp_msg_recv: readv failed with errno=104
1 additional process aborted (not shown)
The code I'm trying to use is:
----------------------------------------------------
// testcpp.cpp
// mpic++ testcpp.cpp -o testcpp
// mpiexec -np 2 ./testcpp
#include "mpi.h"
#include <iostream>
using namespace std;
int main(int argc, char *argv[])
{
    int process_id;   // rank of this process
    int process_num;  // total number of processes

    MPI::Init(argc, argv);
    process_id  = MPI::COMM_WORLD.Get_rank();
    process_num = MPI::COMM_WORLD.Get_size();
    cout << process_id+1 << "/" << process_num << endl;
    MPI::Finalize();
}
----------------------------------------------------
A similar program using the plain C interface (also compiled with mpic++) works fine (file: testc.cpp, attached below).
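In case it helps to rule out a mix-up with another MPI installation's headers or libraries, I can also send the output of the wrapper compiler's --showme option:
----------------------------------------------------
# Print the full command line that mpic++ would invoke
# (compiler, include paths, library paths, libraries).
mpic++ --showme
----------------------------------------------------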
For this example I'm using the Intel C/C++ compiler V9.1 on Linux (Ubuntu 5.10). I compiled Open MPI myself, so maybe something went wrong there. I have attached config.log, and the output of ompi_info is included below. If necessary, I can also provide a capture of the configure, compilation and installation process.
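For reference, the configure invocation was roughly the following (reconstructed from the ompi_info output below, so the exact flags are from memory):
----------------------------------------------------
# Approximate configure/build commands for Open MPI 1.1.1 with the Intel 9.1 compilers.
./configure --prefix=/opt/libs/openmpi-1.1.1_intel9.1 \
            CC=icc CXX=icpc F77=ifort FC=ifort
make all install
----------------------------------------------------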
Best Regards,
Tobias
----------------------------------------------------
Output of ompi_info:
Open MPI: 1.1.1
Open MPI SVN revision: r11473
Open RTE: 1.1.1
Open RTE SVN revision: r11473
OPAL: 1.1.1
OPAL SVN revision: r11473
Prefix: /opt/libs/openmpi-1.1.1_intel9.1
Configured architecture: i686-pc-linux-gnu
Configured by: tgraf
Configured on: Thu Aug 31 14:52:07 JST 2006
Configure host: tobias
Built by: tgraf
Built on: Thu Aug 31 15:05:52 JST 2006
Built host: tobias
C bindings: yes
C++ bindings: yes
Fortran77 bindings: yes (all)
Fortran90 bindings: yes
Fortran90 bindings size: small
C compiler: icc
C compiler absolute: /opt/intel/cc/9.1.042/bin/icc
C++ compiler: icpc
C++ compiler absolute: /opt/intel/cc/9.1.042/bin/icpc
Fortran77 compiler: ifort
Fortran77 compiler abs: /opt/intel/fc/9.1.036/bin/ifort
Fortran90 compiler: ifort
Fortran90 compiler abs: /opt/intel/fc/9.1.036/bin/ifort
C profiling: yes
C++ profiling: yes
Fortran77 profiling: yes
Fortran90 profiling: yes
C++ exceptions: no
Thread support: posix (mpi: no, progress: no)
Internal debug support: no
MPI parameter check: runtime
Memory profiling support: no
Memory debugging support: no
libltdl support: yes
MCA memory: ptmalloc2 (MCA v1.0, API v1.0, Component v1.1.1)
MCA paffinity: linux (MCA v1.0, API v1.0, Component v1.1.1)
MCA maffinity: first_use (MCA v1.0, API v1.0, Component v1.1.1)
MCA timer: linux (MCA v1.0, API v1.0, Component v1.1.1)
MCA allocator: basic (MCA v1.0, API v1.0, Component v1.0)
MCA allocator: bucket (MCA v1.0, API v1.0, Component v1.0)
MCA coll: basic (MCA v1.0, API v1.0, Component v1.1.1)
MCA coll: hierarch (MCA v1.0, API v1.0, Component v1.1.1)
MCA coll: self (MCA v1.0, API v1.0, Component v1.1.1)
MCA coll: sm (MCA v1.0, API v1.0, Component v1.1.1)
MCA coll: tuned (MCA v1.0, API v1.0, Component v1.1.1)
MCA io: romio (MCA v1.0, API v1.0, Component v1.1.1)
MCA mpool: sm (MCA v1.0, API v1.0, Component v1.1.1)
MCA pml: ob1 (MCA v1.0, API v1.0, Component v1.1.1)
MCA bml: r2 (MCA v1.0, API v1.0, Component v1.1.1)
MCA rcache: rb (MCA v1.0, API v1.0, Component v1.1.1)
MCA btl: self (MCA v1.0, API v1.0, Component v1.1.1)
MCA btl: sm (MCA v1.0, API v1.0, Component v1.1.1)
MCA btl: tcp (MCA v1.0, API v1.0, Component v1.0)
MCA topo: unity (MCA v1.0, API v1.0, Component v1.1.1)
MCA osc: pt2pt (MCA v1.0, API v1.0, Component v1.0)
MCA gpr: null (MCA v1.0, API v1.0, Component v1.1.1)
MCA gpr: proxy (MCA v1.0, API v1.0, Component v1.1.1)
MCA gpr: replica (MCA v1.0, API v1.0, Component v1.1.1)
MCA iof: proxy (MCA v1.0, API v1.0, Component v1.1.1)
MCA iof: svc (MCA v1.0, API v1.0, Component v1.1.1)
MCA ns: proxy (MCA v1.0, API v1.0, Component v1.1.1)
MCA ns: replica (MCA v1.0, API v1.0, Component v1.1.1)
MCA oob: tcp (MCA v1.0, API v1.0, Component v1.0)
MCA ras: dash_host (MCA v1.0, API v1.0, Component v1.1.1)
MCA ras: hostfile (MCA v1.0, API v1.0, Component v1.1.1)
MCA ras: localhost (MCA v1.0, API v1.0, Component v1.1.1)
MCA ras: slurm (MCA v1.0, API v1.0, Component v1.1.1)
MCA rds: hostfile (MCA v1.0, API v1.0, Component v1.1.1)
MCA rds: resfile (MCA v1.0, API v1.0, Component v1.1.1)
MCA rmaps: round_robin (MCA v1.0, API v1.0, Component v1.1.1)
MCA rmgr: proxy (MCA v1.0, API v1.0, Component v1.1.1)
MCA rmgr: urm (MCA v1.0, API v1.0, Component v1.1.1)
MCA rml: oob (MCA v1.0, API v1.0, Component v1.1.1)
MCA pls: fork (MCA v1.0, API v1.0, Component v1.1.1)
MCA pls: rsh (MCA v1.0, API v1.0, Component v1.1.1)
MCA pls: slurm (MCA v1.0, API v1.0, Component v1.1.1)
MCA sds: env (MCA v1.0, API v1.0, Component v1.1.1)
MCA sds: seed (MCA v1.0, API v1.0, Component v1.1.1)
MCA sds: singleton (MCA v1.0, API v1.0, Component v1.1.1)
MCA sds: pipe (MCA v1.0, API v1.0, Component v1.1.1)
MCA sds: slurm (MCA v1.0, API v1.0, Component v1.1.1)
----------------------------------------------------
// testc.cpp
// mpic++ testc.cpp -o testc
// mpiexec -np 2 ./testc
#include "mpi.h"
#include <iostream>
using namespace std;

int main(int argc, char *argv[])
{
    int process_id;   // rank of this process
    int process_num;  // total number of processes

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &process_num);
    MPI_Comm_rank(MPI_COMM_WORLD, &process_id);
    cout << process_id+1 << "/" << process_num << endl;
    MPI_Finalize();
}
----------------------------------------------------