Hello Jeff,

Thank you very much for your reply!

> I am unfortunately unable to replicate your problem.  :-(

Today I also tried compiling Open MPI with gcc (v4.0.2), and with that
setup the example works fine. So I guess the problem only occurs when I
use the Intel C++ compiler (v9.1.042).

> Can you confirm that you're getting the "right" mpi.h?  That's the most
> obvious problem that I can think of.

Yes, I believe the right mpi.h is used (there is actually no other
mpi.h on my computer).

> If it seems to be right, can you compile your program with debugging enabled
> and step through it with a debugger?  A trivial program like this does not
> need to be started via mpirun -- you should be able to just launch it
> directly in a debugger (e.g., put a breakpoint in main() and step into
> MPI::COMM_WORLD.Get_rank()).
> OMPI's C++ bindings are layered on top of the C bindings, so you should step
> into an inlined C++ function that calls MPI_Comm_rank(), and see if the
> communicator that it was invoked with is, indeed, MPI_COMM_WORLD.

I did this, and I found a small problem when the debugger steps into
the inlined Get_rank() (in comm_inln.h):

inline int MPI::Comm::Get_rank() const
{
  int rank;
  (void)MPI_Comm_rank (mpi_comm, &rank);
  return rank;
}

When I check the value of mpi_comm, it is NULL (0x0), so I guess the
communicator is not initialized correctly when the MPI::COMM_WORLD
object is created. For the gcc build, the value of mpi_comm seems to be
correct. I have attached two PostScript files created with DDD, showing
the MPI::COMM_WORLD object right after MPI::Init() (one for gcc and one
for the Intel compiler).
I will try to have a closer look today or tomorrow; maybe I can figure
out what went wrong (probably just a missing compiler switch).
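For reference, here is a minimal sketch of the kind of test case I am
debugging (file name and compile flags are just my own setup, nothing
official):

```cpp
// Minimal reproducer using the MPI C++ bindings.
// Compile with debugging enabled, e.g.:
//   mpicxx -g -O0 test_rank.cc -o test_rank
#include <mpi.h>
#include <cstdio>

int main(int argc, char* argv[])
{
    MPI::Init(argc, argv);

    // With my Intel build, MPI::COMM_WORLD's internal mpi_comm is NULL
    // at this point, so Get_rank() misbehaves; with the gcc build it
    // returns the expected rank.
    int rank = MPI::COMM_WORLD.Get_rank();
    std::printf("rank = %d\n", rank);

    MPI::Finalize();
    return 0;
}
```

As Jeff suggested, this does not need mpirun; launching the binary
directly under a debugger with a breakpoint in main() is enough to step
into Get_rank().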

Thank you,
Tobias

Attachment: dddgraph_intel.ps.gz
Description: GNU Zip compressed data

Attachment: dddgraph_gcc.ps.gz
Description: GNU Zip compressed data
