To maintain compatibility with a major HPC center, I upgraded(?) from OpenMPI 1.1.4 to OpenMPI 1.2 on my local cluster.

In testing on my local cluster, 13 dual-Opteron Linux boxes with dual gigabit Ethernet, I discovered that my program runs slower under OpenMPI 1.2 than under OpenMPI 1.1.4 (780.3 versus 402.4 seconds with 3 processes, tested twice to be certain).

This particular version of my program was designed to minimize communication; the only MPI calls used heavily are MPI_SEND and MPI_RECV with MPI_PACKED data (so MPI_PACK and MPI_UNPACK are also used heavily).
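In case it helps, the communication pattern boils down to something like the following (a minimal sketch, not my actual code; the buffer size and values are illustrative):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    char buf[256];
    int position = 0;
    int n = 42;
    double x = 3.14;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* pack heterogeneous values into one contiguous buffer */
        MPI_Pack(&n, 1, MPI_INT, buf, sizeof(buf), &position, MPI_COMM_WORLD);
        MPI_Pack(&x, 1, MPI_DOUBLE, buf, sizeof(buf), &position, MPI_COMM_WORLD);
        /* ship the packed bytes as a single MPI_PACKED message */
        MPI_Send(buf, position, MPI_PACKED, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Status status;
        MPI_Recv(buf, sizeof(buf), MPI_PACKED, 0, 0, MPI_COMM_WORLD, &status);
        /* unpack in the same order the values were packed */
        MPI_Unpack(buf, sizeof(buf), &position, &n, 1, MPI_INT, MPI_COMM_WORLD);
        MPI_Unpack(buf, sizeof(buf), &position, &x, 1, MPI_DOUBLE, MPI_COMM_WORLD);
        printf("received n=%d x=%g\n", n, x);
    }

    MPI_Finalize();
    return 0;
}

So the difference presumably comes down to how the two versions handle these point-to-point sends over TCP.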

Was there a known problem with OpenMPI 1.2 (r14027) and Ethernet communication that got fixed later?

The same executable seems to run fine at the major center, but they have Myrinet rather than Ethernet.

Michael
