Hi, my experience is that OpenMPI has slightly lower latency but also lower bandwidth than Intel MPI (which is based on MPICH2) over InfiniBand.
I don't remember the numbers using shared memory.
As you are seeing a huge difference, I would suspect that either something is wrong with your compilation or, more likely, that you are hitting the ccNUMA effect of the Opteron. You might want to bind the MPI processes to cores (and even drop the filesystem caches between runs) to avoid that effect; a sketch follows below.
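As a minimal sketch (assuming a Linux cluster and an Open MPI 1.2-era installation, where binding is enabled through the mpi_paffinity_alone MCA parameter; the executable name is only a placeholder for your GROMACS run):

    # bind each MPI rank to a core so it stays near its local memory
    mpirun --mca mpi_paffinity_alone 1 -np 8 ./mdrun_mpi

    # drop the filesystem caches between runs (needs root)
    sync; echo 3 > /proc/sys/vm/drop_caches

With binding enabled, a rank cannot migrate to a core whose local memory is on the other socket, which is the usual source of the ccNUMA penalty on Opterons.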
best regards,
Samuel

Sangamesh B wrote:
Hi All,

I wanted to switch from MPICH2/MVAPICH2 to OpenMPI, as OpenMPI supports both Ethernet and InfiniBand. Before doing that, I benchmarked the application GROMACS to compare the performance of MPICH2 and OpenMPI. Both were compiled with GNU compilers and their default configurations.

The job ran on nodes with two dual-core AMD Opteron processors, using 2 nodes (8 cores):

OpenMPI - 25 m 39 s
MPICH2  - 15 m 53 s

From this benchmark, OpenMPI turned out noticeably slower than MPICH2. Any comments?

Thanks,
Sangamesh