Hello,
we are seeing a large difference in performance for some applications
depending on what MPI is being used.
Attached are performance numbers and oprofile output (first 30 lines)
from one of the 14 nodes, for one run of the application using OpenMPI,
IntelMPI and Scali MPI respectively.
Scali MPI is the winner in terms of number of calls.
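For reference, profile output like that can be collected roughly as follows
with the legacy oprofile tools; the exact options here are an assumption on
my part, not copied from this setup:

opcontrol --start
(run the application)
opcontrol --dump
opreport --symbols | head -30
opcontrol --shutdown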
/Torgny
Pavel Shamis (Pasha) wrote:
Do you know if the application uses any collective operations?
Thanks
Pasha
Torgny Faxen wrote:
Hello,
we are seeing a large difference in performance for some applications
depending on what MPI is being used.
Attached are performance numbers and oprofile output (first 30 lines)
from one of the 14 nodes, for one run of the application using OpenMPI,
IntelMPI and Scali MPI respectively.
One thing worth checking is whether you remembered to
compile your application with a -O3 flag - i.e., "mpicc -O3 ...".
Remember, OMPI does not automatically add optimization flags to mpicc!
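For reference, that might look something like the lines below; the file name
myapp.c is just a placeholder, and --showme:compile is the Open MPI wrapper
option that prints the underlying compiler command, so you can confirm that
no optimization flag is added for you:

mpicc -O3 -o myapp myapp.c
mpicc --showme:compile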
Thanks
Ralph
On Wed, Aug 5, 2009 at 7:15 AM, Torgny Faxen <fa...@nsc.liu.se> wrote:
Pasha,
no collectives are used.
Ralph wrote:
You have to tell us to bind or else you lose a lot of performance. Set -mca
opal_paffinity_alone 1 on your cmd line and it should make a
significant difference.
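For illustration, a sketch of what such a command line might look like; the
process count and the executable name are placeholders, not taken from the
actual runs:

mpirun -np 14 -mca opal_paffinity_alone 1 ./myapp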
On Wed, Aug 5, 2009 at 8:10 AM, Torgny Faxen <fa...@nsc.liu.se> wrote:
Ralph,
I am running through a locally provided wrapper.
Torgny Faxen wrote:
Pasha,
please see the attached file.
I have traced how MPI_IPROBE is called and also managed to significantly
reduce the number of calls to MPI_IPROBE. Unfortunately this only
resulted in the program spending time in other routines. Basically the
code runs through a number of timesteps and after each time
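To make the MPI_IPROBE pattern concrete, below is a minimal C sketch, not the
application's actual code, of the kind of polling loop that racks up very
large MPI_Iprobe call counts in a profile; the tag, communicator and payload
are assumptions for illustration.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size >= 2 && rank == 0) {
        int flag = 0;
        MPI_Status status;

        /* Busy-wait until a message arrives: every pass through this
         * loop adds one MPI_Iprobe call to the profile. */
        while (!flag)
            MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD,
                       &flag, &status);

        int payload;
        MPI_Recv(&payload, 1, MPI_INT, status.MPI_SOURCE, status.MPI_TAG,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 0 received %d from rank %d\n",
               payload, status.MPI_SOURCE);
    } else if (size >= 2 && rank == 1) {
        /* Sender side: a single plain point-to-point message. */
        int payload = 42;
        MPI_Send(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}

Compiled with "mpicc -O3" and run on two ranks, the receiver spends its whole
wait inside MPI_Iprobe; replacing such a loop with a blocking MPI_Probe or
MPI_Recv takes the time out of MPI_Iprobe, but as you saw the wait itself
simply shows up in other routines.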