> ./configure --prefix=/mpi/openmpi-1.5.4 --with-openib CC=icc CXX=icpc
> F77=ifort FC=ifort --with-knem=/opt/knem
>
> From: Eugene Loh
> To: Open MPI Users
> Cc: Eric Feng
> Sent: Wednesday, December 28, 2011 1:58 AM
> Subject: Re: [OMPI users] Openmpi performance issue
From: Eugene Loh
To: Open MPI Users
Cc: Eric Feng
Sent: Wednesday, December 28, 2011 1:58 AM
Subject: Re: [OMPI users] Openmpi performance issue
If I remember correctly, both Intel MPI and MVAPICH2 bind processes by
default. OMPI does not. There are many cases where the "bind by
default" behavior gives better default performance. (There are also
cases where it can give catastrophically worse performance.) Anyhow, it
seems possible that binding accounts for the difference you are seeing.
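
For what it's worth, a minimal sketch of turning binding on with the
1.5-series mpirun (the application name is a placeholder; check
mpirun --help for the exact options your build supports):

  # bind each rank to a core and print the resulting bindings
  mpirun -np 8 --bind-to-core --report-bindings ./your_app

--report-bindings shows where each rank landed, which helps confirm the
comparison against Intel MPI or MVAPICH2 is apples to apples.
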
It depends a lot on the application and how you ran it. Can you provide some
info? For example, if you oversubscribed the node, then we dial down the
performance to provide better CPU sharing. Another point: we don't bind
processes by default while other MPIs do. Etc.
So more info (like the mpirun command line you used) would help.
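
To illustrate the oversubscription point (the hostnames and application
name below are made up): when more ranks run on a node than it has
slots, OMPI puts idle processes into a degraded mode that yields the
CPU. The mpi_yield_when_idle MCA parameter controls this:

  # two hosts with one slot each, four ranks: oversubscribed, so OMPI
  # yields the CPU when a rank is idle; setting the parameter to 0
  # forces aggressive polling instead
  mpirun -np 4 --host node1,node2 -mca mpi_yield_when_idle 0 ./your_app
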
Can anyone help me?
I am seeing a similar performance issue when comparing against MVAPICH2, which is
much faster in each MPI function in the real application but performs similarly in
the IMB benchmark.
From: Eric Feng
To: "us...@open-mpi.org"
Sent: Friday, December 23, 2011 9:12 PM
Subject: [OMPI users] Openmpi performance issue