Thank you very much for your reply.
stdout/stderr (or the Fortran equivalents) are indeed used to follow
the progression, but during my benchmark they are redirected to a file
(2>&1 | tee log). But I do not understand how this could influence OpenMPI.
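For what it's worth, even with the redirection, each rank's stdout/stderr is still forwarded to mpirun before it ever reaches the pipe, so heavy per-iteration printing can cost time on its own. A minimal sketch of one way to limit that (the solver loop and the counts below are hypothetical, not from the thread) is to print progress from rank 0 only, and only every N iterations:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, it;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        for (it = 0; it < 1000; ++it) {
            /* ... one solver iteration would go here (hypothetical) ... */
            if (rank == 0 && it % 100 == 0)   /* restrict and throttle progress output */
                fprintf(stderr, "iteration %d\n", it);
        }

        MPI_Finalize();
        return 0;
    }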
Aaron Knister wrote:
Does your application do a lot of printing to stdout/stderr?
More likely the difference is due to the fact that MPICH does some automatic
process binding and OMPI doesn't. There have been a number of discussions on
this list about this issue. You might search the list for "MPICH" to find them.
See the OMPI FAQ on affinity:
http://www.open-mpi.org/faq/?ca
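For reference, affinity is off by default in the 1.4 series but can be turned on per run with the mpi_paffinity_alone MCA parameter. The process count and the ./solver binary in this sketch are placeholders, not taken from the thread:

    mpirun --mca mpi_paffinity_alone 1 -np 8 ./solver

The same setting can also be made persistent by adding "mpi_paffinity_alone = 1" to $HOME/.openmpi/mca-params.conf.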
Does your application do a lot of printing to stdout/stderr?
On Jan 11, 2010, at 8:00 AM, Mathieu Gontier wrote:
> Hi all
>
> I want to migrate my CFD application from MPICH-1.2.4 (ch_p4 device) to
> OpenMPI-1.4. Hence, I compared the two libraries compiled with my application
> and I noted OpenMPI is less efficient than MPICH on Ethernet (170 min
> with MPICH against 200 min with OpenMPI).
Hi all
I want to migrate my CFD application from MPICH-1.2.4 (ch_p4 device) to
OpenMPI-1.4. Hence, I compared the two libraries compiled with my
application and I noted OpenMPI is less efficient than MPICH on
Ethernet (170 min with MPICH against 200 min with OpenMPI). So, I wonder
if someone h