Thank you very much for responding.
stdout/stderr (or the Fortran equivalents) are indeed used to follow the progress, but during my bench they are redirected to a file (2>&1 | tee log). However, I do not understand how this could influence OpenMPI.
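
For reference, the capture during the bench looks roughly like this (just a sketch; the solver binary name, process count and log file name are placeholders):

# send stderr into stdout, then copy everything to a log file while still printing it to the terminal
mpirun -np 8 ./my_solver 2>&1 | tee bench.log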

Aaron Knister wrote:
Does your application do a lot of printing to stdout/stderr?

On Jan 11, 2010, at 8:00 AM, Mathieu Gontier wrote:

Hi all

I want to migrate my CFD application from MPICH-1.2.4 (ch_p4 device) to OpenMPI-1.4. So I built my application against both libraries and compared them, and I noticed that OpenMPI is less efficient than MPICH over Ethernet (170 min with MPICH against 200 min with OpenMPI). I wonder if someone has more information or an explanation.

Here are the configure options used for OpenMPI:

export FC=gfortran
export F77=$FC
export CC=gcc
export PREFIX=/usr/local/bin/openmpi-1.4
./configure --prefix=$PREFIX --enable-cxx-exceptions --enable-mpi-f77 --enable-mpi-f90 --enable-mpi-cxx --enable-mpi-cxx-seek --enable-dist --enable-mpi-profile --enable-binaries --enable-mpi-threads --enable-memchecker --with-pic --with-threads --with-valgrind --with-libnuma --with-openib
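
As a side note, ompi_info can be used to double-check which of these options actually ended up in the installed build; a couple of illustrative checks (the grep patterns are only examples):

ompi_info | grep -i thread        # shows the thread support that was compiled in
ompi_info | grep -i memchecker    # shows whether the memchecker framework was built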

Although my OpenMPI build supports OpenIB, I did not specify any mca/btl options because the machine does not have access to an InfiniBand interconnect. So I guess tcp, sm and self are used (or at least something close).
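
In case it is useful to reproduce, the jobs are launched with a plain mpirun and no MCA options. Forcing the expected components explicitly would look like the sketch below (process count and binary name are only placeholders):

mpirun --mca btl tcp,sm,self -np 16 ./my_solver
ompi_info | grep btl              # lists the BTL components available in this build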

Thank you for your help.
Mathieu.
_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users

