More likely, the difference is because MPICH does some automatic process 
binding and OMPI doesn't. There have been a number of discussions on this 
list about this issue. You might search the list for "MPICH" to find them.

See the OMPI FAQ on affinity:

http://www.open-mpi.org/faq/?category=tuning#using-paffinity
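
If binding turns out to be the culprit, you can enable it explicitly on the 
mpirun command line. A rough sketch, assuming Open MPI 1.4.x (the executable 
name and process count below are just placeholders for your own job):

  # Bind each MPI process to a core and print the resulting bindings
  mpirun --bind-to-core --report-bindings -np 16 ./my_cfd_app

  # Optionally force the BTL selection as well, to be sure tcp/sm/self are used
  mpirun --mca btl tcp,sm,self --bind-to-core -np 16 ./my_cfd_app

On older releases the MCA parameter "mpi_paffinity_alone 1" achieves roughly 
the same effect; see the FAQ entry above for the details.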

On Jan 13, 2010, at 6:01 PM, Aaron Knister wrote:

> Does your application do a lot of printing to stdout/stderr?
> 
> On Jan 11, 2010, at 8:00 AM, Mathieu Gontier wrote:
> 
>> Hi all
>> 
>> I want to migrate my CFD application from MPICH-1.2.4 (ch_p4 device) to 
>> OpenMPI-1.4. Hence, I compared the two libraries compiled with my 
>> application and noticed that OpenMPI is less efficient than MPICH over 
>> Ethernet (170 min with MPICH against 200 min with OpenMPI). So, I wonder if 
>> someone has more information or an explanation.
>> 
>> Here are the configure options for OpenMPI:
>> 
>> export FC=gfortran
>> export F77=$FC
>> export CC=gcc
>> export PREFIX=/usr/local/bin/openmpi-1.4
>> ./configure --prefix=$PREFIX --enable-cxx-exceptions --enable-mpi-f77 
>> --enable-mpi-f90 --enable-mpi-cxx --enable-mpi-cxx-seek --enable-dist 
>> --enable-mpi-profile --enable-binaries --enable-cxx-exceptions 
>> --enable-mpi-threads --enable-memchecker --with-pic --with-threads 
>> --with-valgrind --with-libnuma --with-openib
>> 
>> Although my OpenMPI build supports OpenIB, I did not specify any mca/btl 
>> options because the machine does not have access to an InfiniBand 
>> interconnect. So, I guess tcp, sm and self are used (or at least something 
>> close).
>> 
>> Thank you for your help.
>> Mathieu.
