Hi all,

I want to migrate my CFD application from MPICH-1.2.4 (ch_p4 device) to OpenMPI-1.4. I compared the two libraries compiled with my application and noted that OpenMPI is less efficient than MPICH over Ethernet (170 min with MPICH against 200 min with OpenMPI). So, I wonder if someone has more information or an explanation.

Here are the configure options of OpenMPI:

  export FC=gfortran
  export F77=$FC
  export CC=gcc
  export PREFIX=/usr/local/bin/openmpi-1.4
  ./configure --prefix=$PREFIX --enable-cxx-exceptions --enable-mpi-f77 \
      --enable-mpi-f90 --enable-mpi-cxx --enable-mpi-cxx-seek --enable-dist \
      --enable-mpi-profile --enable-binaries --enable-cxx-exceptions \
      --enable-mpi-threads --enable-memchecker --with-pic --with-threads \
      --with-valgrind --with-libnuma --with-openib

Although my OpenMPI build supports OpenIB, I did not specify any MCA/btl options because the machine does not have access to an InfiniBand interconnect. So, I guess tcp, sm and self are used (or at least something close).

Thank you for your help.
Mathieu.
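(For reference, one way to confirm which transports are actually in play is to list the BTL components the build provides and to pin the transports explicitly at launch. A minimal sketch; the binary name ./cfd_solver and the process count are hypothetical:

  # Show which BTL components this OpenMPI installation was built with
  ompi_info | grep btl

  # Explicitly select the TCP, shared-memory and self transports,
  # so no time is spent probing for InfiniBand at startup
  mpirun --mca btl tcp,sm,self -np 8 ./cfd_solver

If the timings are unchanged with the transports pinned, the difference is unlikely to come from transport selection itself.)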