Hi Mark,
On 02/18/17 09:14, Mark Dixon wrote:
On Fri, 17 Feb 2017, r...@open-mpi.org wrote:
Depends on the version, but if you are using something in the v2.x range, you should be okay with just one installed version
How good is MPI_THREAD_MULTIPLE support these days and how far up the wishlist is it, please?
Note that on the 1.10.x series (even on 1.10.6), enabling MPI_THREAD_MULTIPLE leads to a (silent) shutdown of the InfiniBand fabric for that application => SLOW!
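For reference, here is a minimal sketch (my own illustration, not taken from an affected application) of how an application requests MPI_THREAD_MULTIPLE and checks what the library actually granted; note that even a granted request tells you nothing about which fabric is then used, which is why the slowdown is silent:

/* thread_multiple_check.c: request full thread support and report
 * what the MPI library actually provided. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided = MPI_THREAD_SINGLE;

    /* Ask for MPI_THREAD_MULTIPLE; 'provided' tells us what we really got. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    if (provided < MPI_THREAD_MULTIPLE)
        printf("MPI_THREAD_MULTIPLE not available (provided = %d)\n", provided);
    else
        printf("MPI_THREAD_MULTIPLE granted\n");
    /* Even when granted, the transport actually used for the transfers
     * is not visible here. */

    MPI_Finalize();
    return 0;
}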
The 2.x versions (tested: 2.0.1) handle MPI_THREAD_MULTIPLE on InfiniBand correctly; however, due to the absence of memory hooks (= not-aligned memory allocation) we get 20% less bandwidth on IB with the 2.x versions compared to the 1.10.x versions of Open MPI (regardless of whether MPI_THREAD_MULTIPLE support is enabled).
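A rough ping-pong sketch (illustration only, not the benchmark behind the 20% figure above) of the kind that makes such a bandwidth gap visible when run once against a 1.10.x and once against a 2.x installation, with two ranks on different nodes:

/* pingpong_bw.c: crude large-message bandwidth check between ranks 0 and 1. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int iters = 100;
    const int size  = 4 * 1024 * 1024;   /* 4 MiB messages */
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Plain malloc'ed buffer; per the observation above, the memory hooks
     * missing in 2.x (no aligned allocation) are what costs IB bandwidth. */
    char *buf = malloc((size_t)size);

    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("bandwidth: %.1f MB/s\n", 2.0 * iters * size / (t1 - t0) / 1e6);

    free(buf);
    MPI_Finalize();
    return 0;
}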
On the Intel Omni-Path network both of the above issues seem to be absent, but due to a performance bug in MPI_Free_mem your application can be horribly slow (seen with CP2K) if the InfiniBand fallback of OPA is not disabled manually; see
https://www.mail-archive.com/users@lists.open-mpi.org//msg30593.html

Best,
Paul Kapinos

--
Dipl.-Inform. Paul Kapinos - High Performance Computing,
RWTH Aachen University, IT Center
Seffenter Weg 23, D 52074 Aachen (Germany)
Tel: +49 241/80-24915
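P.S. For illustration, a small sketch of the MPI_Alloc_mem / MPI_Free_mem pattern whose MPI_Free_mem side is the slow one. This is a hypothetical reproducer of my own; the slowdown mentioned above was observed with CP2K itself:

/* free_mem_pattern.c: repeated MPI_Alloc_mem / MPI_Free_mem cycles.
 * Hypothetical reproducer for the MPI_Free_mem slow path on OPA
 * when the InfiniBand fallback is still enabled. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    double t0 = MPI_Wtime();
    for (int i = 0; i < 10000; i++) {
        void *p = NULL;
        MPI_Alloc_mem(1 << 20, MPI_INFO_NULL, &p);  /* 1 MiB */
        MPI_Free_mem(p);                            /* the slow call */
    }
    double t1 = MPI_Wtime();

    printf("10000 alloc/free cycles took %.3f s\n", t1 - t0);

    MPI_Finalize();
    return 0;
}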