Hi,

On 03/03/17 12:41, Mark Dixon wrote:
> Your 20% memory bandwidth performance hit on 2.x and the OPA problem are concerning - will look at that. Are there tickets open for them?
OPA performance issue on CP2K (15x slowdown): https://www.mail-archive.com/users@lists.open-mpi.org//msg30593.html (cf. the thread). The workaround is to disable the InfiniBand fallback on OPA:

    --mca btl ^tcp,openib

With this tweak on OPA, Open MPI's CP2K is less than 10% slower than Intel MPI's (the same result as on InfiniBand) - which is much, much better than 1500%, huh. However, Open MPI's CP2K still stays slower than Intel MPI's due to worse MPI_Alltoallv performance, as far as I understood the profiles.
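For the record, the full launch line would look something like this (a sketch only; the binary name, input file, and rank count are illustrative, not from my actual runs):

```shell
# Exclude the tcp and openib BTLs so Open MPI does not fall back to the
# InfiniBand/TCP transports on an Omni-Path node and uses PSM2 instead.
# (cp2k.popt, input.inp and -np 48 are placeholders)
mpirun -np 48 --mca btl ^tcp,openib ./cp2k.popt input.inp
```

The leading `^` in an MCA component list means "all components except these", so everything after it is excluded rather than selected.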
I will mail the CP2K developers soon...

20% bandwidth loss with Open MPI 2.x: cf. https://www.mail-archive.com/devel@lists.open-mpi.org/msg00043.html - Nathan Hjelm says the memory hooks were removed intentionally. We have a (nasty) workaround, cf.
https://www.mail-archive.com/devel@lists.open-mpi.org/msg00052.html

As far as I can see, this issue is on InfiniBand only.

Best,
Paul

--
Dipl.-Inform. Paul Kapinos - High Performance Computing,
RWTH Aachen University, IT Center
Seffenter Weg 23, D 52074 Aachen (Germany)
Tel: +49 241/80-24915