“Supposedly faster” isn’t a particularly good reason to change MPI
implementations, but canceling sends is hard for reasons that have nothing
to do with performance.
Also, I’d not be so eager to question the effectiveness of Open MPI on
InfiniBand. Check the commit logs for Mellanox employees sometime.
Don’t try to cancel sends.
https://github.com/mpi-forum/mpi-issues/issues/27 has some useful info.
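In case it helps, here is a minimal sketch of the usual alternative
(assuming a simple two-rank exchange; the tag and payload are illustrative,
not taken from your code): post the send nonblocking and drive it to
completion with MPI_Test instead of trying to MPI_Cancel it.

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      MPI_Init(&argc, &argv);
      int rank, size;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);
      if (size < 2)
          MPI_Abort(MPI_COMM_WORLD, 1);  /* needs at least two ranks */

      if (rank == 0) {
          int payload = 42;              /* illustrative payload */
          MPI_Request req;
          MPI_Isend(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);

          /* Poll instead of canceling: once a send is posted, let it
           * complete; the peer is expected to post a matching receive. */
          int done = 0;
          while (!done)
              MPI_Test(&req, &done, MPI_STATUS_IGNORE);
      } else if (rank == 1) {
          int payload;
          MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                   MPI_STATUS_IGNORE);
          printf("received %d\n", payload);
      }

      MPI_Finalize();
      return 0;
  }

Compile with mpicc and run with at least two ranks. If a send might never
be matched, the robust fix is usually protocol design (e.g. ensuring the
receiver always posts a matching receive before shutdown), not MPI_Cancel.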
Jeff
On Wed, Oct 2, 2019 at 7:17 AM Christian Von Kutzleben via users <
users@lists.open-mpi.org> wrote:
> Hi,
>
> I’m currently evaluating whether to use Open MPI (4.0.1) in our application.
>
> We are us
Hi Christian,
I would suggest using MVAPICH2 instead. It is supposedly faster than Open MPI
on InfiniBand, and it seems to have fewer options under the hood, which means
fewer things you have to tweak to get it working for you.
Regards,
Emyr James
Head of Scientific IT
CRG - Centre for Genomic Regulation