Hi,
I’m currently evaluating the use of Open MPI (4.0.1) in our application.
We are using a construct like this for some cleanup functionality, to cancel
some Send requests:
if (*req != MPI_REQUEST_NULL) {
    MPI_Cancel(req);
    MPI_Wait(req, MPI_STATUS_IGNORE);
    assert(*req == MPI_REQUEST_NULL);
}
However
Hi Christian,
I would suggest using MVAPICH2 instead. It is supposedly faster than Open MPI on
InfiniBand, and it seems to have fewer options under the hood, which means fewer
things you have to tweak to get it working for you.
Regards,
Emyr James
Head of Scientific IT
CRG - Centre for Genomic Regulation
Hello all,
OS: CentOS 7.7
OFED: MLNX_OFED_LINUX-4.7-1.0.0.1
Running the command "make all install" returns:
In file included from btl_uct_device_context.h:16:0,
from btl_uct_component.c:40:
btl_uct_rdma.h: In function 'mca_btl_uct_get_rkey':
btl_uct_rdma.h
Hello all. I would like to request a practical example of how to use
MPI_Info_set(info, …) so that the “info” passed to MPI_Comm_spawn() spawns no
process locally (say, on the “master” host) but instead on a slave (“slave” host),
without using mpirun (just “./o.out”). I’m using Open MPI 4.0.1.
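A minimal sketch of the kind of call being asked about. This assumes the "host" reserved info key is honored by the runtime for singleton-started programs; the host name "slave" and the worker binary "./worker" are placeholders, not from the original message. Note that MPI_Info_set takes the info handle by value, not by address.

```c
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm intercomm;
    MPI_Info info;

    MPI_Init(&argc, &argv);

    MPI_Info_create(&info);
    /* "host" is a reserved info key for MPI_Comm_spawn that names
     * the host on which the child processes should be started. */
    MPI_Info_set(info, "host", "slave");

    /* Spawn one child of the (placeholder) binary ./worker. */
    MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 1, info, 0,
                   MPI_COMM_SELF, &intercomm, MPI_ERRCODES_IGNORE);

    MPI_Info_free(&info);
    MPI_Comm_disconnect(&intercomm);
    MPI_Finalize();
    return 0;
}
```

Whether this works when the program is launched as a plain singleton (./o.out, no mpirun) depends on the configured launcher being able to reach the remote host; that part is implementation-specific.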
We are now using Open MPI 4.0.2RC2 and RC3 compiled (with Intel, PGI and
GCC) with MLNX_OFED 4.7 (released a couple of days ago). This supplies UCX
1.7. So far, it seems like things are working well.
Any estimate on when OpenMPI 4.2 will be released?
On 9/25/19 2:27 PM, Jeff Squyres (jsquyres)
Don’t try to cancel sends.
https://github.com/mpi-forum/mpi-issues/issues/27 has some useful info.
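One alternative to canceling, in the spirit of the linked discussion: design the shutdown so that every posted send is eventually matched, then complete the request normally. A rough sketch, with the function names (finish_send, drain_pending) and the drain loop being illustrative only; a real application needs an explicit termination protocol so no new messages arrive after draining.

```c
#include <mpi.h>
#include <stdlib.h>

/* Sender side: instead of MPI_Cancel, just complete the send.
 * Relies on the peer guaranteeing a matching receive. For a
 * non-persistent request, MPI_Wait sets *req to MPI_REQUEST_NULL. */
static void finish_send(MPI_Request *req)
{
    if (*req != MPI_REQUEST_NULL) {
        MPI_Wait(req, MPI_STATUS_IGNORE);
    }
}

/* Receiver side: during shutdown, drain whatever is still in flight
 * so outstanding sends on the other ranks can complete. */
static void drain_pending(MPI_Comm comm)
{
    int flag;
    MPI_Status st;

    for (;;) {
        MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, comm, &flag, &st);
        if (!flag)
            break;

        int count;
        MPI_Get_count(&st, MPI_BYTE, &count);
        void *buf = malloc(count > 0 ? (size_t)count : 1);
        MPI_Recv(buf, count, MPI_BYTE, st.MPI_SOURCE, st.MPI_TAG,
                 comm, MPI_STATUS_IGNORE);
        free(buf);
    }
}
```

The assert(*req == MPI_REQUEST_NULL) from the original snippet still holds after finish_send, without depending on MPI_Cancel ever succeeding for a send.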
Jeff
On Wed, Oct 2, 2019 at 7:17 AM Christian Von Kutzleben via users <
users@lists.open-mpi.org> wrote:
> Hi,
>
> I’m currently evaluating to use openmpi (4.0.1) in our application.
>
> We are us
“Supposedly faster” isn’t a particularly good reason to change MPI
implementations, but canceling sends is hard for reasons that have nothing
to do with performance.
Also, I’d not be so eager to question the effectiveness of Open MPI on
InfiniBand. Check the commit logs for Mellanox employees some