Hi,
I built Open MPI 1.8.3 with PGI 14.7 and enabled CUDA support for CUDA 6.0. I
have a Fortran test code that exercises GPUDirect and have included it here. When
I run it across 2 nodes with 4 MPI processes, it sometimes fails with incorrect
results; specifically, rank 1 sometimes does not receive the correct data.
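A minimal sketch of this kind of test, assuming a made-up program name, buffer size, and fill value (this is not the original attachment): rank 0 fills a device buffer and sends it directly to rank 1, which copies it back to the host and checks every element.

  program gdr_test
    use cudafor
    implicit none
    include 'mpif.h'
    integer, parameter :: n = 1024
    integer :: rank, ierr, istat(MPI_STATUS_SIZE), i
    real(8), device, allocatable :: d_buf(:)
    real(8), allocatable :: h_buf(:)

    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    allocate(d_buf(n), h_buf(n))

    if (rank == 0) then
      h_buf = 1.0d0
      d_buf = h_buf        ! host -> device copy
      ! the device buffer is passed straight to MPI; a CUDA-aware build
      ! stages or RDMAs it as appropriate
      call MPI_Send(d_buf, n, MPI_DOUBLE_PRECISION, 1, 0, MPI_COMM_WORLD, ierr)
    else if (rank == 1) then
      d_buf = 0.0d0
      call MPI_Recv(d_buf, n, MPI_DOUBLE_PRECISION, 0, 0, MPI_COMM_WORLD, istat, ierr)
      h_buf = d_buf        ! device -> host copy so the host can check it
      do i = 1, n
        if (h_buf(i) /= 1.0d0) print *, 'rank 1: wrong value at ', i, h_buf(i)
      end do
    end if

    deallocate(d_buf, h_buf)
    call MPI_Finalize(ierr)
  end program gdr_test

Built with something like "mpif90 -Mcuda gdr_test.f90" and run with 4 ranks across the two nodes; only ranks 0 and 1 exchange data, the remaining ranks just initialize and finalize.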
This may well be related to:
https://github.com/open-mpi/ompi/issues/369
> On Feb 10, 2015, at 9:24 AM, Riccardo Zese wrote:
>
> Hi,
> I'm trying to modify an old algorithm of mine to exploit parallelism,
> and I would like to use MPI. My algorithm is written in Java
> and ma
We have a cluster that is a mix of Myrinet and InfiniBand nodes.
We are using a common Open MPI 1.8.4 software tree accessible to both sets of
nodes.
Our InfiniBand nodes do NOT have the libmyriexpress.so library installed (since
it is not needed).
Likewise the Myrinet nodes do not ha
Let me try to reproduce this. This should not have anything to do with GPU
Direct RDMA; however, to rule it out, you could run with:
--mca btl_openib_want_cuda_gdr 0
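For example, with your 4 ranks the full command would look something like this (the executable name is just a placeholder):

  mpirun -np 4 --mca btl_openib_want_cuda_gdr 0 ./gdr_test

That keeps CUDA-aware support enabled but turns off the GPU Direct RDMA path in the openib BTL, so if the wrong results still show up we know GDR is not the culprit.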
Rolf
From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Aulwes, Rob
Sent: Wednesday, February 11, 2015 2:17 PM
To: u
Hi all,
I have figured this out. For anyone who needs to disable a single component of
a default MCA selection at run time, the solution is to set:
OMPI_MCA_mtl=^mx
OMPI_MCA_btl=self,sm,openib,tcp
As mentioned earlier,
OMPI_MCA_mtl=""
will not do the job.
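For completeness, the same selection can also be given on the mpirun command line instead of through the environment (the executable name is just a placeholder):

  mpirun --mca mtl ^mx --mca btl self,sm,openib,tcp -np 4 ./a.out

The ^ prefix tells Open MPI to exclude the mx component while keeping the rest of the defaults, which is why setting the variable to an empty string does not have the same effect.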
Avalon Johnson
ITS HPCC
USC
"It take