...buffers directly. There are still many performance issues to be worked out, but just thought I would mention it.

-----Original Message-----
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf Of Fengguang Song
Sent: Sunday, June 05, 2011 9:54 AM
To: Open MPI Users
Subject: Re: [OMPI users] Program hangs when using OpenMPI and CUDA

Hi Brice,

Thank you! I saw your previous discussion and have actually tried "--mca
btl_openib_flags 304". Unfortunately, it didn't solve the problem. In our
case, the MPI buffer is different from the cudaMemcpy buffer, and we copy
between them manually. I'm still trying to figure out how to co...
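A minimal sketch of how an MCA parameter like the one above is passed on the
mpirun command line; the process count and executable name are placeholders,
not from the original setup:

    mpirun --mca btl_openib_flags 304 -np 2 ./gpu_app
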
On 05/06/2011 00:15, Fengguang Song wrote:
> Hi,
>
> I'm confronting a problem when using OpenMPI 1.5.1 on a GPU cluster. My
> program uses MPI to exchange data between nodes, and uses cudaMemcpyAsync
> to exchange data between the host and GPU devices within a node. When the
> MPI message size is less than 1MB, everything works fine. However, when
> the message size is larger than 1MB, the program hangs.
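
A minimal sketch of the staging pattern described above, i.e. keeping the
MPI buffer separate from the device buffer and copying between them by hand;
all names (exchange_with_peer, the element count n, the stream s) are
illustrative, not from the original program:

    /* Host-staged GPU exchange: MPI only ever sees host memory. */
    #include <mpi.h>
    #include <cuda_runtime.h>

    void exchange_with_peer(double *d_buf, size_t n, int peer,
                            cudaStream_t s)
    {
        double *h_send, *h_recv;
        /* Pinned host memory, so cudaMemcpyAsync can run asynchronously. */
        cudaMallocHost((void **)&h_send, n * sizeof(double));
        cudaMallocHost((void **)&h_recv, n * sizeof(double));

        /* Device -> host staging copy; wait for it to complete before
         * handing the buffer to MPI. */
        cudaMemcpyAsync(h_send, d_buf, n * sizeof(double),
                        cudaMemcpyDeviceToHost, s);
        cudaStreamSynchronize(s);

        /* Host-to-host exchange between ranks. */
        MPI_Sendrecv(h_send, (int)n, MPI_DOUBLE, peer, 0,
                     h_recv, (int)n, MPI_DOUBLE, peer, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* Host staging buffer -> device. */
        cudaMemcpyAsync(d_buf, h_recv, n * sizeof(double),
                        cudaMemcpyHostToDevice, s);
        cudaStreamSynchronize(s);

        cudaFreeHost(h_send);
        cudaFreeHost(h_recv);
    }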