ers directly. There are
> still many performance issues to be worked out, but just thought I would
> mention it.
>
>
> -----Original Message-----
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
> Behalf Of Fengguang Song
> Sent: Sunday, June 05,
to figure out how to configure OpenMPI's mca parameters to solve the problem...
Thanks,
Fengguang
On Jun 5, 2011, at 2:20 AM, Brice Goglin wrote:
> On 05/06/2011 00:15, Fengguang Song wrote:
>> Hi,
>>
>> I'm confronting a problem when using OpenMPI 1.5.1 on a GPU cluster.
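
For context, Open MPI's MCA parameters are passed on the mpirun command line
(or put in a config file), and ompi_info lists the parameters each component
accepts. A minimal illustration of the syntax only; the application name is a
placeholder, and mpi_leave_pinned is shown purely as an example of a tunable,
not as a confirmed fix for the problem discussed in this thread:

  # list the tunable parameters of the openib (InfiniBand) BTL
  ompi_info --param btl openib

  # pass an MCA parameter at launch time (example tunable only)
  mpirun --mca mpi_leave_pinned 0 -np 16 ./gpu_app
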
Hi,
I'm confronting a problem when using OpenMPI 1.5.1 on a GPU cluster. My program
uses MPI to exchange data between nodes, and uses cudaMemcpyAsync to exchange
data between the host and GPU devices within a node.
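
A generic C sketch of the pattern just described, not Fengguang's actual code:
data is copied from the GPU into page-locked host buffers with cudaMemcpyAsync,
exchanged over MPI, and copied back to the device. The function name, the float
payload, and the single-peer MPI_Sendrecv are assumptions made for illustration.

  /* Stage GPU data through pinned host memory, then exchange it with MPI.
     Sketch only: error checking and the real data layout are omitted. */
  #include <mpi.h>
  #include <cuda_runtime.h>

  void exchange_with_peer(float *d_buf, size_t n, int peer, cudaStream_t stream)
  {
      float *h_send, *h_recv;
      cudaMallocHost((void **)&h_send, n * sizeof(float));  /* page-locked host buffers */
      cudaMallocHost((void **)&h_recv, n * sizeof(float));

      /* device -> host copy must finish before MPI reads the buffer */
      cudaMemcpyAsync(h_send, d_buf, n * sizeof(float), cudaMemcpyDeviceToHost, stream);
      cudaStreamSynchronize(stream);

      MPI_Sendrecv(h_send, (int)n, MPI_FLOAT, peer, 0,
                   h_recv, (int)n, MPI_FLOAT, peer, 0,
                   MPI_COMM_WORLD, MPI_STATUS_IGNORE);

      /* host -> device copy of the received data */
      cudaMemcpyAsync(d_buf, h_recv, n * sizeof(float), cudaMemcpyHostToDevice, stream);
      cudaStreamSynchronize(stream);

      cudaFreeHost(h_send);
      cudaFreeHost(h_recv);
  }

In a real application the pinned buffers would normally be allocated once and
reused; they are allocated inside the function here only to keep the sketch
self-contained.
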
When the MPI message size is less than 1MB, everything works fine. However,
when the