At 15:59 08/05/2012, you wrote:
Yep, you are correct. I did the same and it worked. When I have more
than 3 MPI tasks there is a lot of overhead on the GPU, but for the
CPU there is no overhead. All three machines have 4 quad-core
processors with 3.8 GB RAM. Just wondering why the CPU shows no
degradation and comes out better.
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf Of Rohan Deshpande
> Sent: Monday, May 07, 2012 9:38 PM
> To: Open MPI Users
> Subject: [OMPI users] GPU and CPU timing - OpenMPI and Thrust
>
I am running MPI and Thrust code on a cluster and measuring time for
calculations.
My MPI code -
#include "mpi.h"
#include <...>  /* six more includes follow in the original; the header names inside angle brackets were stripped in transit */
#define MASTER 0
#define ARRAYSIZE 2000
int *masterarray, *onearray, *twoarray, *threearray, *fourarray, *fivearray, *s