Take a look at some introductory MPI materials to learn how to use MPI
and what it's about; there are plenty of tutorials and references
online.
The main idea is that you would have many processes, each holding part
of the array. If a process then needs data or results from another
process, that data has to be exchanged between the processes explicitly,
using message-passing calls (point-to-point sends/receives or collective
operations).
Many codes have both OpenMP and MPI parallelization, but you should
first familiarize yourself with the basics of MPI before dealing with
"hybrid" codes.
Tim wrote:
Hi,
(1) I am wondering how I can use MPI to speed up the time-consuming
computation in the loop of my code below?
void f(int size); // forward declaration

int main(int argc, char ** argv)
{
    // some operations
    f(size);
    // some operations
    return 0;
}
void f(int size)
{
    // some operations
    int i;
    double * array = new double [size];
    for (i = 0; i < size; i++) // how can I use MPI to speed up this loop to compute all elements in the array?
    {
        array[i] = complicated_computation(); // time consuming computation
    }
    // some operations using all elements in array
    delete [] array;
}
As shown in the code, I want to do some operations before and after the part to
be parallelized with MPI, but I don't know how to specify where the parallel
part begins and ends.
(2) My current code is using OpenMP to speed up the computation.
void f(int size)
{
    // some operations
    int i;
    double * array = new double [size];
    omp_set_num_threads(_nb_threads);
    #pragma omp parallel shared(array) private(i)
    {
        #pragma omp for schedule(dynamic) nowait
        for (i = 0; i < size; i++) // how can I use MPI to speed up this loop to compute all elements in the array?
        {
            array[i] = complicated_computation(); // time consuming computation
        }
    }
    // some operations using all elements in array
    delete [] array;
}
I wonder, if I change to MPI, is it possible to have the code written for both
OpenMP and MPI? If it is possible, how should I write the code, and how do I
compile and run it?