Hi Tim,
Sorry to add something in the same vein as Eugene's reply, but I think this
is an excellent resource: http://ci-tutor.ncsa.illinois.edu/login.php. It's a
detailed online course, and it helped me a lot before I took proper classes!

On Thu, Jan 28, 2010 at 7:05 PM, Tim <timlee...@yahoo.com> wrote:

> Thanks, Eugene.
>
> I admit I am not smart enough to understand well how to use MPI, but I did
> read some basic materials about it and understand how some simple problems
> are solved with MPI.
>
> But when dealing with an array in my case, I am not certain how to apply
> MPI to it. Are you saying to use send and receive to transfer the value
> computed for each element from a child process to the parent process? Do
> you allocate a copy of the array for each process?
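>
> Just to check my understanding, is the idea something like the sketch below
> (my guess only, not tested), where every process holds its own copy of the
> array, computes a subset of the elements, and the non-root processes send
> their values to process 0?
>
>   // hypothetical sketch: rank and nprocs come from MPI_Comm_rank/MPI_Comm_size
>   for (int i = rank; i < size; i += nprocs)      // round-robin distribution
>       array[i] = complicated_computation();
>
>   if (rank != 0) {
>       // workers send each element they computed to process 0
>       for (int i = rank; i < size; i += nprocs)
>           MPI_Send(&array[i], 1, MPI_DOUBLE, 0, i, MPI_COMM_WORLD);
>   } else {
>       // process 0 receives every element it did not compute itself
>       for (int i = 0; i < size; i++)
>           if (i % nprocs != 0)
>               MPI_Recv(&array[i], 1, MPI_DOUBLE, i % nprocs, i,
>                        MPI_COMM_WORLD, MPI_STATUS_IGNORE);
>   }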
>
> Also, I only need the loop that computes every element of the array to be
> parallelized. Someone said that the parallel part begins with MPI_Init and
> ends with MPI_Finalize, and that one can do any serial computations before
> and/or after these calls. But I have written some MPI programs and found
> that the parallel part is not restricted to the region between MPI_Init and
> MPI_Finalize; instead, it is the whole program. If the rest of the code has
> to be wrapped in a check for process ID 0, I have little idea how to apply
> that to my case, since the rest of the code consists of the parts before
> and after the loop in the function, plus the whole of main().
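>
> For example, is the overall structure supposed to look roughly like this
> (just my guess, untested), with the serial parts wrapped in a rank check?
>
>   #include <mpi.h>
>
>   int main(int argc, char ** argv)
>   {
>       MPI_Init(&argc, &argv);               // every process starts here
>
>       int rank;
>       MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>
>       if (rank == 0) {
>           // serial operations before the loop, done only by process 0
>       }
>
>       // ... all processes cooperate on the expensive loop here ...
>
>       if (rank == 0) {
>           // serial operations after the loop, done only by process 0
>       }
>
>       MPI_Finalize();                       // every process ends here
>       return 0;
>   }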
>
> If someone could give a sample of how to apply MPI in my case, it would
> clarify a lot of my questions. Usually I can learn a lot from good examples.
>
> Thanks!
>
> --- On Thu, 1/28/10, Eugene Loh <eugene....@sun.com> wrote:
>
> > From: Eugene Loh <eugene....@sun.com>
> > Subject: Re: [OMPI users] speed up this problem by MPI
> > To: "Open MPI Users" <us...@open-mpi.org>
> > Date: Thursday, January 28, 2010, 7:30 PM
> > Take a look at some introductory MPI
> > materials to learn how to use MPI and what it's about.
> > There should be resources on-line... take a look around.
> >
> > The main idea is that you would have many processes, each
> > process would have part of the array.  Thereafter, if a
> > process needs data or results from any other process, such
> > data would have to be exchanged between the processes
> > explicitly.
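> >
> > For your loop, the usual pattern looks something like the sketch below
> > (untested; it assumes size divides evenly by the number of processes and
> > gathers all results back to process 0):
> >
> >     int rank, nprocs;
> >     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
> >     MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
> >
> >     int chunk = size / nprocs;            // elements owned by each process
> >     double * local = new double[chunk];   // each process holds only its piece
> >
> >     for (int i = 0; i < chunk; i++)       // this is element rank*chunk + i
> >         local[i] = complicated_computation();
> >
> >     // collect every process's piece into the full array on process 0;
> >     // MPI_Gatherv handles the case where size does not divide evenly
> >     MPI_Gather(local, chunk, MPI_DOUBLE,
> >                array, chunk, MPI_DOUBLE, 0, MPI_COMM_WORLD);
> >
> >     delete [] local;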
> >
> > Many codes have both OpenMP and MPI parallelization, but
> > you should first familiarize yourself with the basics of MPI
> > before dealing with "hybrid" codes.
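> >
> > When you do get that far, an MPI (or hybrid MPI+OpenMP) program is built
> > with the wrapper compiler and launched with mpirun; roughly like this
> > (the exact wrapper name can vary with your installation):
> >
> >     mpic++ -fopenmp my_program.cpp -o my_program
> >     export OMP_NUM_THREADS=2      # OpenMP threads inside each MPI process
> >     mpirun -np 4 ./my_program     # 4 MPI processes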
> >
> > Tim wrote:
> >
> > > Hi,
> > >
> > > (1) I am wondering how I can speed up the time-consuming computation in
> > > the loop of my code below using MPI?
> > >   int main(int argc, char ** argv)
> > >   {
> > >       // some operations
> > >       f(size);
> > >       // some operations
> > >       return 0;
> > >   }
> > >
> > >   void f(int size)
> > >   {
> > >       // some operations
> > >       int i;
> > >       double * array = new double[size];
> > >
> > >       // how can I use MPI to speed up this loop to compute all
> > >       // elements in the array?
> > >       for (i = 0; i < size; i++)
> > >       {
> > >           array[i] = complicated_computation();  // time-consuming computation
> > >       }
> > >
> > >       // some operations using all elements in array
> > >       delete [] array;
> > >   }
> > >
> > > As shown in the code, I want to do some operations before and after the
> > > part to be parallelized with MPI, but I don't know how to specify where
> > > the parallel part begins and ends.
> > >
> > > (2) My current code uses OpenMP to speed up the computation:
> > >   void f(int size)
> > >   {
> > >       // some operations
> > >       int i;
> > >       double * array = new double[size];
> > >
> > >       omp_set_num_threads(_nb_threads);
> > >       #pragma omp parallel shared(array) private(i)
> > >       {
> > >           // how can I use MPI to speed up this loop to compute all
> > >           // elements in the array?
> > >           #pragma omp for schedule(dynamic) nowait
> > >           for (i = 0; i < size; i++)
> > >           {
> > >               array[i] = complicated_computation();  // time-consuming computation
> > >           }
> > >       }
> > >
> > >       // some operations using all elements in array
> > >   }
> > >
> > > I wonder, if I change to MPI, whether it is possible to have the code
> > > written for both OpenMP and MPI. If it is possible, how do I write the
> > > code, and how do I compile and run it?
> > >
