On Fri, 29 Jan 2010 11:25:09 -0500, Richard Treumann wrote:
> Any support for automatic serialization of C++ objects would need to be in
> some sophisticated utility that is not part of MPI. There may be such
> utilities, but I do not think anyone who has been involved in the discussion
> knows of one.
Tim wrote:
By serialization, I mean it in the context of data storage and transmission. See
http://en.wikipedia.org/wiki/Serialization
e.g. in a structure or class, if there is a pointer pointing to some memory
outside the structure or class, one has to send the content of that memory
besides the structure or class itself, right?
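
To make the point concrete, here is a minimal sketch of sending such an object by hand (the struct, its members, and the two-message protocol are illustrative assumptions, not code from the thread): the pointed-to memory is shipped explicitly, because sending the raw struct would only transmit the pointer value, which is meaningless in the receiving process.

#include <mpi.h>

// Hypothetical type: 'data' points to memory outside the struct, so the
// pointed-to elements must be transmitted explicitly.
struct Sample {
    int     n;     // number of elements behind 'data'
    double *data;  // external buffer of length n
};

// Sender: ship the length first, then the pointed-to elements.
void send_sample(const Sample &s, int dest, MPI_Comm comm) {
    int n = s.n;
    MPI_Send(&n, 1, MPI_INT, dest, 0, comm);
    MPI_Send(s.data, n, MPI_DOUBLE, dest, 1, comm);
}

// Receiver: learn the length, allocate local storage, then receive into it.
Sample recv_sample(int src, MPI_Comm comm) {
    Sample s;
    MPI_Recv(&s.n, 1, MPI_INT, src, 0, comm, MPI_STATUS_IGNORE);
    s.data = new double[s.n];
    MPI_Recv(s.data, s.n, MPI_DOUBLE, src, 1, comm, MPI_STATUS_IGNORE);
    return s;
}

MPI_Pack/MPI_Unpack and derived datatypes are the tools MPI itself provides for flattening such objects into a single message; fully automatic serialization of C++ objects is left to outside libraries (for example Boost.MPI together with Boost.Serialization), which is the kind of utility Richard describes.
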
Tim wrote:
Sorry, my typo. I meant to say OpenMPI documentation.
Eugene Loh wrote:
Okay. "Open (space) MPI" is simply an implementation of the MPI
standard -- e.g., http://www.mpi-forum.org/docs/mpi21-report.pdf . I
imagine an on-line search will turn up a variety of tutorials and
explanations of that standard.
Tim wrote:
Also, how do I deal with serialization problems? Are there some good references for these problems?
Tim wrote:
BTW: I would like to find some official documentation of OpenMP, but there
seems to be none?
Eugene Loh wrote:
OpenMP (a multithreading specification) has "nothing" to do with Open
MPI (an implementation of MPI, a message-passing specification).
Assuming you meant OpenMP, try their web site: http://openmp.org/
Tim wrote:
Sorry, complicated_computation() and f() are simplified too much. They do take more inputs.
Among the inputs to complicated_computation(), some are passed from main() to f() by address
since they are big arrays, some are passed by value, and some are created inside f() before
the call to complicated_computation(), e.g.:

    array[i] = complicated_computation( /* ... */, coeff[i], feature); // time consuming
  }
  // some operations using all elements in array
  delete [] array;
}
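
A sketch of one common way to handle those extra inputs in MPI (the argument types, the scalar 'feature', the root rank, and the assumption that size divides evenly among the ranks are all mine, not from the thread): inputs that every rank needs are replicated once with MPI_Bcast, each rank then computes only its own block of iterations, and the blocks are collected on one rank.

#include <mpi.h>

// Stand-in with the argument list visible in the snippet above; the types are guesses.
double complicated_computation(int size, double coeff_i, double feature) { return coeff_i * feature + size; }

void f(int size) {
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    double *array = new double[size];   // full result, assembled on rank 0
    double *coeff = new double[size];   // input created inside f()
    double feature = 0.0;               // assumed to be a scalar input

    if (rank == 0) {
        // ... fill coeff[] and feature exactly as the serial code does ...
    }
    // Replicate the inputs that every rank needs for its iterations.
    MPI_Bcast(coeff, size, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    MPI_Bcast(&feature, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    // Contiguous block of iterations per rank (size % nprocs == 0 assumed).
    int chunk = size / nprocs;
    int begin = rank * chunk;

    double *local = new double[chunk];
    for (int i = 0; i < chunk; ++i)
        local[i] = complicated_computation(size, coeff[begin + i], feature); // time consuming

    // Rank 0 collects the finished blocks back into the full array.
    MPI_Gather(local, chunk, MPI_DOUBLE, array, chunk, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        // some operations using all elements in array
    }
    delete [] local;
    delete [] coeff;
    delete [] array;
}

Big arrays passed from main() by address can stay distributed the whole time instead of being broadcast; MPI_Gatherv handles the case where size does not divide evenly.
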
Tim wrote:
Thanks Eugene!
My case, after being simplified, is to speed up the time-consuming computation in the
loop below by assigning its iterations to several nodes in a cluster with MPI. Each
iteration of the loop computes one element of an array. The computation of
each element is independent of the others.
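
As an aside on "assigning iterations to several nodes": the usual first step is to compute, from a rank's id and the total number of ranks, which contiguous range of loop indices that rank owns. A minimal helper, assuming for brevity that size divides evenly:

#include <mpi.h>

// Map iterations 0..size-1 onto ranks in contiguous blocks
// (size is assumed to be divisible by the number of ranks).
void my_range(int size, MPI_Comm comm, int &begin, int &end) {
    int rank, nprocs;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &nprocs);
    int chunk = size / nprocs;
    begin = rank * chunk;
    end   = begin + chunk;
}

Each rank then runs the time-consuming loop only for i in [begin, end), and the partial results are combined afterwards, for example with the MPI_Scatter/MPI_Gather pattern sketched after the next message.
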
Hi Tim
Your OpenMP layout suggests that there are no data dependencies
in your "complicated_computation()" and the operations therein
are local.
I will assume this is true in what I suggest.
In MPI you could use MPI_Scatter to distribute the (initial)
array values before the computational loop, and MPI_Gather to
collect the results after it.
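
A minimal sketch of that MPI_Scatter / MPI_Gather pattern (the root rank 0, the placeholder compute(), and the even division of size are assumptions):

#include <mpi.h>

double compute(double x) { return x * x; } // stand-in for the local, independent work

void scatter_compute_gather(double *array, int size, MPI_Comm comm) {
    int rank, nprocs;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &nprocs);
    int chunk = size / nprocs;          // assume size % nprocs == 0

    double *local = new double[chunk];

    // Rank 0 hands one block of the initial values to every rank.
    MPI_Scatter(array, chunk, MPI_DOUBLE, local, chunk, MPI_DOUBLE, 0, comm);

    // Purely local work on this rank's block.
    for (int i = 0; i < chunk; ++i)
        local[i] = compute(local[i]);

    // Rank 0 collects the finished blocks back into the full array.
    MPI_Gather(local, chunk, MPI_DOUBLE, array, chunk, MPI_DOUBLE, 0, comm);

    delete [] local;
}

Only rank 0 needs a full-length array here; the other ranks only ever touch their chunk-sized local buffer.
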
On Thu, 2010-01-28 at 17:05 -0800, Tim wrote:
> Also I only need the loop that computes every element of the array to
> be parallelized. Someone said that the parallel part begins with
> MPI_Init and ends with MPI_Finalize, and one can do any serial
> computations before and/or after these calls. But if someone could give
> a sample of how to apply MPI in my case, it will clarify a lot of my
> questions. Usually I can learn a lot from good examples.
>
> Thanks!
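
On the MPI_Init / MPI_Finalize point: every rank executes the whole of main(); the two calls only bracket the region in which MPI communication is allowed, so work that should happen once is normally guarded by a rank test rather than placed "outside the parallel part". A minimal skeleton (all function names apart from the MPI calls are placeholders):

#include <mpi.h>

static void serial_setup()                      { /* e.g. read input, done once */ }
static void parallel_work(int rank, int nprocs) { /* the distributed loop goes here */ }
static void serial_postprocess()                { /* e.g. write results, done once */ }

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);              // MPI calls are legal from here on

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    if (rank == 0) serial_setup();       // every rank runs main(), so "serial" parts
                                         // are guarded by a rank test
    parallel_work(rank, nprocs);

    if (rank == 0) serial_postprocess();

    MPI_Finalize();                      // no MPI calls after this
    return 0;
}
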
Tim wrote:
Thanks, Eugene.
I admit I am not that smart to understand well how to use MPI, but I did read some
basic materials about it and understand how some simple problems are solved by MPI.
But dealing with an array in my case, I am not certain about how to apply MPI to it.
Are you saying that each process should hold part of the array? Usually I can learn
a lot from good examples.
Thanks!
Eugene Loh wrote:
Take a look at some introductory MPI materials to learn how to use MPI
and what it's about. There should be resources on-line... take a look
around.
The main idea is that you would have many processes, each process would
have part of the array. Thereafter, if a process needs data or results
that another process has, the two processes exchange messages.
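
To illustrate that last point, when one process needs a value that another process owns, the two exchange it with a matched send/receive pair; a minimal sketch (the ranks, tag, and single-double payload are arbitrary):

#include <mpi.h>

// Rank 1 owns 'value'; rank 0 needs it.
void fetch_result(int rank, double &value) {
    const int tag = 42;
    if (rank == 1)
        MPI_Send(&value, 1, MPI_DOUBLE, 0, tag, MPI_COMM_WORLD);
    else if (rank == 0)
        MPI_Recv(&value, 1, MPI_DOUBLE, 1, tag, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}

Collective calls such as MPI_Bcast, MPI_Scatter, and MPI_Gather (sketched earlier in the thread) cover the common cases where every rank participates.
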
Tim wrote:
Hi,
(1). I am wondering how I can speed up the time-consuming computation in the
loop of my code below using MPI?

int main(int argc, char ** argv)
{
  // some operations
  f(size);
  // some operations
  return 0;
}

void f(int size)
{
  // some operations
  double * array = new double [size];
  for (int i = 0; i < size; i++)
  {
    array[i] = complicated_computation(); // time consuming
  }
  // some operations using all elements in array
  delete [] array;
}
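
One straightforward way to parallelize exactly this loop is a round-robin split of the iterations: rank r computes elements r, r + nprocs, r + 2*nprocs, and so on, and the pieces are reassembled at the end. A sketch under the assumption that complicated_computation() needs nothing but the loop index; summing zero-padded partial arrays with MPI_Reduce is one simple way to put the result back together on rank 0:

#include <mpi.h>

double complicated_computation(int i) { return 0.5 * i; } // stand-in

void f(int size) {
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    // Each rank fills only its own entries and leaves the rest at zero.
    double *partial = new double[size]();
    for (int i = rank; i < size; i += nprocs)
        partial[i] = complicated_computation(i); // time consuming

    // Summing the zero-padded partial arrays rebuilds the full result on rank 0.
    double *array = new double[size];
    MPI_Reduce(partial, array, size, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        // some operations using all elements in array
    }
    delete [] partial;
    delete [] array;
}

main() would call MPI_Init before f(size) and MPI_Finalize before returning, as in the skeleton shown earlier in the thread.
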