Yes, thanks :)
On Fri, May 18, 2012 at 7:06 AM, Jeff Squyres wrote:
> MPI 2 does not say anything about memory sharing.
>
> In Open MPI, each MPI process (i.e., each unique rank in MPI_COMM_WORLD)
> will have a completely separate memory space. So the malloc() that you do
> in MCW rank 0 will be totally different than the malloc() that you do in MCW
> rank 1.
You shouldn't use MPI_THREAD_MULTIPLE in Open MPI 1.4.x -- you should upgrade
to 1.6. THREAD_MULTIPLE is a bit more robust in the 1.6 series for the TCP
BTL. See the README for more info on THREAD_MULTIPLE.
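If you do move to 1.6 and want multiple threads making MPI calls, the startup
sequence matters: request MPI_THREAD_MULTIPLE via MPI_Init_thread() and check
what the library actually granted. A minimal sketch (this is the standard
MPI-2 API, nothing Open-MPI-specific):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;

    /* Ask for full multi-threaded support; the library may grant less. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    if (provided < MPI_THREAD_MULTIPLE) {
        fprintf(stderr, "MPI_THREAD_MULTIPLE not available (got %d)\n",
                provided);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    /* ... threads may now make MPI calls concurrently ... */

    MPI_Finalize();
    return 0;
}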
On May 16, 2012, at 7:17 PM, devendra rai wrote:
> Hello Community,
>
> I just finis
MPI 2 does not say anything about memory sharing.
In Open MPI, each MPI process (i.e., each unique rank in MPI_COMM_WORLD) will
have a completely separate memory space. So the malloc() that you do in MCW
rank 0 will be totally different than the malloc() that you do in MCW rank 1.
Make sense?
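For example, in the sketch below every rank malloc()s its own buffer and
writes a different value into it; the writes never interfere, because each
process has its own heap. Data only moves between ranks through explicit MPI
calls (MPI_Send/MPI_Recv, collectives, and so on):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* This malloc() is private to the calling process: rank 1 writing
       to its buffer has no effect on rank 0's buffer, even if the
       printed virtual addresses happen to look the same. */
    int *buf = malloc(sizeof(int));
    *buf = rank * 100;

    printf("rank %d: buf=%p *buf=%d\n", rank, (void *)buf, *buf);

    free(buf);
    MPI_Finalize();
    return 0;
}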
You probably want MPI_Reduce, instead.
http://www.open-mpi.org/doc/v1.6/man3/MPI_Reduce.3.php
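A minimal sketch, where each task's partial value (here just rank + 1, as a
stand-in for whatever your 3 tasks actually compute) is summed onto the
master:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, partial, total;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Stand-in for the result each task computed from its chunk. */
    partial = rank + 1;

    /* Combine every rank's value with MPI_SUM; only rank 0 (the
       master) receives the final result in 'total'. */
    MPI_Reduce(&partial, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("combined result = %d\n", total);

    MPI_Finalize();
    return 0;
}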
On May 15, 2012, at 11:27 PM, Rohan Deshpande wrote:
> I am performing a prefix scan operation on a cluster.
>
> I have 3 MPI tasks, and the master task is responsible for distributing the data.
>
> Now, ea