MPI-2 does not say anything about memory sharing.

In Open MPI, each MPI process (i.e., each unique rank in MPI_COMM_WORLD) will 
have a completely separate memory space.  So the malloc() that you do in MCW 
rank 0 will be totally different than the malloc() that you do in MCW rank 1.

Make sense?

On May 16, 2012, at 8:08 AM, Rohan Deshpande wrote:

> I have following structure of  MPI code - 
> 
> int main(int argc, char **argv){
> 
>     MPI_Init(&argc, &argv);   // initialize MPI
>     data = malloc(sizeof(int) * 200);
>     // initialize data
> 
>     /*--------Master---------*/
>     if (taskid == 0) {
>         // send data to each slave
>         MPI_Send(...);
>     }
> 
>     /*--------Slaves---------*/
>     if (taskid > 0) {
>         // accept data from master
>         MPI_Recv(...);
>         // do some calculations
>     }
> 
>     MPI_Finalize();
> }
> 
> I have a few questions about the code above. 
> In the above code, the memory for data is allocated in the main program. If 
> I run this program on a cluster where
> node 1 is a slave and node 0 is the master, does node 1 actually share the 
> memory location of node 0 to perform the calculations?
> 
> If I do not want to share the memory, how can I make the task on node 1 
> work independently?
> 
> Thanks in advance.
> 
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/