Thanks, I understand what you are saying, but my query is about the design of MPI_AllReduce for shared-memory systems. That is, does Open MPI use a different logic/design for MPI_AllReduce when it runs on a shared-memory system? The standard MPI_AllReduce works as follows:

1. Each MPI process sends its value (and WAITS for the others to send).
2. The values from all processes are combined.
3. The computed result is sent back to all processes (and all LEAVE).

Does Open MPI implement this same logic/design on shared-memory systems, or does it have some other way of doing it for shared memory?
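For concreteness, here is a minimal sketch of the call I am describing (a sum reduction of one int per rank; the variable names are just illustrative):

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, value, sum;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        value = rank + 1;  /* each rank's local contribution */

        /* Steps 1-3 in a single call: contribute the local value,
         * combine across all ranks, return the result to everyone. */
        MPI_Allreduce(&value, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

        printf("rank %d: sum = %d\n", rank, sum);
        MPI_Finalize();
        return 0;
    }

Compiled with mpicc and launched with, say, mpirun -np 4, every rank prints the same sum whether or not the ranks share a node; my question is about what happens under the hood in the shared-node case.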
-Thanks,
Sarang.

Quoting "Yuan, Huapeng" <yu...@indiana.edu>:

> Hi,
>
> I think it has nothing to do with shared memory; it is a matter of
> processes versus threads. With separate processes you can use MPI on
> shared memory (multicore or distributed shared memory), but it cannot
> be used across multiple threads within the same process.
>
> Hope this helps.
>
> Quoting smai...@ksu.edu:
>
> > Can anyone help on this?
> >
> > -Thanks,
> > Sarang.
> >
> > Quoting smai...@ksu.edu:
> >
> > > Hi,
> > > I am doing research on parallel techniques for shared-memory
> > > systems (NUMA). I understand that Open MPI is intelligent enough
> > > to take advantage of a shared-memory system and that it uses
> > > processor affinity. Is the Open MPI design of MPI_AllReduce the
> > > "same" for shared-memory (NUMA) systems as for distributed
> > > systems? Can someone please describe the MPI_AllReduce design,
> > > in brief, in terms of processes and their interaction on shared
> > > memory? Otherwise, please suggest a good reference for this.
> > >
> > > -Thanks,
> > > Sarang.
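A side note on the process-versus-thread point quoted above: MPI ranks are operating-system processes, and whether MPI calls may additionally be made from multiple threads inside one rank depends on the thread level the library grants at startup. That can be checked with MPI_Init_thread (a minimal sketch; whether MPI_THREAD_MULTIPLE is actually granted depends on how the Open MPI installation was built):

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int provided;

        /* Request full multithreaded support; the library reports
         * the level it actually grants. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

        if (provided < MPI_THREAD_MULTIPLE)
            printf("MPI calls are only safe from one thread per process "
                   "(provided level = %d)\n", provided);
        else
            printf("MPI calls are allowed from any thread\n");

        MPI_Finalize();
        return 0;
    }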