On Jun 15, 2012, at 4:38 PM, Ramesh Vinayagam wrote:

> I was looking more into this scenario:
>
>     if (rank == 0) {
>         MPI_Send(&tmp,  2048, MPI_INT, 1, 123, myComm);
>         MPI_Recv(&tmp2, 2048, MPI_INT, 1, 321,
>                  MPI_COMM_WORLD, MPI_STATUS_IGNORE);
>     }
>     if (rank == 1) {
>         MPI_Send(&tmp1, 2048, MPI_INT, 0, 321, MPI_COMM_WORLD);
>         MPI_Recv(&tmp3, 2048, MPI_INT, 0, 123, myComm,
>                  MPI_STATUS_IGNORE);
>     }
>
> In the normal case this scenario will lead to a deadlock (each rank's
> blocking MPI_Send may not complete until the other rank posts its
> matching receive, which neither rank ever reaches). I was wondering
> whether multiple communicators would solve this issue, but apparently
> they don't.
No, it does not. You can use non-blocking communications instead.

> I tried doing the sends and receives on different threads, but that did
> not help either. So I was wondering if there is a way to handle this in
> MPI without using non-blocking sends and receives.

You could use MPI_Sendrecv(). (Minimal sketches of both approaches are at
the bottom of this mail.)

> Thanks,
> Ramesh
>
>
> On Fri, Jun 15, 2012 at 3:40 AM, Jeff Squyres <jsquy...@cisco.com> wrote:
> On Jun 14, 2012, at 8:43 PM, Ramesh Vinayagam wrote:
>
> > I was wondering: is there a way to communicate between two processes
> > with two different communicators simultaneously in MPI? Like having
> > two channels for communication?
>
> I'm not quite sure what you're asking. Are you asking if it's possible
> to have 2 processes share 2 entirely different communicators (and use
> both of them for communication)?
>
> If so, yes. Any set of processes can have any number of shared
> communicators. For example:
>
>     MPI_Comm foo;
>     MPI_Comm_dup(MPI_COMM_WORLD, &foo);
>
> Now foo will be a duplicate of MPI_COMM_WORLD, but with a different
> communication context (so that messages sent on MCW won't be received
> on foo, and vice versa). Hence, you can send a message on MCW to any
> peer in that communicator, but you can also send a message on foo to
> any peer in that communicator.
>
> Note, however, that sending multiple messages on different communicators
> to the same peer doesn't (usually) expand your available bandwidth.
> Think of communicators (and tags, too) as message-matching mechanisms
> rather than bandwidth-multiplying mechanisms. For example, you might
> send control messages on "foo", but send data messages on
> MPI_COMM_WORLD.
>
> Hope that helps.

--
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/
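
A minimal sketch of the non-blocking approach, assuming the same two-rank
setup, buffer names, and communicators as in the scenario above (myComm is
assumed to be a previously created communicator containing both ranks):

    /* Post both transfers first, then wait for both to complete.
       MPI_Isend and MPI_Irecv return immediately, so neither rank can
       block the other; MPI_Waitall progresses both requests at once. */
    MPI_Request reqs[2];

    if (rank == 0) {
        MPI_Isend(&tmp,  2048, MPI_INT, 1, 123, myComm,         &reqs[0]);
        MPI_Irecv(&tmp2, 2048, MPI_INT, 1, 321, MPI_COMM_WORLD, &reqs[1]);
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    }
    if (rank == 1) {
        MPI_Isend(&tmp1, 2048, MPI_INT, 0, 321, MPI_COMM_WORLD, &reqs[0]);
        MPI_Irecv(&tmp3, 2048, MPI_INT, 0, 123, myComm,         &reqs[1]);
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    }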
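
And a sketch of the MPI_Sendrecv() variant. Note that MPI_Sendrecv() takes
a single communicator for both directions, so this version (an adaptation,
not the original two-communicator scenario) matches everything on
MPI_COMM_WORLD and relies on the tags 123 and 321 to keep the two messages
apart:

    /* MPI pairs the send and the receive internally, so this cannot
       deadlock even though both ranks call it at the same time. */
    if (rank == 0) {
        MPI_Sendrecv(&tmp,  2048, MPI_INT, 1, 123,   /* send to rank 1   */
                     &tmp2, 2048, MPI_INT, 1, 321,   /* recv from rank 1 */
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }
    if (rank == 1) {
        MPI_Sendrecv(&tmp1, 2048, MPI_INT, 0, 321,   /* send to rank 0   */
                     &tmp3, 2048, MPI_INT, 0, 123,   /* recv from rank 0 */
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }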
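
Finally, a small self-contained illustration of the "control messages on
foo, data messages on MPI_COMM_WORLD" idea from the quoted reply. The ctrl
and data buffers here are hypothetical, and both messages deliberately use
the same tag and peer to show that the communicator alone keeps the two
message streams separate:

    /* Compile with mpicc, run with 2 ranks. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, ctrl = 42, data[2048] = {0};
        MPI_Comm foo;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_dup(MPI_COMM_WORLD, &foo);   /* second "channel" */

        if (rank == 0) {
            /* Same peer, same tag -- only the communicator differs. */
            MPI_Send(&ctrl, 1,    MPI_INT, 1, 0, foo);
            MPI_Send(data,  2048, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* Each receive matches only the message sent on its own
               communicator; the two can never be confused. */
            MPI_Recv(&ctrl, 1,    MPI_INT, 0, 0, foo, MPI_STATUS_IGNORE);
            MPI_Recv(data,  2048, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }

        MPI_Comm_free(&foo);
        MPI_Finalize();
        return 0;
    }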