On Sep 3, 2008, at 6:11 PM, Vincent Rotival wrote:
Eugene, no. What I'd like is that when doing something like call mpi_bcast(data, 1, MPI_INTEGER, 0, .....) the program continues only AFTER the Bcast is completed (so control is not returned to the user before then), but while threads with rank > 0 are waiting in the Bcast they are not taking CPU resources.
Threads with rank > 0? Now, this scares me!!! If all your threads are going into the bcast, then I guess the application is not correct from the MPI standard perspective (i.e., on each communicator there is only one collective at any given moment). In MPI, each process (and not each thread) has a rank, and each process exists in each communicator only once. In other words, as each collective is bound to a specific communicator, on each of your processes only one thread should go into the MPI_Bcast if you want only ONE collective.
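A minimal Fortran sketch of the point George is making, assuming OpenMP threads inside each process and the MPI_THREAD_FUNNELED threading level; the program layout and values are illustrative, not taken from Vincent's code. Only one thread per process enters the collective, so there is exactly one MPI_Bcast call per rank on the communicator:

program one_bcast_per_process
  use mpi
  implicit none
  integer :: ierr, provided, rank, data

  ! FUNNELED: only the thread that called MPI_Init_thread makes MPI calls
  call MPI_Init_thread(MPI_THREAD_FUNNELED, provided, ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

  data = 0
  if (rank == 0) data = 42

!$omp parallel
!$omp master
  ! one MPI_Bcast call per process (rank) on MPI_COMM_WORLD -- a single
  ! collective on the communicator, no matter how many threads exist
  call MPI_Bcast(data, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)
!$omp end master
!$omp barrier
  ! every thread in every process now sees data = 42
!$omp end parallel

  call MPI_Finalize(ierr)
end program one_bcast_per_process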
george.
I hope it is more clear now; I apologize for not being clear in the first place.

Vincent

Eugene Loh wrote:

Vincent Rotival wrote:
The solution I retained was for the main thread to isend the data separately to each of the other threads, which use Irecv plus a loop on mpi_test to check whether the Irecv has finished. It might be dirty but it works much better than using Bcast.

Thanks for the clarification. But this strikes me more as a question about the MPI standard than about the Open MPI implementation. That is, what you really want is for the MPI API to support a non-blocking form of collectives. You want control to return to the user program before the barrier/bcast/etc. operation has completed. That's an API change.
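A minimal sketch of the Isend/Irecv + mpi_test workaround Vincent describes, under some assumptions: one MPI process per rank, a plain blocking MPI_Send on the root standing in for the isend he mentions, and a sleep() call (a GNU Fortran extension) between polls so waiting ranks do not spin on the CPU. Names and values are illustrative only:

program isend_irecv_test
  use mpi
  implicit none
  integer :: ierr, rank, nprocs, i, data, req
  integer :: stat(MPI_STATUS_SIZE)
  logical :: done

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  if (rank == 0) then
     data = 42
     ! root sends the value to each other rank individually
     do i = 1, nprocs - 1
        call MPI_Send(data, 1, MPI_INTEGER, i, 0, MPI_COMM_WORLD, ierr)
     end do
  else
     ! non-root ranks post a non-blocking receive and poll it with
     ! MPI_Test, sleeping between polls instead of busy-waiting
     call MPI_Irecv(data, 1, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, req, ierr)
     done = .false.
     do
        call MPI_Test(req, done, stat, ierr)
        if (done) exit
        call sleep(1)   ! GNU extension; any short sleep or yield will do
     end do
  end if

  call MPI_Finalize(ierr)
end program isend_irecv_test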