Thanks Dick, Eugene. That's what I figured. I was just hoping there might be some more obscure MPI function that would do what I want. I'll go ahead and write my own yielding wrapper on irecv.
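Something along these lines is what I have in mind (an untested sketch: MPI_Irecv plus MPI_Test with sched_yield() between polls; the helper name recv_yielding and the plain-yield back-off are just illustrative):

    #include <mpi.h>
    #include <sched.h>   /* sched_yield() */

    /* Blocking receive that gives up the core while waiting,
     * instead of letting MPI_Recv spin. */
    static int recv_yielding(void *buf, int count, MPI_Datatype type,
                             int source, int tag, MPI_Comm comm,
                             MPI_Status *status)
    {
        MPI_Request req;
        int flag = 0;
        int rc = MPI_Irecv(buf, count, type, source, tag, comm, &req);
        if (rc != MPI_SUCCESS)
            return rc;

        while (!flag) {
            rc = MPI_Test(&req, &flag, status);
            if (rc != MPI_SUCCESS)
                return rc;
            if (!flag)
                sched_yield();  /* let any other runnable thread have the core */
        }
        return MPI_SUCCESS;
    }

In the real thing I'd probably back off to a short nanosleep after a few empty polls rather than yielding on every iteration, to keep the wake-up latency reasonable when a message does arrive.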
Thanks again,
  Brian

sent from mobile phone

On Oct 20, 2010 5:24 AM, "Richard Treumann" <treum...@us.ibm.com> wrote:

Brian

Most HPC applications are run with one processor and one working thread per MPI process. In this case, the node is not being used for other work, so if the MPI process does release a processor, there is nothing else important for it to do anyway. In these applications, the blocking MPI call (like MPI_Recv) is issued only when there is no more computation that can be done until the MPI_Recv returns with the message.

Unless your application has other threads that can make valuable use of the processor freed up by making MPI_Recv yield, the polling "overhead" is probably not something to worry about. If you do have other work available for the freed processor to turn to, the "problem" may be worth solving.

MPI implementations generally default to a polling approach because it makes MPI_Recv faster, and if there is nothing else important for the processor to turn to, a fast MPI_Recv is what matters.

Dick Treumann - MPI Team
IBM Systems & Technology Group
Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
Tele (845) 433-7846    Fax (845) 433-8363

From: Brian Budge <brian.bu...@gmail.com>
To: Open MPI Users <us...@open-mpi.org>
Date: 10/19/2010 09:47 PM
Subject: [OMPI users] busy wait in MPI_Recv
Sent by: users-boun...@open-mpi.org
------------------------------

Hi all -

I just ran a small test to find out the overhead of an MPI_Recv call when no communication...