Brian,

Most HPC applications are run with one processor and one working thread per MPI process. In this case, the node is not being used for other work, so even if the MPI process did release a processor, there would be nothing else important for it to do anyway.
In these applications, the blocking MPI call (like MPI_Recv) is issued only when there is no more computation that can be done until the MPI_Recv returns with the message. Unless your application has other threads that can make valuable use of the processor freed up by making MPI_Recv yield, the polling "overhead" is probably not something to worry about. If you do have other work available for the freed processor to turn to, the "problem" may be worth solving. MPI implementations generally default to a polling approach because it makes the MPI_Recv faster, and if there is nothing else important for the processor to turn to, a fast MPI_Recv is what matters.

Dick Treumann - MPI Team
IBM Systems & Technology Group
Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
Tele (845) 433-7846   Fax (845) 433-8363

From: Brian Budge <brian.bu...@gmail.com>
To: Open MPI Users <us...@open-mpi.org>
Date: 10/19/2010 09:47 PM
Subject: [OMPI users] busy wait in MPI_Recv

Hi all -

I just ran a small test to find out the overhead of an MPI_Recv call when no communication is occurring. It seems quite high. I noticed during my Google excursions that Open MPI does busy waiting. I also noticed that the option -mca mpi_yield_when_idle seems not to help much (in fact, turning on the yield seems only to slow down the program).

What is the best way to reduce this polling cost during low-communication intervals? Should I write my own recv loop that sleeps for short periods? I don't want to go write something that has possibly already been done much better in the library :)

Thanks,
  Brian
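For reference, here is a minimal sketch of the sleep-based receive loop Brian describes: poll with MPI_Iprobe and sleep between polls so the processor is released during idle stretches. The helper name recv_with_sleep and the 100-microsecond interval are illustrative assumptions, not anything from the thread or from the Open MPI library.

#include <mpi.h>
#include <time.h>

/* Sketch: wait for a message by polling MPI_Iprobe, sleeping between
 * polls so the processor is free for other work. The trade-off is
 * latency: a message that arrives mid-sleep waits up to one sleep
 * interval before it is noticed. Intended for single-threaded use. */
int recv_with_sleep(void *buf, int count, MPI_Datatype type,
                    int source, int tag, MPI_Comm comm,
                    MPI_Status *status)
{
    int flag = 0;
    struct timespec pause = { 0, 100000 };  /* 100 us; tune to taste */

    while (!flag) {
        MPI_Iprobe(source, tag, comm, &flag, status);
        if (!flag)
            nanosleep(&pause, NULL);        /* give up the processor */
    }
    /* A matching message is already pending, so this MPI_Recv
     * completes without spinning. */
    return MPI_Recv(buf, count, type, source, tag, comm, status);
}

This is exactly the trade Dick describes: the sleep frees the processor for other work at the cost of a slower MPI_Recv. Note that mpi_yield_when_idle still polls; it merely yields the processor between polls, which is consistent with Brian's observation that it does not help much.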