You should look at these two FAQ entries:

http://www.open-mpi.org/faq/?category=running#oversubscribing
http://www.open-mpi.org/faq/?category=running#force-aggressive-degraded

To get what you want, you need to force Open MPI to yield the processor
rather than aggressively poll for a message.
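A minimal sketch of what that FAQ entry describes: setting the
mpi_yield_when_idle MCA parameter at launch puts Open MPI in "degraded"
mode, so a process blocked in MPI_Recv calls yield instead of spinning.
The application name below is just a placeholder.

```shell
# Run in degraded (yielding) mode so a blocked MPI_Recv gives up the CPU
# instead of busy-polling; ./my_app stands in for your own program.
mpirun --mca mpi_yield_when_idle 1 -np 2 ./my_app
```

Note that yielding only helps when another runnable process wants the core;
the blocked rank will still consume idle cycles if nothing else is scheduled.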

On 10/23/07, Murat Knecht <murat.kne...@student.hpi.uni-potsdam.de> wrote:
> Hi,
> While testing a distributed system locally, I couldn't help but notice
> that a blocking MPI_Recv causes 100% CPU load. I deactivated the
> shared-memory BTL (at both compile- and run-time) and specified "tcp,
> self" to be used. Still, one core stays busy. Even on a distributed
> system I intend to perform work while waiting for incoming requests,
> so having one core busy-waiting for requests is uncomfortable, to say
> the least. Doesn't Open MPI use some blocking system call on a TCP
> port internally? Since I deactivated the understandably costly
> shared-memory waits, this seems weird to me.
> Does someone have an explanation, or better yet a fix / workaround /
> solution?
> thanks,
> Murat
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>


-- 
Tim Mattox, Ph.D. - http://homepage.mac.com/tmattox/
 tmat...@gmail.com || timat...@open-mpi.org
    I'm a bright... http://www.the-brights.net/
