Hi,
While testing a distributed system locally, I noticed that a blocking
MPI_Recv causes 100% CPU load. I deactivated the shared-memory BTL (at
both compile time and run time) and specified "tcp,self" as the BTLs to
use, but one core is still fully busy.
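
For reference, a minimal sketch of the kind of test I mean (ranks, tags
and timings are just placeholders), launched with something like
"mpirun --mca btl tcp,self -np 2 ./recv_test":

#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

/* Minimal test: rank 0 blocks in MPI_Recv and still burns a full core,
 * even with the shared-memory BTL disabled (tcp,self only). */
int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int msg;
        /* Blocks here until rank 1 eventually sends -- CPU stays at 100%. */
        MPI_Recv(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("received %d\n", msg);
    } else if (rank == 1) {
        int msg = 42;
        sleep(30);                      /* simulate a long-delayed request */
        MPI_Send(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}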
Even on a distributed system I intend to perform work while waiting for
incoming requests, so having one core busy-waiting on a receive is
uncomfortable, to say the least. Doesn't Open MPI use a blocking system
call on the TCP socket internally? Since I deactivated the
understandably costly shared-memory waits, this seems odd to me.
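
The only workaround I can think of is to poll by hand with MPI_Iprobe
and sleep between polls, roughly like the sketch below, but that trades
CPU load for added latency, which is not what I would hope for from a
blocking receive:

#include <mpi.h>
#include <time.h>

/* Workaround sketch: poll with MPI_Iprobe and yield the CPU between
 * polls, then do the actual MPI_Recv only once a message has arrived.
 * Trades up to ~1 ms of extra latency for an (almost) idle core. */
static void recv_when_ready(void *buf, int count, MPI_Datatype type,
                            int source, int tag, MPI_Comm comm)
{
    int flag = 0;
    MPI_Status status;
    struct timespec pause = { 0, 1000000 };   /* 1 ms between polls */

    while (!flag) {
        MPI_Iprobe(source, tag, comm, &flag, &status);
        if (!flag)
            nanosleep(&pause, NULL);          /* give the core back */
    }
    MPI_Recv(buf, count, type, status.MPI_SOURCE, status.MPI_TAG,
             comm, MPI_STATUS_IGNORE);
}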
Does anyone have an explanation, or better yet a fix / workaround / solution?
Thanks,
Murat