On Oct 31, 2007, at 1:18 AM, Murat Knecht wrote:
Yes, I am (the master and child 1 are running on the same machine).
But knowing about the oversubscription issue, I am using
mpi_yield_when_idle, which should fix precisely this problem, right?
It won't *fix* the problem -- you're still oversubscribing the nodes,
so things will run slowly. But it should help, in that the processes
will yield regularly.
What version of OMPI are you using?
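For reference, you can set that parameter on the mpirun command line
or in the environment; something like this (the binary name and
process count below are just placeholders):

  mpirun --mca mpi_yield_when_idle 1 -np 1 ./my_parent

  # or, before launching:
  export OMPI_MCA_mpi_yield_when_idle=1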
Or is the option ignored when there is initially no second process?
No, the option should not be ignored.
I did give both machines multiple slots, so Open MPI
"knows" that more oversubscription may arise later.
I'm not sure what you mean by this -- you should not "lie" to OMPI
and tell it that it has more slots than it physically does. But keep
in mind that, as I described in my first mail, OMPI does not
currently re-compute the number of processes on a host as you spawn
(which can lead to the oversubscription problem). If you're
explicitly setting yield_when_idle, that *may* help, but we may or
may not be explicitly propagating that value to spawned
processes... I'll have to check.
Another possibility is that you might have something wrong in your
algorithm. E.g., did you make sure to set the high/low flag in the
intercomm_merge properly?
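For example, a minimal sketch of what I mean (the variable names are
mine, not from your code): the group that passes high = 0 is ordered
first in the merged intracommunicator, and every process in a group
should pass the same value.

  /* on the parent / already-existing side, with the intercomm
     returned by MPI_Comm_spawn: */
  MPI_Intercomm_merge(intercomm, 0 /* high */, &merged);

  /* in the newly spawned children: */
  MPI_Comm_get_parent(&intercomm);
  MPI_Intercomm_merge(intercomm, 1 /* high */, &merged);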
You might want to attach to the "frozen" processes and see where
exactly they are stuck.
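E.g., something along these lines (the binary name and PID are
placeholders):

  ps -ef | grep my_child_binary    # find the stuck ranks
  gdb -p <pid>                     # attach to one of them
  (gdb) bt                         # see where it is blocked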
Confused,
Murat
Jeff Squyres wrote:
Are you perchance oversubscribing your nodes? Open MPI does not
currently handle it well when you initially undersubscribe your nodes
but then, due to spawning, oversubscribe them. In this
case, OMPI will be aggressively polling in all processes, not
realizing that the node is now oversubscribed and it should be
yielding the processor so that other processes can run.

On Oct 30, 2007, at 10:57 AM, Murat Knecht wrote:
Hi,

does someone know whether there is a special requirement on the order
of spawning processes and the subsequent merge of the
intercommunicators?

I have two hosts, let's call them local and remote, and a parent
process on local that goes on spawning one process on each of the two
nodes. After each spawn, the parent process and all existing children
participate in merging the created intercommunicator into an
intracommunicator that connects - in the end - all three processes (a
rough sketch of the parent side is in the P.S. below).

The weird thing is, though: when I spawn them in the order local,
remote, then at the second and last spawn all three processes block
when they reach the MPI_Intercomm_merge. However, when I switch the
order around and spawn first the process on remote and then the one
on local, everything works out: the two processes are spawned and the
intracommunicators are created from the merge. Everything also goes
well if I decide to spawn both processes on either one of the
machines. (The existing children are informed via a message that they
shall participate in the spawn and merge, since these are collective
operations.)

Is there some implicit developer-level knowledge that explains why
the order determines the outcome? Logically, there ought to be no
difference.

Btw, I work with two Linux nodes and an ordinary Ethernet/TCP
connection between them.

Thanks,
Murat
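P.S. Roughly, the parent side looks like this - a simplified sketch;
the child binary name, the host names and the missing error handling
are placeholders, not my actual code:

  #include <mpi.h>

  int main(int argc, char **argv)
  {
      const char *hosts[] = { "local", "remote" };
      MPI_Comm everyone = MPI_COMM_SELF;   /* grows with each merge */

      MPI_Init(&argc, &argv);

      for (int i = 0; i < 2; ++i) {
          MPI_Info info;
          MPI_Comm inter, merged;

          MPI_Info_create(&info);
          MPI_Info_set(info, "host", (char *) hosts[i]);

          /* Both calls are collective, so the parent and every child
             that has already been merged in make them together. */
          MPI_Comm_spawn("child", MPI_ARGV_NULL, 1, info, 0,
                         everyone, &inter, MPI_ERRCODES_IGNORE);
          MPI_Intercomm_merge(inter, 0 /* high */, &merged);

          MPI_Info_free(&info);
          everyone = merged;
      }

      MPI_Finalize();
      return 0;
  }

Each spawned child calls MPI_Comm_get_parent, does the same
MPI_Intercomm_merge with high = 1, and from then on takes part in the
later spawn/merge calls on the merged communicator.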
_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users
--
Jeff Squyres
Cisco Systems