Thank you for the explanation! I understand what is going on now: there
is a process list for each node whose order depends on the mapping
policy, and the ranker, when using "slot," walks through that list.
Makes sense.
Thank you again!
David
On 11/30/2016 04:46 PM, r...@open-mpi.org wrote:
“slot” never became equivalent to “socket”, or to “core”. Here is what happened:
*for your first example: the mapper assigns the first process to the first node
because there is a free core there, and you said to map-by core. It goes on to
assign the second process to the second core, and the th
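The mechanism described above can be sketched in a few lines. This is a hypothetical model, not Open MPI source: the node layout (one node, 2 sockets x 2 cores) and all function names are invented for illustration. It shows how the mapping policy determines the order of each node's process list, and how ranking by "slot" then just walks that list, which is why "slot" can appear to track cores under one policy and sockets under another.

```python
# Hypothetical sketch (not Open MPI code): model how the mapping policy
# orders a node's process list, and how "--rank-by slot" walks that order.

def map_by_core(procs, sockets, cores_per_socket):
    """Fill cores in sequence: socket 0 core 0, socket 0 core 1, ..."""
    slots = [(s, c) for s in range(sockets) for c in range(cores_per_socket)]
    return [slots[p] for p in range(procs)]

def map_by_socket(procs, sockets, cores_per_socket):
    """Round-robin across sockets: socket 0 core 0, socket 1 core 0, ..."""
    slots = [(s, c) for c in range(cores_per_socket) for s in range(sockets)]
    return [slots[p] for p in range(procs)]

def rank_by_slot(node_list):
    """Assign ranks by walking the node's process list in mapping order."""
    return {rank: slot for rank, slot in enumerate(node_list)}

# One node, 2 sockets x 2 cores, 4 processes (assumed layout).
print(rank_by_slot(map_by_core(4, 2, 2)))
# {0: (0, 0), 1: (0, 1), 2: (1, 0), 3: (1, 1)}  -- ranks follow core order
print(rank_by_slot(map_by_socket(4, 2, 2)))
# {0: (0, 0), 1: (1, 0), 2: (0, 1), 3: (1, 1)}  -- ranks alternate sockets
```

Same ranker in both runs; only the mapping order of the per-node list changed.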
Hello Ralph,
I do understand that "slot" is an abstract term and isn't tied down to
any particular piece of hardware. What I am trying to understand is how
"slot" came to be equivalent to "socket" in my second and third example,
but "core" in my first example. As far as I can tell, MPI ranks s
I think you have confused “slot” with a physical “core”. The two have
absolutely nothing to do with each other.
A “slot” is nothing more than a scheduling entry in which a process can be
placed. So when you --rank-by slot, the ranks are assigned round-robin by
scheduler entry - i.e., you assign
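To make the point above concrete, here is a minimal sketch, with invented names, of a slot as nothing more than a scheduling entry: ranking walks the entries in order, with no reference to any hardware at all.

```python
# Hypothetical sketch: a "slot" is a bare scheduling entry, not hardware.
# Ranks are assigned by walking the entries on each node in order.

def assign_ranks_by_slot(slots_per_node):
    """slots_per_node: one slot count per node.
    Returns {rank: (node_index, slot_index)}."""
    ranks = {}
    rank = 0
    for node, nslots in enumerate(slots_per_node):
        for slot in range(nslots):
            ranks[rank] = (node, slot)
            rank += 1
    return ranks

# Two nodes with two scheduling entries each (assumed allocation).
print(assign_ranks_by_slot([2, 2]))
# {0: (0, 0), 1: (0, 1), 2: (1, 0), 3: (1, 1)}
```

Nothing here knows about cores or sockets; any apparent correspondence comes from how the mapper filled the entries, not from the ranker.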
Well, jtull over at PGI seemed to have the "magic sauce":
http://www.pgroup.com/userforum/viewtopic.php?p=21105#21105
Namely, I think it's the siterc file. I'm not sure which of the adaptations
fixes the issue yet, though.
On Mon, Nov 28, 2016 at 3:11 PM, Jeff Hammond
wrote:
> attached config.
Hello All,
The man page for mpirun says that the default ranking procedure is
round-robin by slot. It doesn't seem to be that straightforward to me,
though, and I wanted to ask about the behavior.
To help illustrate my confusion, here are a few examples where the
ranking behavior changed ba