It is a little bit of both:

* historical, because most MPIs default to mapping by slot, and

* performance, because procs that share a node can communicate via shared memory, which is faster than sending messages over an interconnect, and most apps are communication-bound

If your app is disk-intensive, then mapping it -bynode may be a better option for you. That's why we provide it. Note, however, that you can still wind up with multiple procs on a node. All "bynode" means is that ranks are assigned round-robin across the nodes - it doesn't mean that there is only one proc per node (see the sketch below).
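To make that concrete, here is a rough sketch of what the two mappings do with a made-up two-node hostfile (the node names, hostfile name, and executable are just placeholders):

    # hostfile "myhosts" (hypothetical):
    #   nodeA slots=4
    #   nodeB slots=4

    # default (byslot): fill nodeA's slots first, then spill over to nodeB
    mpirun -np 6 -hostfile myhosts ./a.out
    #   nodeA: ranks 0 1 2 3      nodeB: ranks 4 5

    # -bynode: deal the ranks out round-robin across the nodes
    mpirun -np 6 -bynode -hostfile myhosts ./a.out
    #   nodeA: ranks 0 2 4        nodeB: ranks 1 3 5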

If you truly want one proc/node, then you should use the -pernode option. This maps one proc on each node, up to either the number of procs you specified or the number of available nodes. If you don't specify -np, we just put one proc on each node in your allocation/hostfile.
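For example, with the same made-up hostfile:

    # -pernode: at most one proc per node
    mpirun -pernode -hostfile myhosts ./a.out
    #   nodeA: rank 0             nodeB: rank 1
    # (with -np 1 only nodeA would get a proc; without -np, every node gets one)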

HTH
Ralph

On Feb 20, 2009, at 1:25 AM, Raymond Wan wrote:


Hi all,

According to FAQ 14 (How do I control how my processes are scheduled across nodes?) [http://www.open-mpi.org/faq/?category=running#mpirun-scheduling], the default scheduling policy is by slot, not by node. I'm curious why "by slot" is the default, since I am thinking of explicitly specifying by node but am wondering if there is an issue I haven't considered. I would think one reason for "by node" is to spread HDD access across machines [as is the case for me, since my program is HDD-access intensive]. Or perhaps I am mistaken? I'm now guessing that "by slot" is the default because processes with nearby ranks might do similar tasks and you would want them on the same node. Is that the reason?

Also, at the end of this FAQ, it says "NOTE: This is the scheduling policy in Open MPI because of a long historical precedent..." -- does this "This" refer to "the fact that there are two scheduling policies" or "the fact that 'by slot' is the default"? If the latter, then that explains why "by slot" is the default, I guess...

Thank you!

Ray



