Ralph,

Thanks for the feedback.  Glad we are clearing these things up. :-)

So here's what OSC mpiexec is doing now:
---
  -pernode : allocate only one process per compute node
  -npernode <nprocs> : allocate no more than <nprocs> processes per compute node
---
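For illustration, here is my reading of those two options on a
hypothetical four-node allocation (the exact counts are my assumption):
---
mpiexec -pernode myprogram            # 4 processes total, one per node
mpiexec -npernode 2 -np 6 myprogram   # 6 processes, at most 2 per node
---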

> Cdm> I think I originally requested the -pernode feature. :-)  I've seen
> Cdm> one issue I know of ...
> When used in the following manner:
> ---
> "mpiexec -pernode -np N" and if N is > allocated nodes, it should 
> error out and not proceed.  I need to update/learn to update the trac 
> ticket on this.
> ---
This is an incorrect usage - the "pernode" option is only intended to be
used without any specification of the number of processes. Pernode
instructs the system to spawn one process/node across the entire
allocation - we simply ignore any attempt to indicate the number of
processes.

I suppose I could check and error out if you specify the number of procs
AND --pernode. I would have to check the code, to be honest - I may
already be doing so. Just don't remember :-)

CDM> I think I remember looking through the code here, and thinking that
app->num_procs needed to be compared to
opal_list_get_size(&master_node_list), but I didn't dig into how that got
set when -np was specified.  My intention is that, for an N-node job in a
scheduled allocation such as torque, we could do:
---
"mpiexec -pernode myprogram"
"mpiexec -np n -pernode myprogram" where n <= N
---
That is how I believe OSC mpiexec is behaving.  I tested on 2 nodes and
saw this:
---
$ mpiexec -comm=none -pernode hostname
rv272
rv270
$ mpiexec -comm=none -pernode -np 1 hostname
rv272
$ mpiexec -comm=none -pernode -np 3 hostname
mpiexec: Error: constrain_nodes: argument -n specifies 3 processors, but
  only 2 available after processing -pernode flag.
---
This was my original intent in requesting -pernode in the first place.
I apologize for not providing this example as well.  :-)
Outside of a job scheduler, I would expect the following launch commands
to do the same thing:

"mpiexec -np n -pernode --machinefile=m myprogram "
"mpiexec -np n --bynode --machinefile=m"
"OMPI_MCA_rmaps_base_schedule_policy=node; mpiexec -np n --machinefile=m
myprogram"

Here the node count N is basically the "wc -l" of the file m (2 in the
machinefile above), and n <= N still holds true when using -pernode.  It
may prove difficult, when using -pernode, to check in with all the orteds
to see whether they've already launched the process on their nodes, but I
think that would have to be done.  If it's too difficult, simply erroring
out when both -pernode and -np are given may be an easier choice for
now. :-)

...

> Cdm> it was more my hope that OSC mpiexec and Open MPI mpiexec options
> Cdm> would eventually merge into common options.  A guy can dream,
> Cdm> can't he? ;-)

Guess that's up to the MPI folks on the team - from an RTE perspective,
I don't have an opinion either way.

CDM> I would think that the mpiexec/orterun feature set could launch
other types of executables that aren't MPI based, so I wouldn't think
it would be just MPI spawning. :-)
For example:
        "mpiexec -pernode hostname"

Thanks,
-cdm

