Thanks for the feedback, Ralph. Comments below.

-----Original Message-----
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
Behalf Of Ralph Castain
Sent: Tuesday, November 28, 2006 3:27 PM
To: Open MPI Users
Subject: Re: [OMPI users] Pernode request

We already have --pernode, which spawns one process per node. You can
also launch one process/slot across all available slots by simply not
specifying the number of processes.

Cdm> I think I originally requested the -pernode feature. :-)  One issue
I know of is when it is used in the following manner:
---
"mpiexec -pernode -np N" with N greater than the number of allocated
nodes should error out rather than proceed.  I need to learn how to
update the trac ticket on this.
---
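The sanity check described above can be sketched in plain shell. This is
only an illustration of the requested behavior, not Open MPI code; the
variable names and the echo-based "error" are assumptions for the sketch.

```shell
# Sketch of the requested check: with -pernode, each node runs one
# process, so -np N must not exceed the number of allocated nodes.
nodes=4            # hypothetical allocation size
procs_per_node=1   # -pernode implies one process per node
np=6               # hypothetical -np value from the command line

total=$((nodes * procs_per_node))
if [ "$np" -gt "$total" ]; then
  # The request is that mpiexec error out here instead of proceeding.
  echo "error: -np $np exceeds the $total allocated slots"
fi
```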

I gather this option would say "spawn N procs/node across all nodes" - I
can understand the possible usefulness, but I'm not sure when we would
get to it. Also, it isn't clear from the discussion how it differs from
our "spawn one proc/slot" option - unless you either (a) don't want to
use all the available processors, or (b) want to oversubscribe the
nodes. Is either of those something people would really want to do on a
frequent enough basis to justify another command-line option?

Cdm> I was thinking that as dual- and quad-core processors appear on
nodes, this would be helpful.  In a scaling/profiling study I would
mostly want (a), since we tend to run with fewer than all the cores to
find sweet spots and take measurements.  I can also see the case for
(b): oversubscribing by some extra count just to see what happens.
Like -pernode, this option would make it easy to avoid keeping the -np
count in sync with the number of allocated nodes; you can just run on
your allocated set.  We tend to submit thousands of jobs running varying
benchmarks and parsing the output, and we do have users who want to
allocate on a per-node basis without worrying about -np relative to
their allocation size.
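A scaling sweep of the kind described above might look like the sketch
below. The -npernode flag shown is the option being requested in this
thread, not a confirmed one, and ./benchmark is a placeholder; the
mpiexec invocations are printed with echo so the loop stays illustrative.

```shell
# Sweep processes-per-node without ever computing -np by hand:
# the (hypothetical) -npernode flag scales with the allocation size.
cmds=""
for ppn in 1 2 4; do
  cmd="mpiexec -npernode $ppn ./benchmark"
  cmds="$cmds$cmd;"
  echo "$cmd"   # in a real study this would launch the job
done
```

The point of the design is that the same script works unchanged whether
the batch system hands you 4 nodes or 400.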

Just asking for clarity - I don't have any a priori opposition to the
notion.

Cdm> It was more my hope that the OSC mpiexec and Open MPI mpiexec
options would eventually merge into a common set.  A guy can dream,
can't he? ;-)


-cdm

