We already have --pernode, which spawns one process per node. You can also launch one process per slot across all available slots simply by not specifying the number of processes.
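For example, on a hypothetical two-node cluster with four slots per node (the
application name ./a.out is just a placeholder):

    mpirun --pernode ./a.out    # 2 processes total, one per node
    mpirun ./a.out              # 8 processes total, one per slot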
I gather this option would say "spawn N procs/node across all nodes" - I can
understand the possible usefulness, but I'm not sure when we would get to it.
Also, it isn't clear from the discussion how it differs from our "spawn one
proc/slot" option - unless you either (a) don't want to use all the available
processors, or (b) want to oversubscribe the nodes. Is either of those
something people would really want to do often enough to justify another
command line option?

Just asking for clarity - I don't have any a priori opposition to the notion.

Ralph


On 11/28/06 3:04 PM, "Maestas, Christopher Daniel" <cdma...@sandia.gov> wrote:

> I recently saw this on the mpiexec mailing list and pondered that this
> would be a useful feature for Open MPI as well. :-)
> I can't seem to enter a trac ticket and seem to be having issues w/ my
> browser at the moment, but wanted to get this out there.
>
> ---
>
>>>> 1) mpiexec already has "-pernode" but thinking of n-way nodes with
>>>> dual-core CPUs, a switch like "-Npernode <n>" might be very useful
>>>> (and probably easy to implement, i.e. in get_hosts.c one probably
>>>> only would have to set nodes[i].availcpu to the correct n)
>>>
>>> This sounds like a good suggestion, and pretty easy to implement in
>>> constrain_nodes() along with how -pernode is implemented. I'll
>>> stick it in the tree if you code it up (with a manpage entry too).
>>>
>> please find attached a patch with my implementation of the feature;
>> "-npernode <nprocs>" is added as a new command line option;
>> constrain_nodes tries to be smart if different numbers of CPUs are
>> available on the nodes and takes the minimum of available CPUs and
>> the requested number of processes per node ...
>
> Thanks! I checked it in and tested it lightly. With the extra
> infrastructure for tracking individual node ids that was added earlier
> today, the bit that does the constraining wiggled around some.
> Hopefully it's clearer this way, since we have to use a loop now anyway.
>
> http://svn.osc.edu/browse/mpiexec/trunk/get_hosts.c?r1=390&r2=392&view=patch
>
> ----
>
> And then -pernode would default to -npernode 1 :-)
>
> -cdm
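Assuming the option syntax from the quoted patch, the relationship cdm points
out at the end would look like this (./app is just a placeholder):

    mpiexec -pernode    ./app   # one process per node, equivalent to ...
    mpiexec -npernode 1 ./app   # ... requesting one process per node

so -pernode is just the npernode == 1 special case.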
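And the constraining behavior described in the patch note ("takes the minimum
of available CPUs and requested number of processes per node") boils down to a
loop like the following sketch. This is an illustration only, not the actual
get_hosts.c code; the struct layout and the availcpu field just follow the
naming hinted at in the original suggestion:

    #include <stdio.h>

    /* Simplified node record; "availcpu" follows the field name hinted
     * at in the quoted suggestion, not necessarily the real struct. */
    struct node {
        const char *name;
        int availcpu;   /* slots/CPUs available on this node */
    };

    /* Sketch of the -npernode constraint: cap each node's usable CPUs
     * at the requested processes per node, so a node with fewer CPUs
     * than requested simply contributes what it has. */
    static void constrain_npernode(struct node *nodes, int nnodes,
                                   int npernode)
    {
        for (int i = 0; i < nnodes; i++) {
            if (nodes[i].availcpu > npernode)
                nodes[i].availcpu = npernode;
        }
    }

    int main(void)
    {
        struct node nodes[] = { { "n0", 4 }, { "n1", 2 }, { "n2", 8 } };
        int nnodes = sizeof(nodes) / sizeof(nodes[0]);

        constrain_npernode(nodes, nnodes, 4);  /* ask for 4 procs/node */

        for (int i = 0; i < nnodes; i++)       /* prints 4, 2, 4 */
            printf("%s: %d\n", nodes[i].name, nodes[i].availcpu);
        return 0;
    }

Run with npernode = 4 against nodes offering 4, 2, and 8 CPUs, this leaves 4,
2, and 4 usable slots respectively - the minimum of what was requested and
what each node actually has.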