Hello Ralph, this is great news! Thanks for doing this. I will try to get to it soon, before the holiday break.
The allocation scheme always seems to trip me up. From what you describe, that is how I would have expected it to behave. As I've gotten to know OSC mpiexec over the years, I believe it takes a first-fit approach, but now that I actually test it, either the feature needs more testing or I'm not testing appropriately :-)

---
$ /apps/x86_64/system/mpiexec-0.82/bin/mpiexec -comm=none -npernode 2 grep HOSTNAME /etc/sysconfig/network
HOSTNAME="an56"
HOSTNAME="an56"
HOSTNAME="an55"
HOSTNAME="an53"
HOSTNAME="an54"
HOSTNAME="an55"
HOSTNAME="an53"
HOSTNAME="an54"
---

I wonder if, in addition to the method you suggest, it would also be possible to allow a "by-X-slot" style of launch, where for npernode = X and N nodes you would see:

proc1          - node1
proc2          - node1
...
proc(X)        - node1
proc(X+1)      - node2
proc(X+2)      - node2
...
proc(2X)       - node2
...
proc((N-1)X+1) - nodeN
proc((N-1)X+2) - nodeN
...
proc(NX-1)     - nodeN
proc(NX)       - nodeN

That's the best way I can describe it: you load a node with X processes before moving on to the next. (I've put a rough sketch of the difference, plus my reading of points 1-4, at the bottom of this message, after the quoted thread.) This may prove more challenging, and I can understand if it would not be deemed "worthy." :-)

-cdm

> -----Original Message-----
> From: users-boun...@open-mpi.org
> [mailto:users-boun...@open-mpi.org] On Behalf Of Ralph Castain
> Sent: Monday, December 11, 2006 5:41 PM
> To: Open MPI Users
> Subject: Re: [OMPI users] Pernode request
>
> Hi Chris
>
> Okay, we have modified the pernode behavior as you requested (on the
> trunk as of r12821) - give it a shot and see if that does it. I have
> not yet added the npernode option, but hope to get that soon.
>
> I have a question for you about the npernode option. I am assuming
> that you want n procs/node, but that you want it mapped by NODE. For
> example, proc 0 goes on the first node, proc 1 goes on the second
> node, etc. until I get one on each node; then I wrap back to the
> beginning and do this again until I get the specified number of
> procs on each node.
>
> Correct?
> Ralph
>
> >Ralph,
> >
> >I agree with what you stated in points 1-4. That is what we are
> >looking for. I understand your point now about the non-MPI users
> >too. :-)
> >
> >Thanks,
> >-cdm
>
> >>-----Original Message-----
> >>From: users-bounces_at_[hidden]
> >>[mailto:users-bounces_at_[hidden]] On Behalf Of Ralph Castain
> >>Sent: Wednesday, November 29, 2006 8:01 AM
> >>To: Open MPI Users
> >>Subject: Re: [OMPI users] Pernode request
> >>
> >>Hi Chris
> >>
> >>Thanks for the patience and the clarification - much appreciated.
> >>In fact, I have someone who needs to learn more about the code
> >>base, so I think I will assign this to him. At the least, he will
> >>have to learn a lot more about the mapper!
> >>
> >>I have no problem with modifying the pernode behavior to handle the
> >>case of someone specifying -np as you describe. It would be
> >>relatively easy to check. As I understand it, you want the behavior
> >>to be:
> >>
> >>1. if no -np is specified, launch one proc/node across the entire
> >>allocation
> >>
> >>2. if -np n is specified AND n is less than the number of allocated
> >>nodes, launch one proc/node up to the specified number. Of course,
> >>this is identical to just doing -np n -bynode, but that's
> >>immaterial. ;-)
> >>
> >>3. if -np n is specified AND n is greater than the number of
> >>allocated nodes, print an error message and exit
> >>
> >>4. add a -npernode n option that launches n procs/node, subject to
> >>the same tests above.
> >>
> >>Can you confirm?
> >>
> >>Finally, I think you misunderstood my comment about the MPI folks.
> >>Our non-MPI users couldn't care less about commonality of command
> >>line arguments across MPI implementations. Hence, I leave issues in
> >>that area to the MPI members of the team - they are the ones who
> >>decide how to deal with the myriad of different option syntaxes in
> >>the MPI community.
> >>
> >>Gives me too much of a headache! :-)
> >>
> >>Ralph
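P.S. To make the two placements concrete, here is a rough sketch of the rank-to-node assignment under each scheme. This is purely illustrative Python of my own - the function names and the list-based model are mine, not Open MPI's mapper code:

---
def map_bynode(nodes, x):
    # Round-robin: one proc per node per pass, wrapping around until
    # every node holds x procs (the mapping you describe for npernode).
    return [node for _ in range(x) for node in nodes]

def map_byslot(nodes, x):
    # Fill: x consecutive procs on a node before moving to the next
    # (the "by-X-slot" launch I am asking about).
    return [node for node in nodes for _ in range(x)]

nodes = ["an53", "an54", "an55", "an56"]

print(map_bynode(nodes, 2))
# ['an53', 'an54', 'an55', 'an56', 'an53', 'an54', 'an55', 'an56']

print(map_byslot(nodes, 2))
# ['an53', 'an53', 'an54', 'an54', 'an55', 'an55', 'an56', 'an56']
---

Put differently, with N nodes and X procs/node, rank r lands on node (r mod N) under the wrap-around scheme and on node (r div X) under the by-X-slot scheme (0-based).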
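And just to restate points 1-4 in code form, here is how I read the agreed pernode rules - again only my sketch, not your implementation:

---
def pernode_count(num_nodes, np=None):
    # Rule 1: no -np given -> one proc on every allocated node.
    if np is None:
        return num_nodes
    # Rule 3: more procs requested than allocated nodes -> error out.
    if np > num_nodes:
        raise SystemExit("error: -np exceeds the allocated node count")
    # Rule 2: np <= nodes -> one proc/node up to np
    # (same result as -np np -bynode).
    return np

# Rule 4: -npernode n would apply the same checks with n procs/node.
---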