Re: [OMPI users] MPI::Intracomm::Spawn and cluster configuration

2012-09-03 Thread Brian Budge
… Thanks, Brian …

Re: [OMPI users] MPI::Intracomm::Spawn and cluster configuration

2012-09-03 Thread Ralph Castain
…y of the orte server somewhere? …

Re: [OMPI users] MPI::Intracomm::Spawn and cluster configuration

2012-08-31 Thread Brian Budge
…
MPI_Comm_get_parent(&parent);
if(parent == MPI_COMM_NULL) {
    std::cerr << "slave ha…
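
That fragment is the standard child-side check: a spawned process gets back an intercommunicator to its parent, while a directly launched one gets MPI_COMM_NULL. A minimal self-contained sketch of that slave-side pattern (file layout and error text are illustrative, not the poster's actual code):

    #include <mpi.h>
    #include <iostream>

    int main(int argc, char** argv)
    {
        MPI_Init(&argc, &argv);

        MPI_Comm parent;
        MPI_Comm_get_parent(&parent);   // intercomm to the master, if spawned

        if (parent == MPI_COMM_NULL) {
            // Started directly (e.g. via mpirun), not via Spawn.
            std::cerr << "slave has no parent" << std::endl;
        } else {
            // Spawned: talk to the master over the intercommunicator here.
        }

        MPI_Finalize();
        return 0;
    }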

Re: [OMPI users] MPI::Intracomm::Spawn and cluster configuration

2012-08-31 Thread Ralph Castain
…
MPI_Finalize();
return 0;
}
…

Re: [OMPI users] MPI::Intracomm::Spawn and cluster configuration

2012-08-31 Thread Brian Budge
… wrote: It really is just that simple :-) On Aug 22, 2012, at 8:56 AM, Brian Budge …

Re: [OMPI users] MPI::Intracomm::Spawn and cluster configuration

2012-08-30 Thread Ralph Castain
…192.168.0.1,

0 > echo 192.168.0.11 > /tmp/hostfile
1 > echo 192.168.0.12 >> /tmp/hostfile
2 > export O…

Re: [OMPI users] MPI::Intracomm::Spawn and cluster configuration

2012-08-30 Thread Brian Budge
…that a singleton could comm_spawn onto other nodes listed in a hostfile by setting an environmental …

Re: [OMPI users] MPI::Intracomm::Spawn and cluster configuration

2012-08-28 Thread Brian Budge
… Sure, that's still true on all 1.3 or above releases. All you need to do is set the hostfile envar so we pick it up: …
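
The snippet cuts off before naming the envar. On the 1.3 series and later the hostfile MCA parameter is orte_default_hostfile, so the recipe presumably amounts to something like this (variable name is an assumption, paths illustrative):

    # Point the Open MPI runtime at a hostfile, then start the master as a
    # singleton; comm_spawn can then place children on the listed nodes.
    export OMPI_MCA_orte_default_hostfile=/tmp/hostfile
    ./my_master.exe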

Re: [OMPI users] MPI::Intracomm::Spawn and cluster configuration

2012-08-28 Thread Ralph Castain
… On 1/4/08 5:10 AM, "Elena Zhebel" wrote: Hello Ralph, …

Re: [OMPI users] MPI::Intracomm::Spawn and cluster configuration

2012-08-28 Thread Brian Budge
… in an earlier message that went there. I had stated that a singleton could comm_spawn onto other nodes listed in a hostfile by setting an …

Re: [OMPI users] MPI::Intracomm::Spawn and cluster configuration

2012-08-28 Thread Ralph Castain
…
octocore01 slots=8 max_slots=8
octocore02 slots=8 max_slots=8
clstr000 slots=2 max_slots=3
clstr001 slots=2 max_slots=3
…

Re: [OMPI users] MPI::Intracomm::Spawn and cluster configuration

2012-08-28 Thread Brian Budge
…_rds_hostfile_path my_hostfile (I put it in .tcshrc and then source .tcshrc) - in my_master.cpp I did MPI_Info info1; …
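
For the .tcshrc route described above, the setting would look roughly like this (tcsh syntax; the variable name is completed from Ralph's later post naming OMPI_MCA_rds_hostfile_path):

    # In ~/.tcshrc:
    setenv OMPI_MCA_rds_hostfile_path my_hostfile
    # Then pick it up in the current shell:
    source ~/.tcshrc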

Re: [OMPI users] MPI::Intracomm::Spawn and cluster configuration

2012-08-28 Thread Ralph Castain
… For the case
mpirun -n 1 -hostfile my_hostfile -host my_master_host my_master.exe
everything works. For…

Re: [OMPI users] MPI::Intracomm::Spawn and cluster configuration

2012-08-28 Thread Brian Budge
….exe (i.e., you gave -n 1 to mpirun), then we will automatically map that process onto the first host in your hostfile. …

Re: [OMPI users] MPI::Intracomm::Spawn and cluster configuration

2012-08-28 Thread Ralph Castain
…max_slots=3
clstr003 slots=2 max_slots=3
clstr004 slots=2 max_slots=3
clstr005 slots=2 max_slots=3
clstr006 slots=2 max_slots=3
clstr007 slots=2 max_slots=3
…

Re: [OMPI users] MPI::Intracomm::Spawn and cluster configuration

2012-08-28 Thread Brian Budge
…octocore02";
MPI_Info_set(info1, "host", hostname);
_intercomm = intracomm.Spawn("./childexe", argv1, _nProc, info1, 0,
                             MPI_ERRCODES_IGNORE);
…
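
Pieced together from the fragments quoted above, the master side presumably looked roughly like the sketch below. The names _intercomm, _nProc, info1, and intracomm come from the quoted code; the MPI_Info_create/MPI_Info_free calls, the child count, and the use of MPI::ARGV_NULL in place of the unshown argv1 are assumptions:

    #include <mpi.h>

    int main(int argc, char** argv)
    {
        MPI::Init(argc, argv);

        // Ask the runtime to start the children on a specific host
        // (octocore02 appears in the hostfile quoted earlier in the thread).
        MPI_Info info1;
        MPI_Info_create(&info1);                    // assumed: only the Set call survives
        MPI_Info_set(info1, "host", "octocore02");

        int _nProc = 2;                             // illustrative child count

        // Spawn returns an intercommunicator connecting master and children;
        // the C MPI_Info handle converts to the C++ Info object, as the
        // original post's mixed C/C++ style relied on.
        MPI::Intracomm intracomm = MPI::COMM_WORLD;
        MPI::Intercomm _intercomm = intracomm.Spawn(
            "./childexe", MPI::ARGV_NULL, _nProc, MPI::Info(info1), 0,
            MPI_ERRCODES_IGNORE);

        MPI_Info_free(&info1);
        MPI::Finalize();
        return 0;
    }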

Re: [OMPI users] MPI::Intracomm::Spawn and cluster configuration

2012-08-22 Thread Ralph Castain
…: 1
--------------------------------------------------------------------------
Some of the requested hosts are not included in the current allocation for
the application:
  ./childexe
The re…

Re: [OMPI users] MPI::Intracomm::Spawn and cluster configuration

2012-08-22 Thread Brian Budge
… ./childexe
The requested hosts were:
clstr002,clstr003,clstr005,clstr006,clstr007,octocore01,octocore02

Verify that you have mapped the allocated resources properly using the
--host specification.
---…

Re: [OMPI users] MPI::Intracomm::Spawn and cluster configuration

2012-08-22 Thread Ralph Castain
…resource in file base/rmaps_base_support_fns.c at line 225
[bollenstreek:21443] [0,0,0] ORTE_ERROR_LOG: Out of resource in file rmaps_rr.c at line 478
[bollenstreek:21443] [0,0,0] ORTE_ERROR_LOG: Out of resource in file base/rmaps_ba…

Re: [OMPI users] MPI::Intracomm::Spawn and cluster configuration

2012-08-21 Thread Brian Budge
…base/rmaps_base_map_job.c at line 210
[bollenstreek:21443] [0,0,0] ORTE_ERROR_LOG: Out of resource in file rmgr_urm.c at line 372
[bollenstreek:21443] [0,0,0] ORTE_ERROR_LOG: Out of resource in file communicator/comm_dyn.c at line 608

Did…

Re: [OMPI users] MPI::Intracomm::Spawn and cluster configuration

2008-01-04 Thread Ralph Castain
…environment instead of on the command line. Just set OMPI_MCA_rds_hostfile_path = my.hosts. You can then just run ./my_master.exe on the host where you want the master to reside - everything should work the same. Just as an FYI: the name of that environmental variable is going to…
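
In other words, instead of passing -hostfile to mpirun, the pre-1.3 recipe is to export the parameter and start the master directly as a singleton. A sketch in sh syntax, using the names from Ralph's post (his post also warns the variable name was due to change in a later release):

    export OMPI_MCA_rds_hostfile_path=my.hosts
    ./my_master.exe    # master runs where launched; spawns draw on the hostfile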

Re: [OMPI users] MPI::Intracomm::Spawn and cluster configuration

2007-12-18 Thread Ralph H Castain
…[mailto:r...@lanl.gov]
Sent: Monday, December 17, 2007 5:49 PM
To: Open MPI Users; Elena Zhebel
Cc: Ralph H Castain
Subject: Re: [OMPI users] MPI::Intracomm::Spawn and cluster configuration

On 12/17/07 8:19 AM, "Elena Zhebel" wrote: …

Re: [OMPI users] MPI::Intracomm::Spawn and cluster configuration

2007-12-17 Thread Ralph H Castain
…e available in a future release - TBD. Hope that helps. Ralph

Thanks and regards,
Elena

-----Original Message-----
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
Behalf Of Ralph H Castain
Sent: Monday, December 17, 2007 3:31 PM…

Re: [OMPI users] MPI::Intracomm::Spawn and cluster configuration

2007-12-17 Thread Elena Zhebel
…From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
Behalf Of Ralph H Castain
Sent: Monday, December 17, 2007 3:31 PM
To: Open MPI Users
Cc: Ralph H Castain
Subject: Re: [OMPI users] MPI::Intracomm::Spawn and cluster configuration

On 12/12/07 5:46 AM, "Elena Zhebel" wrote: …

Re: [OMPI users] MPI::Intracomm::Spawn and cluster configuration

2007-12-17 Thread Ralph H Castain
On 12/12/07 5:46 AM, "Elena Zhebel" wrote:

Hello,

I'm working on an MPI application where I'm using Open MPI instead of MPICH.
In my "master" program I call the function MPI::Intracomm::Spawn, which
spawns "slave" processes. It is not clear to me how to spawn the…

Re: [OMPI users] MPI::Intracomm::Spawn and cluster configuration

2007-12-16 Thread Bruno Coutinho
Try using the info parameter of MPI::Intracomm::Spawn(). In this structure
you can specify on which hosts you want to spawn. Info parameters for MPI
spawn: http://www.mpi-forum.org/docs/mpi-20-html/node97.htm

2007/12/12, Elena Zhebel: Hello, I'm working on an MPI application where I'm usin…
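
Using the reserved keys from that page ("host", "wdir", and so on), a spawn restricted to a particular host would look roughly like the sketch below with the MPI-2 C++ bindings; the host name, working directory, and child count are illustrative, not from the thread:

    #include <mpi.h>

    int main(int argc, char** argv)
    {
        MPI::Init(argc, argv);

        // Reserved MPI-2 spawn info keys; see node97.htm linked above.
        MPI::Info info = MPI::Info::Create();
        info.Set("host", "clstr002");         // start the children on this host
        info.Set("wdir", "/home/elena/run");  // their working directory

        // Spawn four children and get back the intercommunicator to them.
        MPI::Intercomm children =
            MPI::COMM_WORLD.Spawn("./slave", MPI::ARGV_NULL, 4, info, 0);

        info.Free();
        MPI::Finalize();
        return 0;
    }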