>>>>>>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>>>>>>> Brian
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> ... of the orte
>>>>>>>>>>>>>>>> server somewhere?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> MPI_Comm parent;
>>>>>>>>>>>> MPI_Comm_get_parent(&parent);
>>>>>>>>>>>>
>>>>>>>>>>>> if(parent == MPI_COMM_NULL) {
>>>>>>>>>>>>     std::cerr << "slave has no parent" << std::endl;
>>>>>>>>>>>>     MPI_Finalize();
>>>>>>>>>>>>     return 0;
>>>>>>>>>>>> }
>>>>>>>>>>>>
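For context, a complete slave program built around that fragment might look like the following sketch; the file name slave.cpp and the diagnostic text are illustrative assumptions rather than code taken from the thread.

// slave.cpp -- minimal spawned-slave sketch (illustrative, not from the thread)
#include <mpi.h>
#include <iostream>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    MPI_Comm parent;
    MPI_Comm_get_parent(&parent);          // intercommunicator back to the master

    if (parent == MPI_COMM_NULL) {         // launched directly, not via spawn
        std::cerr << "slave has no parent" << std::endl;
        MPI_Finalize();
        return 0;
    }

    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    std::cout << "slave " << rank << " started by a master" << std::endl;

    // ... exchange work with the master over 'parent' ...

    MPI_Finalize();
    return 0;
}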
>>>>>>>>>>> ... wrote:
>>>>>>>>>>>> It really is just that simple :-)
>>>>>>>>>>>>
>>>>>>>>>>>> On Aug 22, 2012, at 8:56 AM, Brian Budge wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> ... 192.168.0.1,
>>>>>>>>>>>>
>>>>>>>>>>>> 0 > echo 192.168.0.11 > /tmp/hostfile
>>>>>>>>>>>> 1 > echo 192.168.0.12 >> /tmp/hostfile
>>>>>>>>>>>> 2 > export O...
>>>>>>>>>>>>>> ... I had stated that a
>>>>>>>>>>>>>> singleton could
>>>>>>>>>>>>>> comm_spawn onto other nodes listed in a hostfile by setting an
>>>>>>>>>>>>>> environmental variable.
>>>>>>>>>>> Sure, that's still true on all 1.3 or above releases. All you need
>>>>>>>>>>> to do is set the hostfile envar so we pick it up:
>>>>>>>>>>>
>>>>>>>>>>>> On 1/4/08 5:10 AM, "Elena Zhebel" wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Hello Ralph,
>>>>>>>>>>>>>
>>>>>>>>>>> ... in an earlier message that went there. I had stated that a
>>>>>>>>>>> singleton could
>>>>>>>>>>> comm_spawn onto other nodes listed in a hostfile by setting an
>>>>>>>>>>> environmental variable.
>>>>>>>>>>>
>>>>>>>>>>> octocore01 slots=8 max_slots=8
>>>>>>>>>>> octocore02 slots=8 max_slots=8
>>>>>>>>>>> clstr000 slots=2 max_slots=3
>>>>>>>>>>> clstr001 slots=2 max_slots=3
>>>>>>>>>
>>>>>>>>>> - setenv OMPI_MCA_rds_hostfile_path my_hostfile (I put it in
>>>>>>>>>> .tcshrc and
>>>>>>>>>> then source .tcshrc)
>>>>>>>>>> - in my_master.cpp I did
>>>>>>>>>> MPI_Info info1;
>>>>>>>>>> ...
>>>>>>>>>
>>>>>>>>> For the case
>>>>>>>>> mpirun -n 1 -hostfile my_hostfile -host my_master_host my_master.exe
>>>>>>>>> everything works.
>>>>>>>>>
>>>>>>>>> For the case ...
>>>>>>>>>
>>>>>>>> ...my_master.exe (i.e., you gave -n 1 to mpirun), then we will
>>>>>>>> automatically map that process onto the first host in your hostfile.
>>>>>>>>
>>>>>>> ...
>>>>>>> clstr002 slots=2 max_slots=3
>>>>>>> clstr003 slots=2 max_slots=3
>>>>>>> clstr004 slots=2 max_slots=3
>>>>>>> clstr005 slots=2 max_slots=3
>>>>>>> clstr006 slots=2 max_slots=3
>>>>>>> clstr007 slots=2 max_slots=3
>>>>
core02";
>>>>>> MPI_Info_set(info1, "host", hostname);
>>>>>>
>>>>>> _intercomm = intracomm.Spawn("./childexe", argv1, _nProc, info1, 0,
>>>>>> MPI_ERRCODES_IGNORE);
>>>>>>
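For reference, a self-contained master along the same lines might look like the sketch below, using the plain C API (MPI_Info_create / MPI_Info_set / MPI_Comm_spawn). The program name ./childexe and the "host" info key come from the thread; the host list, process count, and use of MPI_COMM_SELF here are placeholder assumptions. The hosts named in the info object must be part of the current allocation (for example, listed in the hostfile), which is exactly the condition the error quoted just below complains about.

// master.cpp -- sketch of spawning slaves on named hosts (placeholder values)
#include <mpi.h>
#include <iostream>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    MPI_Info info;
    MPI_Info_create(&info);
    // The hosts named here must be available to the job, e.g. via the hostfile
    // picked up from the environment; otherwise the spawn is refused.
    MPI_Info_set(info, "host", "octocore01,octocore02");   // placeholder host list

    const int nprocs = 2;                                   // placeholder count
    int errcodes[nprocs];
    MPI_Comm intercomm;
    MPI_Comm_spawn("./childexe", MPI_ARGV_NULL, nprocs, info, 0,
                   MPI_COMM_SELF, &intercomm, errcodes);

    for (int i = 0; i < nprocs; ++i)
        if (errcodes[i] != MPI_SUCCESS)
            std::cerr << "spawn of child " << i << " failed" << std::endl;

    MPI_Info_free(&info);
    MPI_Finalize();
    return 0;
}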
>>>>>>
>>>>> ...: 1
>>>>> --
>>>>> Some of the requested hosts are not included in the current allocation for
>>>>> the application:
>>>>> ./childexe
>>>>> The requested hosts were: ...
>>>>
>>>> ./childexe
>>>> The requested hosts were:
>>>> clstr002,clstr003,clstr005,clstr006,clstr007,octocore01,octocore02
>>>>
>>>> Verify that you have mapped the allocated resources properly using the
>>>> --host specification.
>>>> ---
>>> [bollenstreek:21443] [0,0,0] ORTE_ERROR_LOG: Out of resource in file
>>> base/rmaps_base_support_fns.c at line 225
>>> [bollenstreek:21443] [0,0,0] ORTE_ERROR_LOG: Out of resource in file
>>> rmaps_rr.c at line 478
>>> [bollenstreek:21443] [0,0,0] ORTE_ERROR_LOG: Out of resource in file
>>> base/rmaps_base_map_job.c at line ...
>>
>> base/rmaps_base_map_job.c at line 210
>> [bollenstreek:21443] [0,0,0] ORTE_ERROR_LOG: Out of resource in file
>> rmgr_urm.c at line 372
>> [bollenstreek:21443] [0,0,0] ORTE_ERROR_LOG: Out of resource in file
>> communicator/comm_dyn.c at line 608
>>
>> Did ...
>>
> ...environment instead of on the
> command line. Just set OMPI_MCA_rds_hostfile_path = my.hosts.
>
> You can then just run ./my_master.exe on the host where you want the master
> to reside - everything should work the same.
>
> Just as an FYI: the name of that environmental variable is going to ...
>
> -Original Message-
> From: Ralph H Castain [mailto:r...@lanl.gov]
> Sent: Monday, December 17, 2007 5:49 PM
> To: Open MPI Users ; Elena Zhebel
> Cc: Ralph H Castain
> Subject: Re: [OMPI users] MPI::Intracomm::Spawn and cluster configuration
>
>
>
>
> On 12/17/07 8:19 AM, "Elena Zhebel" wrote:
>
>
... be available in a future
release - TBD.
Hope that helps
Ralph
>
> Thanks and regards,
> Elena
>
> -Original Message-
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
> Behalf Of Ralph H Castain
> Sent: Monday, December 17, 2007 3:31 PM
To: Open MPI Users
Cc: Ralph H Castain
Subject: Re: [OMPI users] MPI::Intracomm::Spawn and cluster configuration
On 12/12/07 5:46 AM, "Elena Zhebel" wrote:
>
>
> Hello,
>
>
>
> I'm working on a MPI application where I'm using OpenMPI instead of MPICH.
>
> In my "master" program I call the function MPI::Intracomm::Spawn which spawns
> "slave" processes. It is not clear for me how to spawn the
Try using the info parameter in MPI::Intracomm::Spawn().
In this structure you can specify the hosts on which you want to spawn.
Info parameters for MPI spawn:
http://www.mpi-forum.org/docs/mpi-20-html/node97.htm
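As a rough illustration of that advice, a master using the MPI-2 C++ bindings could look like the sketch below; the executable name ./childexe appears elsewhere in the thread, while the host name and process count are made-up placeholders. (The C++ bindings were deprecated in MPI-2.2 and removed in MPI-3, so the C API version shown earlier is the longer-lived form.)

// spawn_cxx.cpp -- sketch of MPI::Intracomm::Spawn with an info "host" key
#include <mpi.h>

int main(int argc, char** argv) {
    MPI::Init(argc, argv);

    MPI::Info info = MPI::Info::Create();
    info.Set("host", "clstr002");                       // placeholder target host

    // Spawn 2 copies of ./childexe on the host named in the info object.
    MPI::Intercomm children =
        MPI::COMM_WORLD.Spawn("./childexe", MPI::ARGV_NULL, 2, info, 0);

    // ... communicate with the spawned processes over 'children' ...

    info.Free();
    MPI::Finalize();
    return 0;
}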
2007/12/12, Elena Zhebel :
>
> Hello,
>
> I'm working on a MPI application where I'm using OpenMPI instead of MPICH. ...