Hi Thatyene and Ralph.

Now I have the solution and it works fine. I did not try spawn_multiple
because I had read the same documentation.

Thanks so much!

On Tue, May 17, 2011 at 5:13 PM, Ralph Castain <r...@open-mpi.org> wrote:

> <laugh> Thanks for pointing this out - it's an error in our man page. I've
> fixed it on our devel trunk and will get it pushed to the release.
>
>
> On May 16, 2011, at 1:14 PM, Thatyene Louise Alves de Souza Ramos wrote:
>
> Ralph, thank you for the reply.
>
> I just tried what you said and it works! I didn't think to try the array of
> info arguments because in the spawn_multiple documentation I read the
> following:
>
> "... *array_of_info*, is an array of *info *arguments; however, *only the
> first argument in that array is used. Any subsequent arguments in the array
> are ignored* because an *info* argument applies to the entire job that is
> spawned, and cannot be different for each executable in the job. See the
> INFO ARGUMENTS section for more information."
>
> Anyway, I'm glad it works!
>
> Thank you very much!
>
> Regards.
>
> Thatyene Ramos
>
> On Mon, May 16, 2011 at 3:47 PM, Ralph Castain <r...@open-mpi.org> wrote:
>
>> You need to use MPI_Comm_spawn_multiple. Despite the name, it results in a
>> single communicator being created by a single launch - it just allows you to
>> specify multiple applications to run.
>>
>> In this case, we use the same app, but give each element a different
>> "host" info key to get the behavior we want. Looks something like this:
>>
>>     MPI_Comm child;
>>     /* three elements: the same executable, but a different host for each */
>>     char *cmds[3] = {"myapp", "myapp", "myapp"};
>>     MPI_Info info[3];
>>     int maxprocs[3] = { 1, 3, 1 };
>>
>>     MPI_Info_create(&info[0]);
>>     MPI_Info_set(info[0], "host", "m1");   /* 1 proc on m1 */
>>
>>     MPI_Info_create(&info[1]);
>>     MPI_Info_set(info[1], "host", "m2");   /* 3 procs on m2 */
>>
>>     MPI_Info_create(&info[2]);
>>     MPI_Info_set(info[2], "host", "m3");   /* 1 proc on m3 */
>>
>>     MPI_Comm_spawn_multiple(3, cmds, MPI_ARGVS_NULL, maxprocs,
>>                             info, 0, MPI_COMM_WORLD,
>>                             &child, MPI_ERRCODES_IGNORE);
>>
>> I won't claim the above is correct - but it gives the gist of the idea.
>>
>>
>> On May 16, 2011, at 12:19 PM, Thatyene Louise Alves de Souza Ramos wrote:
>>
>> Ralph,
>>
>> I have the same issue and I've been searching for how to do this, but I
>> couldn't find anything.
>>
>> What exactly must the string in the host info key be to do what Rodrigo
>> described?
>>
>> <<< Inside your master, you would create an MPI_Info key "host" that has
>> a value
>> <<< consisting of a string "host1,host2,host3" identifying the hosts you
>> want
>> <<< your slave to execute upon. Those hosts must have been included in
>> <<< my_hostfile. Include that key in the MPI_Info array passed to your
>> Spawn.
>>
>> I tried to do what you said above, but ompi ignores the repetition of
>> hosts. Using Rodrigo's example, I set
>>
>> host info key = "m1,m2,m2,m2,m3" with number of processes = 5, and the
>> result was
>>
>> m1 -> 2
>> m2 -> 2
>> m3 -> 1
>>
>> and not
>>
>> m1 -> 1
>> m2 -> 3
>> m3 -> 1
>>
>> as I wanted.
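>>
>> For reference, here is a minimal sketch of the call I made (the
>> executable name "slave" is just a placeholder, and error handling is
>> omitted):
>>
>>     MPI_Comm child;
>>     MPI_Info info;
>>
>>     MPI_Info_create(&info);
>>     /* repeat m2 three times, hoping to get 3 procs there */
>>     MPI_Info_set(info, "host", "m1,m2,m2,m2,m3");
>>
>>     /* one spawn call: 5 copies of a single executable */
>>     MPI_Comm_spawn("slave", MPI_ARGV_NULL, 5, info, 0,
>>                    MPI_COMM_WORLD, &child, MPI_ERRCODES_IGNORE);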
>>
>> Thanks in advance.
>>
>> Thatyene Ramos
>>
>> On Fri, May 13, 2011 at 9:16 PM, Ralph Castain <r...@open-mpi.org> wrote:
>>
>>> I believe I answered that question. You can use the hostfile info key, or
>>> you can use the host info key - either one will do what you require.
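>>>
>>> As a rough sketch of the hostfile info key (untested; the file name here
>>> is just an example):
>>>
>>>     MPI_Comm child;
>>>     MPI_Info info;
>>>
>>>     MPI_Info_create(&info);
>>>     /* point the spawn at a hostfile listing the target nodes */
>>>     MPI_Info_set(info, "hostfile", "my_hostfile");
>>>
>>>     MPI_Comm_spawn("myapp", MPI_ARGV_NULL, 5, info, 0,
>>>                    MPI_COMM_WORLD, &child, MPI_ERRCODES_IGNORE);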
>>>
>>> On May 13, 2011, at 4:11 PM, Rodrigo Silva Oliveira wrote:
>>>
>>> Hi,
>>>
>>> I think I was not specific enough. I need to spawn the copies of a
>>> process in a single mpi_spawn call. That is, I have to specify a list of
>>> machines and how many copies of the process will be spawned on each one.
>>> Is it possible?
>>>
>>> It would be something like this:
>>>
>>> machines     #copies
>>> m1                1
>>> m2                3
>>> m3                1
>>>
>>> After a single call to spawn, I want the copies running in this fashion.
>>> I tried using a hostfile with the slots option, but I'm not sure if it is
>>> the best way.
>>>
>>> hostfile:
>>>
>>> m1 slots=1
>>> m2 slots=3
>>> m3 slots=1
>>>
>>> Thanks
>>>
>>> --
>>> Rodrigo Silva Oliveira
>>> M.Sc. Student - Computer Science
>>> Universidade Federal de Minas Gerais
>>> www.dcc.ufmg.br/~rsilva



-- 
Rodrigo Silva Oliveira
M.Sc. Student - Computer Science
Universidade Federal de Minas Gerais
www.dcc.ufmg.br/~rsilva
