As far as I understand, the tcp btl is OK with MPI_THREAD_MULTIPLE.
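
If in doubt, the application can request that level explicitly and check
what the library actually granted; a minimal sketch (untested) in C:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) {
        fprintf(stderr, "got thread level %d, need MPI_THREAD_MULTIPLE\n",
                provided);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }
    /* ... rest of the application ... */
    MPI_Finalize();
    return 0;
}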

Cheers,

Gilles

On Monday, April 25, 2016, dpchoudh . <dpcho...@gmail.com> wrote:

> Hello Gilles
>
> That idea crossed my mind as well, but I was under the impression that
> MPI_THREAD_MULTIPLE is not very well supported in Open MPI. I believe it is
> not supported by the openib BTL, but the original poster seems to be using
> TCP. Does it work for TCP?
>
> Thanks
> Durga
>
> 1% of the executables have 99% of CPU privilege!
> Userspace code! Unite!! Occupy the kernel!!!
>
> On Sun, Apr 24, 2016 at 10:48 AM, Gilles Gouaillardet <
> gilles.gouaillar...@gmail.com> wrote:
>
>> Another option is to use MPI_THREAD_MULTIPLE and call MPI_Recv() on the
>> master task in a dedicated thread, using a unique tag (or an MPI_Comm_dup()
>> of MPI_COMM_WORLD) to separate the traffic.
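>>
>> A rough, untested sketch of that first option; everything outside the MPI
>> calls (the single-int "percent done" message, the fake work loop) is a
>> made-up placeholder:
>>
>> #include <mpi.h>
>> #include <pthread.h>
>> #include <stdio.h>
>> #include <unistd.h>
>>
>> static MPI_Comm progress_comm;  /* dup of MPI_COMM_WORLD for progress traffic */
>> static int nworkers;
>>
>> static void *listener(void *arg)
>> {
>>     int finished = 0;
>>     while (finished < nworkers) {
>>         int percent;
>>         MPI_Status st;
>>         MPI_Recv(&percent, 1, MPI_INT, MPI_ANY_SOURCE, 0, progress_comm, &st);
>>         printf("rank %d is %d%% done\n", st.MPI_SOURCE, percent); /* or update the UI */
>>         if (percent >= 100) finished++;
>>     }
>>     return NULL;
>> }
>>
>> int main(int argc, char **argv)
>> {
>>     int provided, rank, size;
>>     MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
>>     /* a real program should check provided == MPI_THREAD_MULTIPLE here */
>>     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>>     MPI_Comm_size(MPI_COMM_WORLD, &size);
>>     MPI_Comm_dup(MPI_COMM_WORLD, &progress_comm);
>>     nworkers = size - 1;
>>
>>     pthread_t tid;
>>     if (rank == 0)
>>         pthread_create(&tid, NULL, listener, NULL);
>>
>>     if (rank != 0) {
>>         for (int p = 10; p <= 100; p += 10) {  /* stand-in for the real work */
>>             sleep(1);                          /* pretend to evaluate the function */
>>             MPI_Send(&p, 1, MPI_INT, 0, 0, progress_comm);
>>         }
>>     }
>>
>>     if (rank == 0)
>>         pthread_join(tid, NULL);
>>     MPI_Comm_free(&progress_comm);
>>     MPI_Finalize();
>>     return 0;
>> }
>>
>> Compile with something like "mpicc -pthread"; the MPI_Comm_dup() keeps the
>> progress messages from ever matching the application's regular traffic.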
>>
>> If this is not the desired design, then the master task has to post
>> MPI_Irecv() and "poll" with MPI_Probe() / MPI_Test() and friends.
>> Note it is also possible to use the non-blocking collectives (MPI_Ibcast(),
>> MPI_Iscatter() and MPI_Igather()) and poll both the collectives and the
>> progress messages.
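>>
>> An untested fragment of that polling approach on the master; PROGRESS_TAG,
>> CHUNK and NPROCS are made-up placeholders, and the workers are assumed to
>> send an occasional int "percent done" with that tag:
>>
>> enum { PROGRESS_TAG = 42 };
>> double local[CHUNK], results[CHUNK * NPROCS];
>> MPI_Request gather_req;
>>
>> MPI_Igather(local, CHUNK, MPI_DOUBLE, results, CHUNK, MPI_DOUBLE,
>>             0, MPI_COMM_WORLD, &gather_req);
>>
>> int done = 0;
>> while (!done) {
>>     MPI_Test(&gather_req, &done, MPI_STATUS_IGNORE);  /* progresses the collective */
>>
>>     int flag;
>>     MPI_Status st;
>>     MPI_Iprobe(MPI_ANY_SOURCE, PROGRESS_TAG, MPI_COMM_WORLD, &flag, &st);
>>     if (flag) {
>>         int percent;
>>         MPI_Recv(&percent, 1, MPI_INT, st.MPI_SOURCE, PROGRESS_TAG,
>>                  MPI_COMM_WORLD, MPI_STATUS_IGNORE);
>>         /* feed (st.MPI_SOURCE, percent) to the plotting code here */
>>     }
>> }
>>
>> The probe on a user tag will not catch the collective's own traffic, since
>> collectives match on an internal context rather than on point-to-point tags.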
>>
>> Cheers,
>>
>> Gilles
>>
>> On Sunday, April 24, 2016, dpchoudh . <dpcho...@gmail.com> wrote:
>>
>>> Hello
>>>
>>>
>>> I am not sure I am understanding your requirements correctly, but based on
>>> what I think they are, how about this: do an MPI_Send() from all the
>>> non-root nodes to the root node and pack all the progress-related data into
>>> that send. Use a special tag for this message so that it stands out from
>>> 'regular' sends. The root node posts a non-blocking receive on this tag
>>> from all the nodes it expects this message from.
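>>>
>>> Something along these lines, perhaps (untested; PROGRESS_TAG, NWORKERS and
>>> the single-int progress payload are just illustrations):
>>>
>>> #define PROGRESS_TAG 42
>>>
>>> /* worker side, inside the evaluation loop; points_done / points_total are
>>>    whatever counters the application already keeps */
>>> int percent = (int)(100.0 * points_done / points_total);
>>> MPI_Send(&percent, 1, MPI_INT, 0, PROGRESS_TAG, MPI_COMM_WORLD);
>>>
>>> /* root side: one pre-posted non-blocking receive per non-root rank */
>>> int percent_of[NWORKERS];
>>> MPI_Request req[NWORKERS];
>>> for (int i = 0; i < NWORKERS; i++)
>>>     MPI_Irecv(&percent_of[i], 1, MPI_INT, i + 1, PROGRESS_TAG,
>>>               MPI_COMM_WORLD, &req[i]);
>>> /* then periodically call MPI_Testany() / MPI_Testsome() on req[], update
>>>    the plot, and re-post the receive for whichever rank just reported */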
>>>
>>> Would that work?
>>>
>>> Durga
>>>
>>>
>>> 1% of the executables have 99% of CPU privilege!
>>> Userspace code! Unite!! Occupy the kernel!!!
>>>
>>> On Sun, Apr 24, 2016 at 7:05 AM, MM <finjulh...@gmail.com> wrote:
>>>
>>>> Hello,
>>>>
>>>> In a miniature setup of 3 Linux quad-core boxes linked via 1 Gbit
>>>> Ethernet, I have a UI that runs on 1 of the 3 boxes, and that box is the
>>>> root of the communicator.
>>>> I have a function of up to 10 parameters that takes about 1 second per
>>>> evaluation; my parameter space fits in the memory of the root, and its
>>>> size is N ~ 1 million.
>>>>
>>>> I use broadcast/scatter/gather to collect the value of my function at
>>>> each of the 1 million points, dividing the 1 million by the number of
>>>> nodes (11 in total: the root box has 1 core/thread assigned to the UI,
>>>> 1 core/thread for the root's own evaluation of the function on its part
>>>> of the parameter space, and its 2 other cores run non-root nodes; the 2
>>>> other boxes run only non-root nodes).
>>>>
>>>> The root node does:
>>>> 1. broadcast needed data
>>>> 2. scatter param space
>>>> 3. evaluate function locally
>>>> 4. gather results from this and all other nodes
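>>>>
>>>> In rough code (a simplified sketch; every name and count below is a
>>>> placeholder for the real application), each rank essentially runs:
>>>>
>>>> MPI_Bcast(shared_data, shared_len, MPI_DOUBLE, 0, MPI_COMM_WORLD);
>>>> MPI_Scatter(param_space, chunk * NPARAMS, MPI_DOUBLE,
>>>>             my_params,   chunk * NPARAMS, MPI_DOUBLE, 0, MPI_COMM_WORLD);
>>>> for (int i = 0; i < chunk; i++)
>>>>     my_results[i] = evaluate(&my_params[i * NPARAMS]);  /* ~1 s per call */
>>>> MPI_Gather(my_results, chunk, MPI_DOUBLE,
>>>>            all_results, chunk, MPI_DOUBLE, 0, MPI_COMM_WORLD);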
>>>>
>>>> How would I go about having the non-root nodes send a sort of progress
>>>> status to the root node, which would be used for plotting results on the
>>>> root box as soon as they become available?
>>>>
>>>> Rds,
>>>>
>>>
>>>
>>
>
>
