Diego,

Since this question is not specific to Open MPI, Stack Overflow (or a similar
forum) is a better place to ask.
Make sure you first read https://stackoverflow.com/help/mcve

Feel free to post a link to your question here.


Cheers,

Gilles

On Monday, August 13, 2018, Diego Avesani <diego.aves...@gmail.com> wrote:

> Dear Jeff, dear all,
>
> It's my fault.
>
> Can I send an attachment?
> Thanks.
>
> Diego
>
>
> On 13 August 2018 at 19:06, Jeff Squyres (jsquyres) <jsquy...@cisco.com>
> wrote:
>
>> On Aug 12, 2018, at 2:18 PM, Diego Avesani <diego.aves...@gmail.com>
>> wrote:
>> >
>> > Dear all, Dear Jeff,
>> > I have three communicators:
>> >
>> > the standard one:
>> > MPI_COMM_WORLD
>> >
>> > and other two:
>> > MPI_LOCAL_COMM
>> > MPI_MASTER_COMM
>> >
>> > a sort of two-level MPI.
>> >
>> > Suppose I have 8 threats: I use 4 threats to run the same problem with
>> > different values. Each group of 4 is a LOCAL_COMM.
>> > In addition, I have an MPI_MASTER_COMM to allow the masters of each group
>> > to communicate.
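>> >
>> > To be concrete, the setup follows the standard MPI_COMM_SPLIT pattern
>> > (a simplified sketch of what I mean, not my exact code; the communicator
>> > names are the ones above):
>> >
>> > PROGRAM two_level
>> >    USE mpi
>> >    IMPLICIT NONE
>> >    INTEGER :: ierr, world_rank, local_rank, color
>> >    INTEGER :: MPI_LOCAL_COMM, MPI_MASTER_COMM
>> >
>> >    CALL MPI_INIT(ierr)
>> >    CALL MPI_COMM_RANK(MPI_COMM_WORLD, world_rank, ierr)
>> >
>> >    ! 8 ranks: ranks 0-3 form group 0, ranks 4-7 form group 1
>> >    color = world_rank/4
>> >    CALL MPI_COMM_SPLIT(MPI_COMM_WORLD, color, world_rank, &
>> >                        MPI_LOCAL_COMM, ierr)
>> >    CALL MPI_COMM_RANK(MPI_LOCAL_COMM, local_rank, ierr)
>> >
>> >    ! only the master (local rank 0) of each group joins the master
>> >    ! communicator; the other ranks pass MPI_UNDEFINED and get
>> >    ! MPI_COMM_NULL back
>> >    IF (local_rank == 0) THEN
>> >       color = 0
>> >    ELSE
>> >       color = MPI_UNDEFINED
>> >    ENDIF
>> >    CALL MPI_COMM_SPLIT(MPI_COMM_WORLD, color, world_rank, &
>> >                        MPI_MASTER_COMM, ierr)
>> >
>> >    CALL MPI_FINALIZE(ierr)
>> > END PROGRAM two_level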
>>
>> I don't understand what you're trying to convey here, sorry.  Can you
>> draw it, perhaps?
>>
>> (I am assuming you mean "threads", not "threats")
>>
>> > This setup gives me some problems.
>> >
>> >
>> > For example, I have to exit a cycle according to a check:
>> >
>> > IF (counter.GE.npercstop*nParticles) THEN
>> >    flag2exit = 1
>> >    WRITE(*,*) '-Warning PSO has been exit'
>> >    EXIT pso_cycle
>> > ENDIF
>> >
>> > But this is difficult to do, since I have to exit only after all the
>> > threats inside a set have finished their tasks.
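>> >
>> > What I have in mind is something like an MPI_ALLREDUCE on the flag, so
>> > that the whole group leaves the cycle together (a sketch of the idea,
>> > to go inside pso_cycle in place of the check above, using the names
>> > from my code; I am not sure it is the right approach):
>> >
>> >    ! flag2exit_all is an extra INTEGER holding the reduced result
>> >    flag2exit = 0
>> >    IF (counter.GE.npercstop*nParticles) flag2exit = 1
>> >
>> >    ! with MPI_MIN the result is 1 only when ALL ranks of the group
>> >    ! have set their flag, so everybody exits at the same iteration;
>> >    ! every rank must keep reaching this call until then
>> >    CALL MPI_ALLREDUCE(flag2exit, flag2exit_all, 1, MPI_INTEGER, &
>> >                       MPI_MIN, MPI_LOCAL_COMM, ierr)
>> >    IF (flag2exit_all == 1) EXIT pso_cycle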
>> >
>> > Do you have some suggestions?
>> > Do you need other information?
>>
>> --
>> Jeff Squyres
>> jsquy...@cisco.com
>>
>>
>