Dear all,
"allreduce should be in MPI_COMM_WORLD"
I think you have found the problem.
However, in my original code, the counter information belongs only to the
master group.
Should I share that information with the slaves of each master?
Thanks again
Diego
On 20 August 2018 at 09:17, Gilles wrote:
Diego,
First, try using MPI_IN_PLACE when the send buffer and receive buffer are identical.
At first glance, the second allreduce should be in MPI_COMM_WORLD (with
counter=0 when master_comm is null),
or you have to add an extra broadcast in local_comm.
Cheers,
Gilles
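A minimal Fortran sketch of what Gilles describes, assuming counter is the integer being reduced and is_master marks the ranks that belong to MPI_MASTER_COMM (the variable names and the rank layout are illustrative, not taken from Diego's code):

  program counter_allreduce
    use mpi
    implicit none
    integer :: ierr, world_rank, counter
    logical :: is_master

    call MPI_INIT(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, world_rank, ierr)

    ! Illustrative assumption: every 4th world rank is a master.
    is_master = (mod(world_rank, 4) == 0)

    ! Masters contribute their partial counter, slaves contribute 0,
    ! so the sum over MPI_COMM_WORLD equals the sum over the masters
    ! and every rank (master or slave) receives the result.
    if (is_master) then
       counter = 10   ! placeholder for the real per-master counter
    else
       counter = 0
    end if

    ! MPI_IN_PLACE: counter acts as both send and receive buffer.
    call MPI_ALLREDUCE(MPI_IN_PLACE, counter, 1, MPI_INTEGER, MPI_SUM, &
                       MPI_COMM_WORLD, ierr)

    call MPI_FINALIZE(ierr)
  end program counter_allreduce

Because the allreduce runs over MPI_COMM_WORLD, the slaves already receive the summed counter, so no extra broadcast in local_comm is needed.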
On 8/20/2018 3:56 PM, Diego Avesani wrote:
Dear George, Dear Gilles, Dear Jeff, Dear all,
Thanks for all the suggestions.
The problem is that I do not want to call FINALIZE, but only to exit from a cycle.
This is my code:
I have:
a master_group;
each master sends only some values to its slaves;
the slaves perform something;
according to a counter, I decide whether to exit from the cycle (see the quoted check and the sketch below).
> On Aug 12, 2018, at 2:18 PM, Diego Avesani wrote:
> >
> > For example, I have to exit from a cycle, according to a check:
> >
> > IF(counter.GE.npercstop*nParticles)THEN
> >   flag2exit=1
> >   WRITE(*,*)
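A hedged sketch of one way to leave the cycle on every rank without calling MPI_FINALIZE: reduce the exit flag with MPI_MAX so all ranks take the same branch (npercstop, nParticles and the loop body are placeholders, not Diego's actual values):

  program exit_cycle
    use mpi
    implicit none
    integer :: ierr, counter, flag2exit, global_flag
    integer, parameter :: nParticles = 100
    real,    parameter :: npercstop  = 0.9

    call MPI_INIT(ierr)
    counter = 0
    do
       ! ... work that updates counter ...
       counter = counter + 1

       flag2exit = 0
       IF(counter.GE.npercstop*nParticles)THEN
          flag2exit = 1
       END IF

       ! Agree on the flag so every rank leaves the loop in the same iteration.
       call MPI_ALLREDUCE(flag2exit, global_flag, 1, MPI_INTEGER, MPI_MAX, &
                          MPI_COMM_WORLD, ierr)
       if (global_flag == 1) exit   ! exit the cycle; MPI_FINALIZE comes later
    end do
    call MPI_FINALIZE(ierr)
  end program exit_cycle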
Diego,
Since this question is not Open MPI specific, Stack Overflow (or a similar
forum) is a better place to ask.
Make sure you first read https://stackoverflow.com/help/mcve
Feel free to post us a link to your question.
Cheers,
Gilles
On Monday, August 13, 2018, Diego Avesani wrote:
> dear Jeff, dear all,
Dear Jeff, dear all,
It is my fault.
Can I send an attachment?
Thanks
Diego
On 13 August 2018 at 19:06, Jeff Squyres (jsquyres) wrote:
> On Aug 12, 2018, at 2:18 PM, Diego Avesani wrote:
> >
> > Dear all, Dear Jeff,
> > I have three communicators:
> >
> > the standard one:
> > MPI_COMM_WORLD
On Aug 12, 2018, at 2:18 PM, Diego Avesani wrote:
>
> Dear all, Dear Jeff,
> I have three communicators:
>
> the standard one:
> MPI_COMM_WORLD
>
> and other two:
> MPI_LOCAL_COMM
> MPI_MASTER_COMM
>
> a sort of two-level MPI.
>
> Suppose we have 8 threads,
> I use 4 threads to run the same problem with different values.
Dear all, Dear Jeff,
I have three communicators:
the standard one:
MPI_COMM_WORLD
and other two:
MPI_LOCAL_COMM
MPI_MASTER_COMM
a sort of two-level MPI.
Suppose we have 8 threads:
I use 4 threads to run the same problem with different values. These are
the LOCAL_COMMs.
In addition I have a MPI_MASTER_COMM.
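A small sketch of one way to build that two-level layout with MPI_COMM_SPLIT, assuming 8 ranks split into two local groups of 4, with local rank 0 of each group forming the master communicator (the split logic and the communicator names mirror the thread but are only an assumption about the real code):

  program two_level
    use mpi
    implicit none
    integer :: ierr, world_rank, local_rank, color
    integer :: MPI_LOCAL_COMM, MPI_MASTER_COMM

    call MPI_INIT(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, world_rank, ierr)

    ! Two local groups of 4 ranks each: ranks 0-3 and ranks 4-7.
    color = world_rank / 4
    call MPI_COMM_SPLIT(MPI_COMM_WORLD, color, world_rank, MPI_LOCAL_COMM, ierr)
    call MPI_COMM_RANK(MPI_LOCAL_COMM, local_rank, ierr)

    ! Local rank 0 of each group becomes a master; the other ranks pass
    ! MPI_UNDEFINED and get MPI_COMM_NULL back (the "master_comm is null"
    ! case mentioned earlier in the thread).
    if (local_rank == 0) then
       color = 0
    else
       color = MPI_UNDEFINED
    end if
    call MPI_COMM_SPLIT(MPI_COMM_WORLD, color, world_rank, MPI_MASTER_COMM, ierr)

    call MPI_FINALIZE(ierr)
  end program two_level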
On Aug 10, 2018, at 6:27 PM, Diego Avesani wrote:
>
> The question is:
> Is it possible to have a barrier for all CPUs even though they belong to
> different groups?
> If the answer is yes I will go in more details.
By "CPUs", I assume you mean "MPI processes", right? (i.e., not threads inside an MPI process)?
Dear Jeff,
you are right.
The question is:
Is it possible to have a barrier for all CPUs even though they belong to
different groups?
If the answer is yes, I will go into more detail.
Thanks a lot
Diego
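For reference, MPI_BARRIER on MPI_COMM_WORLD blocks until every rank in the job reaches it, regardless of which sub-communicators those ranks also belong to. A minimal sketch (creation of MPI_LOCAL_COMM and MPI_MASTER_COMM omitted):

  program world_barrier
    use mpi
    implicit none
    integer :: ierr

    call MPI_INIT(ierr)
    ! ... each rank may also belong to MPI_LOCAL_COMM / MPI_MASTER_COMM ...
    ! A barrier on MPI_COMM_WORLD still synchronizes all of the ranks.
    call MPI_BARRIER(MPI_COMM_WORLD, ierr)
    call MPI_FINALIZE(ierr)
  end program world_barrier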
On 10 August 2018 at 19:49, Jeff Squyres (jsquyres) via users <users@lists.open-mpi.org> wrote:
>
I'm not quite clear what the problem is that you're running into -- you just
said that there is "some problem with MPI_barrier".
What problem, exactly, is happening with your code? Be as precise and specific
as possible.
It's kinda hard to tell what is happening in the code snippet below because