Dear Ralph, dear all,

I do not know.
I have isolated the issue. It seems that I have a problem with these calls:
  CALL MPI_COMM_SPLIT(MPI_COMM_WORLD, colorl, MPIworld%rank, &
                      MPI_LOCAL_COMM, MPIworld%iErr)
  CALL MPI_COMM_RANK(MPI_LOCAL_COMM, MPIlocal%rank, MPIlocal%iErr)
  CALL MPI_COMM_SIZE(MPI_LOCAL_COMM, MPIlocal%nCPU, MPIlocal%iErr)

Open MPI does not seem to set MPIlocal%rank properly.
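
To check this in isolation, I put together a minimal, self-contained sketch of the same pattern (the colorl value here is only illustrative; my real code computes it differently):

  PROGRAM split_test
    USE mpi
    IMPLICIT NONE
    INTEGER :: iErr, worldRank, worldSize
    INTEGER :: colorl, localComm, localRank, localSize

    CALL MPI_INIT(iErr)
    CALL MPI_COMM_RANK(MPI_COMM_WORLD, worldRank, iErr)
    CALL MPI_COMM_SIZE(MPI_COMM_WORLD, worldSize, iErr)

    ! Illustrative color: split the world into two halves
    colorl = worldRank / MAX(worldSize / 2, 1)

    ! Every rank of MPI_COMM_WORLD must make this call
    CALL MPI_COMM_SPLIT(MPI_COMM_WORLD, colorl, worldRank, localComm, iErr)
    CALL MPI_COMM_RANK(localComm, localRank, iErr)
    CALL MPI_COMM_SIZE(localComm, localSize, iErr)

    PRINT *, 'world rank', worldRank, '-> local rank', localRank, 'of', localSize

    CALL MPI_COMM_FREE(localComm, iErr)
    CALL MPI_FINALIZE(iErr)
  END PROGRAM split_test

When I run it with mpirun -np 4 --oversubscribe, each half should report local ranks 0 and 1.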

What could it be? A bug?

Thanks again,

Diego


On 3 August 2018 at 19:47, Ralph H Castain <r...@open-mpi.org> wrote:

> Those two command lines look exactly the same to me - what am I missing?
>
>
> On Aug 3, 2018, at 10:23 AM, Diego Avesani <diego.aves...@gmail.com>
> wrote:
>
> Dear all,
>
> I am experiencing a strange error.
>
> In my code I use three communicators:
> MPI_COMM_WORLD
> MPI_MASTERS_COMM
> LOCAL_COMM
>
> which share some of the same ranks.
>
> When I run my code as
>  mpirun -np 4 --oversubscribe ./MPIHyperStrem
>
> I have no problem, while when I run it as
>
>  mpirun -np 4 --oversubscribe ./MPIHyperStrem
>
> sometimes it crashes and sometimes it does not.
>
> It seems that it is all linked to this call:
>
>   CALL MPI_REDUCE(QTS(tstep,:), QTS(tstep,:), nNode, MPI_DOUBLE_PRECISION, &
>                   MPI_SUM, 0, MPI_LOCAL_COMM, iErr)
>
> which performs the reduction within the local communicator.
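>
> One thing I am not sure about: this call passes QTS(tstep,:) as both the
> send and the receive buffer, and as far as I understand the MPI standard
> does not allow them to alias at the root. An in-place variant (a sketch,
> using the same names as above) would be:
>
>   IF (MPIlocal%rank == 0) THEN
>      ! At the root, MPI_IN_PLACE makes QTS(tstep,:) both input and result
>      CALL MPI_REDUCE(MPI_IN_PLACE, QTS(tstep,:), nNode, MPI_DOUBLE_PRECISION, &
>                      MPI_SUM, 0, MPI_LOCAL_COMM, iErr)
>   ELSE
>      ! On the other ranks the receive buffer is ignored
>      CALL MPI_REDUCE(QTS(tstep,:), QTS(tstep,:), nNode, MPI_DOUBLE_PRECISION, &
>                      MPI_SUM, 0, MPI_LOCAL_COMM, iErr)
>   END IF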
>
> What do you think? Can you please suggest some debugging tests?
> Is the problem related to the local communicator?
>
> Thanks
>
>
>
> Diego
>
_______________________________________________
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users
