Dear all,
thanks a lot.

Diego



On Wed, 29 Aug 2018 at 00:13, Nathan Hjelm via users <
users@lists.open-mpi.org> wrote:

>
> Yup. That is the case for all composed datatypes, which is what the tuple
> types are: predefined composed datatypes.
>
> -Nathan
>
> On Aug 28, 2018, at 02:35 PM, "Jeff Squyres (jsquyres) via users" <
> users@lists.open-mpi.org> wrote:
>
> I think Gilles is right: remember that datatypes like
> MPI_2DOUBLE_PRECISION are actually 2 values. So if you want to send 1 pair
> of double precision values with MPI_2DOUBLE_PRECISION, then your count is
> actually 1.
>
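> As a minimal Fortran sketch of that (the buffer and variable names
> local_value, my_rank, and comm are illustrative, not taken from Diego's code):
>
>    DOUBLE PRECISION :: invals(2), outvals(2)
>    INTEGER :: ierr
>
>    invals(1) = local_value     ! the value being maximized
>    invals(2) = DBLE(my_rank)   ! the owning rank, stored as a double
>
>    ! count = 1: one MPI_2DOUBLE_PRECISION pair, i.e. two doubles total
>    CALL MPI_ALLREDUCE(invals, outvals, 1, MPI_2DOUBLE_PRECISION, &
>                       MPI_MAXLOC, comm, ierr)
>
>    ! outvals(1) holds the global maximum, outvals(2) the rank that owns it
>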
>
> On Aug 22, 2018, at 8:02 AM, Gilles Gouaillardet <
> gilles.gouaillar...@gmail.com> wrote:
>
>
> Diego,
>
>
> Try calling allreduce with count=1
>
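> For instance, the MPI_ALLREDUCE call quoted below would then read (only the
> count argument changes):
>
>    CALL MPI_ALLREDUCE( EFFMAX, EFFMAXW, 1, MPI_2DOUBLE_PRECISION, MPI_MAXLOC, &
>                        MPI_MASTER_COMM, MPImaster%iErr )
>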
>
> Cheers,
>
>
> Gilles
>
>
> On Wednesday, August 22, 2018, Diego Avesani <diego.aves...@gmail.com>
> wrote:
>
> Dear all,
>
>
> I would like to restart the discussion about MPI_MAXLOC. We had one a
> couple of weeks ago with George, Ray, Nathan, Jeff S., Jeff S., and Gus.
>
>
> This is because I have a problem. I have two groups and two communicators.
>
> The first one takes care of computing the maximum value and which
> processor it belongs to:
>
>
> nPart = 100
>
> IF(MPI_COMM_NULL .NE. MPI_MASTER_COMM)THEN
>
>    CALL MPI_ALLREDUCE( EFFMAX, EFFMAXW, 2, MPI_2DOUBLE_PRECISION, MPI_MAXLOC, &
>                        MPI_MASTER_COMM, MPImaster%iErr )
>
>    whosend = INT(EFFMAXW(2))
>    gpeff   = EFFMAXW(1)
>
>    CALL MPI_BCAST( whosend, 1, MPI_INTEGER, whosend, MPI_MASTER_COMM, MPImaster%iErr )
>
> ENDIF
>
>
> If I run this, the program sets one variable, specifically nPart, to zero.
>
>
> If I print:
>
>
> IF(MPI_COMM_NULL .NE. MPI_MASTER_COMM)THEN
>    WRITE(*,*) MPImaster%rank, nPart
> ELSE
>    WRITE(*,*) MPIlocal%rank, nPart
> ENDIF
>
>
> I get:
>
>
> 1 2
> 1 2
> 3 2
> 3 2
> 2 2
> 2 2
> 1 2
> 1 2
> 3 2
> 3 2
> 2 2
> 2 2
>
> 1 0
> 1 0
> 0 0
> 0 0
>
>
> This looks like a typical memory allocation problem.
>
>
> What do you think?
>
>
> Thanks for any kind of help.
>
>
>
>
>
> Diego
>
>
>
>
> --
> Jeff Squyres
> jsquy...@cisco.com
>
_______________________________________________
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users
