Diego,

Try calling allreduce with count=1
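
With MPI_2DOUBLE_PRECISION each element is already a (value, rank) pair
of two doubles, so count=2 asks the reduction to read and write two
pairs, i.e. four doubles. Your EFFMAX/EFFMAXW buffers only hold two
doubles each, so the extra pair is written past the end of EFFMAXW and
clobbers whatever sits next to it in memory - in your case nPart.

A minimal sketch of the corrected call, assuming EFFMAX and EFFMAXW are
declared as DOUBLE PRECISION arrays of size 2 as in your snippet:

  ! EFFMAX(1) = local maximum, EFFMAX(2) = this rank stored as a double
  CALL MPI_ALLREDUCE( EFFMAX, EFFMAXW, 1, MPI_2DOUBLE_PRECISION, &
                      MPI_MAXLOC, MPI_MASTER_COMM, MPImaster%iErr )

If you ever need to reduce two independent (value, rank) pairs at once,
keep count=2 but size the buffers to hold four doubles.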

Cheers,

Gilles

On Wednesday, August 22, 2018, Diego Avesani <diego.aves...@gmail.com>
wrote:

> Dear all,
>
> I am going to restart the discussion about MPI_MAXLOC. We had one a
> couple of weeks ago with George, Ray, Nathan, Jeff S., and Gus.
>
> This is because I have a problem. I have two groups and two communicators.
> The first one takes care of computing the maximum value and of finding
> which processor it belongs to:
>
> nPart = 100
>
> IF(MPI_COMM_NULL .NE. MPI_MASTER_COMM)THEN
>
> CALL MPI_ALLREDUCE( EFFMAX, EFFMAXW, 2, MPI_2DOUBLE_PRECISION, MPI_MAXLOC,
> MPI_MASTER_COMM,MPImaster%iErr )
> whosend = INT(EFFMAXW(2))
> gpeff   = EFFMAXW(1)
> CALL MPI_BCAST(whosend, 1, MPI_INTEGER, whosend, MPI_MASTER_COMM, MPImaster%iErr)
>
> ENDIF
>
> If I run this, the program sets one variable to zero, specifically
> nPart.
>
> If I print:
>
>      IF(MPI_COMM_NULL .NE. MPI_MASTER_COMM)THEN
>           WRITE(*,*) MPImaster%rank,nPart
>      ELSE
>           WRITE(*,*) MPIlocal%rank,nPart
>      ENDIF
>
> I get:
>
> 1 2
> 1 2
> 3 2
> 3 2
> 2 2
> 2 2
> 1 2
> 1 2
> 3 2
> 3 2
> 2 2
> 2 2
>
>
> 1 0
> 1 0
> 0 0
> 0 0
>
> This looks like some typical memory corruption problem.
>
> What do you think?
>
> Thanks for any kind of help.
>
> Diego
>
>