Dear George, Dear all,
I have just rewritten the code to make it clearer:
INTEGER :: colorl,colorglobal
INTEGER :: LOCAL_COMM,MASTER_COMM
!
!---
! create WORLD comm
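In full, a minimal, self-contained version of this pattern might look like the sketch below; the group size of 4 and the color values are illustrative assumptions, not my actual decomposition:

PROGRAM SPLIT_SKETCH
USE MPI
IMPLICIT NONE
INTEGER :: rank,localrank,ierror
INTEGER :: colorl,colorglobal
INTEGER :: LOCAL_COMM,MASTER_COMM
!
CALL MPI_INIT(ierror)
CALL MPI_COMM_RANK(MPI_COMM_WORLD,rank,ierror)
! colorl: split WORLD into groups of 4 consecutive ranks (size assumed)
colorl = rank/4
CALL MPI_COMM_SPLIT(MPI_COMM_WORLD,colorl,rank,LOCAL_COMM,ierror)
CALL MPI_COMM_RANK(LOCAL_COMM,localrank,ierror)
! colorglobal: only the local leaders (local rank 0) join MASTER_COMM;
! every other process passes MPI_UNDEFINED and gets MPI_COMM_NULL back
colorglobal = MPI_UNDEFINED
IF (localrank == 0) colorglobal = 0
CALL MPI_COMM_SPLIT(MPI_COMM_WORLD,colorglobal,rank,MASTER_COMM,ierror)
CALL MPI_FINALIZE(ierror)
END PROGRAM SPLIT_SKETCH

With the MPI_UNDEFINED color, only the leaders end up with a real MASTER_COMM, so only they may query it.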
I am using Open MPI 2.1.1 along with Intel Fortran 17 update 4 and I am
experiencing what I think is a memory leak with a job that uses 184 MPI
processes. The memory used per process appears to be increasing by about 1 to 2
percent per hour. My code uses mostly persistent sends and receives to e…
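(For reference, the persistent pattern I mean is roughly the following sketch; the buffer size, ranks, and step count are made-up values, not the real application:)

PROGRAM PERSISTENT_SKETCH
USE MPI
IMPLICIT NONE
INTEGER :: rank,ierror,step,req
INTEGER :: status(MPI_STATUS_SIZE)
REAL(8) :: buf(100)
!
! run with at least 2 processes
CALL MPI_INIT(ierror)
CALL MPI_COMM_RANK(MPI_COMM_WORLD,rank,ierror)
buf = 0.0D0
! set the persistent request up once ...
IF (rank == 0) THEN
  CALL MPI_SEND_INIT(buf,100,MPI_DOUBLE_PRECISION,1,0,MPI_COMM_WORLD,req,ierror)
ELSE IF (rank == 1) THEN
  CALL MPI_RECV_INIT(buf,100,MPI_DOUBLE_PRECISION,0,0,MPI_COMM_WORLD,req,ierror)
END IF
! ... then restart the same request every step, so no new request
! (and ideally no new memory) is created per iteration
IF (rank <= 1) THEN
  DO step = 1,1000
    CALL MPI_START(req,ierror)
    CALL MPI_WAIT(req,status,ierror)
  END DO
  CALL MPI_REQUEST_FREE(req,ierror)
END IF
CALL MPI_FINALIZE(ierror)
END PROGRAM PERSISTENT_SKETCH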
Dear George, Dear all,
here is the code:
PROGRAM TEST
USE MPI
IMPLICIT NONE
! compile with: mpif90 -r8 *.f90
!
INTEGER :: rank                 ! rank in MPI_COMM_WORLD
INTEGER :: subrank,leader_rank  ! ranks in the sub- and leader communicators
INTEGER :: nCPU                 ! number of processes in MPI_COMM_WORLD
INTEGER :: subnCPU              ! number of processes in the sub-communicator
INTEGER :: ierror
INTEGER :: tag
I guess the second comm_rank call is invalid on all non-leader processes,
as their LEADER_COMM communicator is MPI_COMM_NULL.
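Guarding the call should fix it; a sketch against the names in your TEST program (LEADER_COMM is assumed to come from your second MPI_Comm_split):

IF (LEADER_COMM /= MPI_COMM_NULL) THEN
  ! safe: this process is actually a member of LEADER_COMM
  CALL MPI_COMM_RANK(LEADER_COMM,leader_rank,ierror)
END IF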
george
On Fri, Jul 28, 2017 at 05:06 Diego Avesani wrote:
> Dear George, Dear all,
>
> thanks, thanks a lot. I will tell you everything.
> I will try also to implement your suggestion.
Dear George, Dear all,
thanks, thanks a lot. I will tell you everything.
I will try also to implement your suggestion.
Unfortunately, the program that I showed you is not working. I get
the following error:
[] *** An error occurred in MPI_Comm_rank
[] *** reported by process [643497985,7]