Dear developers of OpenMPI,

I am trying to run our parallelized Fortran-95 code on a Linux cluster with 
OpenMPI-1.10.0 and the Intel-16.0.0 Fortran compiler.
In the code I use the module MPI ("use MPI" statements).

However, I am not able to compile the code because of compiler error messages 
like this one:

/src_SPRAY/mpi_wrapper.f90(2065): error #6285: There is no matching specific 
subroutine for this generic subroutine call.   [MPI_REDUCE]


The problem seems to me to be the following:

The interfaces in the module MPI for the MPI routines do not accept a send or 
receive buffer argument that is a scalar variable, an array element, or a 
constant (like MPI_IN_PLACE).

Example 1:
     This does not work (it gives the compiler error message:   error #6285: 
There is no matching specific subroutine for this generic subroutine call):

              ivar = 123    ! <-- ivar is an integer variable, not an array
              call MPI_BCAST( ivar, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr_mpi )   ! <-- this should work, but is not accepted by the compiler

     Only this cumbersome workaround works:

              ivar = 123
              allocate( iarr(1) )
              iarr(1) = ivar
              call MPI_BCAST( iarr, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr_mpi )   ! <-- this workaround works
              ivar = iarr(1)
              deallocate( iarr )
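
     For completeness, a minimal self-contained reproducer for the 
scalar-buffer case could look like this (a sketch only; the program name 
and variable names are merely illustrative, not taken from our code):

              program bcast_scalar_repro
                use MPI
                implicit none
                integer :: ivar, ierr_mpi
                call MPI_INIT( ierr_mpi )
                ivar = 123    ! scalar integer variable used as the buffer
                ! rejected with error #6285 by OpenMPI-1.10.0 + Intel-16.0.0
                call MPI_BCAST( ivar, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr_mpi )
                call MPI_FINALIZE( ierr_mpi )
              end program bcast_scalar_repro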

Example 2:
     Any call of an MPI routine with MPI_IN_PLACE does not work, as in this 
coding:

      if(lmaster) then
        ! <--- this should work, but is not accepted by the compiler:
        call MPI_REDUCE( MPI_IN_PLACE, rbuffarr, nelem, MPI_REAL8, MPI_MAX &
                        ,0_INT4, MPI_COMM_WORLD, ierr_mpi )
      else  ! slaves
        call MPI_REDUCE( rbuffarr, rdummyarr, nelem, MPI_REAL8, MPI_MAX &
                        ,0_INT4, MPI_COMM_WORLD, ierr_mpi )
      endif

    This results in this compiler error message:

      /src_SPRAY/mpi_wrapper.f90(2122): error #6285: There is no matching 
specific subroutine for this generic subroutine call.   [MPI_REDUCE]
            call MPI_REDUCE( MPI_IN_PLACE, rbuffarr, nelem, MPI_REAL8, MPI_MAX &
-------------^
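
     A corresponding self-contained reproducer for the MPI_IN_PLACE case 
might look like this (again only a sketch; the program name, the value of 
nelem, and the plain default-integer root are illustrative and not taken 
from our code):

              program reduce_inplace_repro
                use MPI
                implicit none
                integer, parameter :: nelem = 4
                integer :: myrank, ierr_mpi
                real(8) :: rbuffarr(nelem), rdummyarr(nelem)
                call MPI_INIT( ierr_mpi )
                call MPI_COMM_RANK( MPI_COMM_WORLD, myrank, ierr_mpi )
                rbuffarr = real( myrank, 8 )
                if( myrank == 0 ) then
                  ! MPI_IN_PLACE as the send buffer on the root:
                  ! rejected with error #6285 by OpenMPI-1.10.0 + Intel-16.0.0
                  call MPI_REDUCE( MPI_IN_PLACE, rbuffarr, nelem, MPI_REAL8, MPI_MAX &
                                  ,0, MPI_COMM_WORLD, ierr_mpi )
                else
                  call MPI_REDUCE( rbuffarr, rdummyarr, nelem, MPI_REAL8, MPI_MAX &
                                  ,0, MPI_COMM_WORLD, ierr_mpi )
                endif
                call MPI_FINALIZE( ierr_mpi )
              end program reduce_inplace_repro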


In our code I observed the bug with MPI_BCAST, MPI_REDUCE, and MPI_ALLREDUCE,
but other MPI routines are probably affected in the same way; the MPI_ALLREDUCE
pattern is sketched below.
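
     The failing MPI_ALLREDUCE pattern is analogous (a sketch only; the 
array, count, and error-flag names are merely illustrative):

              ! MPI_IN_PLACE passed as the send buffer on all ranks:
              call MPI_ALLREDUCE( MPI_IN_PLACE, rbuffarr, nelem, MPI_REAL8, MPI_SUM &
                                 ,MPI_COMM_WORLD, ierr_mpi )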

This bug occurred for:                      OpenMPI-1.10.0  with Intel-16.0.0
In contrast, this bug did NOT occur for:    OpenMPI-1.8.8   with Intel-16.0.0
                                            OpenMPI-1.8.8   with Intel-15.0.3
                                            OpenMPI-1.10.0  with gfortran-5.2.0

Greetings
Michael Rachner
