Michael,

I remember seeing similar reports.

Could you give the latest v1.10.1 a try?
And if that still does not work, can you upgrade the icc suite and give it
another try?
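
Note that if you upgrade the compiler, Open MPI itself also has to be rebuilt
with it, since the Fortran modules are compiler specific. Roughly something like
this (the install prefix is just an example):

    ./configure CC=icc CXX=icpc FC=ifort --prefix=$HOME/openmpi-1.10.1-intel16
    make install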

I cannot remember whether this is an ifort bug or an issue with the way ompi uses Fortran...

Btw, is there any reason why you do not use mpi_f08?
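With mpi_f08 the buffer arguments should accept scalars and MPI_IN_PLACE
directly. A rough sketch of your MPI_BCAST example with mpi_f08 (variable names
taken from your mail, the rest is just an illustration):

    program bcast_f08
       use mpi_f08
       implicit none
       integer :: ivar, ierr_mpi
       call MPI_Init(ierr_mpi)
       ivar = 123
       ! the buffer dummy accepts a scalar here, so no 1-element array workaround
       call MPI_Bcast(ivar, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr_mpi)
       call MPI_Finalize(ierr_mpi)
    end program bcast_f08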

HTH

Gilles

michael.rach...@dlr.de wrote:
>
>Dear developers of OpenMPI,
>
> 
>
>I am trying to run our parallelized Fortran 95 code on a Linux cluster with 
>OpenMPI-1.10.0 and the Intel-16.0.0 Fortran compiler.
>
>In the code I use the module MPI (“use MPI” statements).
>
> 
>
>However, I am not able to compile the code because of compiler error messages 
>like this:
>
> 
>
>/src_SPRAY/mpi_wrapper.f90(2065): error #6285: There is no matching specific 
>subroutine for this generic subroutine call.   [MPI_REDUCE]
>
> 
>
> 
>
>The problem seems to me to be the following:
>
> 
>
>The interfaces in the module MPI for the MPI routines do not accept a send or 
>receive buffer that is actually a scalar variable, an array element, or a 
>constant (like MPI_IN_PLACE).
>
> 
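>     In isolation, the kind of generic resolution failure meant here looks like 
>     the following sketch (the module and routine names are made up for 
>     illustration only, this is not OpenMPI code):
>
>          module demo_mpi
>             implicit none
>             interface DEMO_BCAST                      ! stands in for MPI_BCAST
>                module procedure demo_bcast_int_array
>             end interface
>          contains
>             subroutine demo_bcast_int_array( buf, n )
>                integer, intent(inout) :: buf(*)       ! only an array dummy is provided
>                integer, intent(in)    :: n
>             end subroutine
>          end module
>
>          program demo
>             use demo_mpi
>             implicit none
>             integer :: ivar = 123, iarr(1)
>           ! call DEMO_BCAST( ivar, 1 )   ! <-- rejected: a scalar matches no specific
>             call DEMO_BCAST( iarr, 1 )   ! <-- accepted: the actual argument is an array
>          end program
>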
>
>Example 1:
>
>     This does not work (gives the compiler error message:   error #6285: 
>There is no matching specific subroutine for this generic subroutine call  )
>
>             ivar=123    ! <-- ivar is an integer variable, not an array
>
>          call MPI_BCAST( ivar, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr_mpi )   
>   ! <-- this should work, but is not accepted by the compiler
>
> 
>
>      Only this cumbersome workaround works:
>
>              ivar=123
>
>                allocate( iarr(1) )
>
>                iarr(1) = ivar
>
>         call MPI_BCAST( iarr, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr_mpi )   
> ! <-- this workaround works
>
>                ivar = iarr(1) 
>
>                deallocate( iarr )
>
> 
>
>Example 2:
>
>     Any call of an MPI routine with MPI_IN_PLACE fails as well, for example 
>this code:
>
> 
>
>      if(lmaster) then
>
>        call MPI_REDUCE( MPI_IN_PLACE, rbuffarr, nelem, MPI_REAL8, MPI_MAX &   
> ! <-- this should work, but is not accepted by the compiler
>
>                                         ,0_INT4, MPI_COMM_WORLD, ierr_mpi )
>
>      else  ! slaves
>
>        call MPI_REDUCE( rbuffarr, rdummyarr, nelem, MPI_REAL8, MPI_MAX &
>
>                        ,0_INT4, MPI_COMM_WORLD, ierr_mpi )
>
>      endif
>
>     
>
>    This results in this compiler error message:
>
> 
>
>      /src_SPRAY/mpi_wrapper.f90(2122): error #6285: There is no matching 
>specific subroutine for this generic subroutine call.   [MPI_REDUCE]
>
>            call MPI_REDUCE( MPI_IN_PLACE, rbuffarr, nelem, MPI_REAL8, MPI_MAX 
>&
>
>-------------^
>
> 
>
> 
>
>In our code I observed the bug with MPI_BCAST, MPI_REDUCE, and MPI_ALLREDUCE, 
>but other MPI routines are probably affected in the same way.
>
> 
>
>This bug occurred for:                     OpenMPI-1.10.0  with Intel-16.0.0
>
>In contrast, this bug did NOT occur for:   OpenMPI-1.8.8    with Intel-16.0.0
>                                           OpenMPI-1.8.8    with Intel-15.0.3
>                                           OpenMPI-1.10.0  with gfortran-5.2.0
>
> 
>
>Greetings
>
>Michael Rachner
>
