Michael --

You're right again.  Thanks for keeping us honest!

We clearly did not think through all the issues for the "large" F90
interface; I've opened ticket #55 to track this one.  I'm inclined to
take the same approach as for the other issues you discovered --
disable "large" for v1.1 and push the fixes to v1.2.

https://svn.open-mpi.org/trac/ompi/ticket/55

> -----Original Message-----
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf Of Michael Kluskens
> Sent: Tuesday, May 30, 2006 3:40 PM
> To: Open MPI Users
> Subject: [OMPI users] MPI_GATHER: missing f90 interfaces for mixed dimensions
> 
> Looking at limitations of the following:
> 
>    --with-mpi-f90-size=SIZE
>                            specify the types of functions in the
>                            Fortran 90 MPI module, where size is one
>                            of: trivial (MPI-2 F90-specific functions
>                            only), small (trivial + all MPI functions
>                            without choice buffers), medium (small +
>                            all MPI functions with one choice buffer),
>                            large (medium + all MPI functions with 2
>                            choice buffers, but only when both buffers
>                            are the same type)
> 
> I'm not sure what "same type" was intended to mean here.  If it means
> just the same type, that's reasonable; but if it means the same type
> and dimension (which is how it is implemented), then I can't see how
> any generic installation, i.e. one with more than one user, could use
> --with-mpi-f90-size=large.  In fact, I found one case where a bunch
> of the generated interfaces are a waste of space and, as far as I can
> tell, a really bad idea.
> 
> ------------------------------------------------------------------------
> subroutine MPI_Gather0DI4(sendbuf, sendcount, sendtype, recvbuf, recvcount, &
>          recvtype, root, comm, ierr)
>    include 'mpif-common.h'
>    integer*4, intent(in) :: sendbuf
>    integer, intent(in) :: sendcount
>    integer, intent(in) :: sendtype
>    integer*4, intent(out) :: recvbuf
>    integer, intent(in) :: recvcount
>    integer, intent(in) :: recvtype
>    integer, intent(in) :: root
>    integer, intent(in) :: comm
>    integer, intent(out) :: ierr
> end subroutine MPI_Gather0DI4
> 
> Think about it: all processes send data back to root, so if each one
> sends a single integer, where do the second, third, fourth, etc.
> integers go?
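> 
> For example (a sketch; assume four processes and a valid communicator):
> 
>    integer :: nn, nno, ierr
>    nn = 42
>    ! matches MPI_Gather0DI4 above, but at root the values gathered
>    ! from ranks 1-3 would overrun the scalar nno
>    call MPI_GATHER(nn, 1, MPI_INTEGER, nno, 1, MPI_INTEGER, 0, &
>         MPI_COMM_WORLD, ierr)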
> ------------------------------------------------------------------------
> 
> The interfaces for MPI_GATHER do not include the possibility that
> sendbuf is an integer while recvbuf is an integer array.  For
> example, the following does not exist but is legal, or should be
> legal (and should at the very least replace the interface above):
> ------------------------------------------------------------------------
> subroutine MPI_Gather01DI4(sendbuf, sendcount, sendtype, recvbuf, recvcount, &
>          recvtype, root, comm, ierr)
>    include 'mpif-common.h'
>    integer*4, intent(in) :: sendbuf
>    integer, intent(in) :: sendcount
>    integer, intent(in) :: sendtype
>    integer*4, dimension(:), intent(out) :: recvbuf
>    integer, intent(in) :: recvcount
>    integer, intent(in) :: recvtype
>    integer, intent(in) :: root
>    integer, intent(in) :: comm
>    integer, intent(out) :: ierr
> end subroutine MPI_Gather01DI4
> ------------------------------------------------------------------------
> 
> Also, consider that there may be no reason to restrict sendbuf and
> recvbuf to the same number of dimensions, but it is reasonable to
> expect sendbuf to have the same number of dimensions as recvbuf or
> fewer (though both being scalars seems unreasonable).  This does
> complicate the issue from an order (N+1) problem to an order
> (N+1)*(N+2)/2 problem, where N = 4 unless otherwise restricted, but
> it should be doable, and certain functions should have the 0,0 case
> eliminated.
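> 
> (For example, with N = 4 that is (N+1)*(N+2)/2 = 15 rank pairs per
> type, or 14 once the 0,0 case is dropped.)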
> 
> Testing Open MPI 1.2a1r10111 (g95 on OS X 10.4.6), configured with
> "./configure F77=g95 FC=g95 LDFLAGS=-lSystemStubs
> --with-mpi-f90-size=large --enable-static"
> 
> ------------
> call MPI_GATHER(nn,1,MPI_INTEGER,nno,1,MPI_INTEGER,0,allmpi,ier)
>      1
> Error: Generic subroutine 'mpi_gather' at (1) is not consistent with
> a specific subroutine interface
> ----------
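> 
> (This is exactly the scalar-sendbuf case above: nn is a scalar and
> nno, presumably, an array, so no specific interface matches.)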
> 
> I'm doing my development on four different machines, each with a
> different compiler and a different MPI library.  One of them (not
> Open MPI) spotted that I had forgotten ierr, so it was definitely
> checking the interfaces, yet it handled this mixed-dimension case
> (and quickly too).  In my Fortran 90 experience I'm not aware of a
> better way to handle these generic interfaces, but I have not studied
> the issue closely enough.
> 
> Below is my fix to the F90 generating scripts for MPI_Gather (I also
> switched to --with-f90-max-array-dim=2).  It might be acceptable to
> reduce the combinations to send ranks equal to or one less than the
> receive rank (00, 01, 11, 12, 22), but I pushed the limits of my
> shell scripting abilities.
> 
> Michael
> 
> ---------- mpi-f90-interfaces.h.sh
> #------------------------------------------------------------------------
> 
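> # Emit one specific interface.  Args: $1=procedure, $2=send rank,
> # $3=recv rank, $4=type tag (e.g. I4), $5=sendbuf type, $6=recvbuf type.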
> output_120() {
>      if test "$output" = "0"; then
>          return 0
>      fi
>      procedure=$1
>      rank=$2
>      rank2=$3
>      type=$5
>      type2=$6
>      proc="$1$2$3D$4"
>      cat <<EOF
> 
> subroutine ${proc}(sendbuf, sendcount, sendtype, recvbuf, recvcount, &
>          recvtype, root, comm, ierr)
>    include 'mpif-common.h'
>    ${type}, intent(in) :: sendbuf
>    integer, intent(in) :: sendcount
>    integer, intent(in) :: sendtype
>    ${type2}, intent(out) :: recvbuf
>    integer, intent(in) :: recvcount
>    integer, intent(in) :: recvtype
>    integer, intent(in) :: root
>    integer, intent(in) :: comm
>    integer, intent(out) :: ierr
> end subroutine ${proc}
> 
> EOF
> }
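> 
> For instance (illustrative only), "output_120 MPI_Gather 0 1 I4
> 'integer*4' 'integer*4, dimension(:)'" emits the MPI_Gather01DI4
> interface shown earlier.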
> 
> start MPI_Gather large
> 
> for rank in $allranks
> do
>    case "$rank" in  0)  dim=''  ;  esac
>    case "$rank" in  1)  dim=', dimension(:)'  ;  esac
>    case "$rank" in  2)  dim=', dimension(:,:)'  ;  esac
>    case "$rank" in  3)  dim=', dimension(:,:,:)'  ;  esac
>    case "$rank" in  4)  dim=', dimension(:,:,:,:)'  ;  esac
>    case "$rank" in  5)  dim=', dimension(:,:,:,:,:)'  ;  esac
>    case "$rank" in  6)  dim=', dimension(:,:,:,:,:,:)'  ;  esac
>    case "$rank" in  7)  dim=', dimension(:,:,:,:,:,:,:)'  ;  esac
> 
>    for rank2 in $allranks
>    do
>      case "$rank2" in  0)  dim2=''  ;  esac
>      case "$rank2" in  1)  dim2=', dimension(:)'  ;  esac
>      case "$rank2" in  2)  dim2=', dimension(:,:)'  ;  esac
>      case "$rank2" in  3)  dim2=', dimension(:,:,:)'  ;  esac
>      case "$rank2" in  4)  dim2=', dimension(:,:,:,:)'  ;  esac
>      case "$rank2" in  5)  dim2=', dimension(:,:,:,:,:)'  ;  esac
>      case "$rank2" in  6)  dim2=', dimension(:,:,:,:,:,:)'  ;  esac
>      case "$rank2" in  7)  dim2=', dimension(:,:,:,:,:,:,:)'  ;  esac
> 
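>      # never generate a scalar recvbuf; require recv rank >= send rank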
>      if [ ${rank2} != "0" ] && [ ${rank2} -ge ${rank} ]; then
> 
>      output_120 MPI_Gather ${rank} ${rank2} CH "character${dim}" "character${dim2}"
>      output_120 MPI_Gather ${rank} ${rank2} L "logical${dim}" "logical${dim2}"
>      for kind in $ikinds
>      do
>        output_120 MPI_Gather ${rank} ${rank2} I${kind} "integer*${kind}${dim}" "integer*${kind}${dim2}"
>      done
>      for kind in $rkinds
>      do
>        output_120 MPI_Gather ${rank} ${rank2} R${kind} "real*${kind}${dim}" "real*${kind}${dim2}"
>      done
>      for kind in $ckinds
>      do
>        output_120 MPI_Gather ${rank} ${rank2} C${kind} "complex*${kind}${dim}" "complex*${kind}${dim2}"
>      done
> 
>      fi
>    done
> done
> end MPI_Gather
> ----------
> ---------- mpi_gather_f90.f90.sh
> output() {
>      procedure=$1
>      rank=$2
>      rank2=$3
>      type=$5
>      type2=$6
>      proc="$1$2$3D$4"
>      cat <<EOF
> 
> subroutine ${proc}(sendbuf, sendcount, sendtype, recvbuf, recvcount, &
>          recvtype, root, comm, ierr)
>    include "mpif-common.h"
>    ${type}, intent(in) :: sendbuf
>    integer, intent(in) :: sendcount
>    integer, intent(in) :: sendtype
>    ${type2}, intent(out) :: recvbuf
>    integer, intent(in) :: recvcount
>    integer, intent(in) :: recvtype
>    integer, intent(in) :: root
>    integer, intent(in) :: comm
>    integer, intent(out) :: ierr
>    call ${procedure}(sendbuf, sendcount, sendtype, recvbuf, recvcount, &
>          recvtype, root, comm, ierr)
> end subroutine ${proc}
> 
> EOF
> }
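> 
> # same skeleton as output_120 above; the only difference is that the
> # emitted subroutine body forwards the call to ${procedure} itself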
> 
> for rank in $allranks
> do
>    case "$rank" in  0)  dim=''  ;  esac
>    case "$rank" in  1)  dim=', dimension(:)'  ;  esac
>    case "$rank" in  2)  dim=', dimension(:,:)'  ;  esac
>    case "$rank" in  3)  dim=', dimension(:,:,:)'  ;  esac
>    case "$rank" in  4)  dim=', dimension(:,:,:,:)'  ;  esac
>    case "$rank" in  5)  dim=', dimension(:,:,:,:,:)'  ;  esac
>    case "$rank" in  6)  dim=', dimension(:,:,:,:,:,:)'  ;  esac
>    case "$rank" in  7)  dim=', dimension(:,:,:,:,:,:,:)'  ;  esac
> 
>    for rank2 in $allranks
>    do
>      case "$rank2" in  0)  dim2=''  ;  esac
>      case "$rank2" in  1)  dim2=', dimension(:)'  ;  esac
>      case "$rank2" in  2)  dim2=', dimension(:,:)'  ;  esac
>      case "$rank2" in  3)  dim2=', dimension(:,:,:)'  ;  esac
>      case "$rank2" in  4)  dim2=', dimension(:,:,:,:)'  ;  esac
>      case "$rank2" in  5)  dim2=', dimension(:,:,:,:,:)'  ;  esac
>      case "$rank2" in  6)  dim2=', dimension(:,:,:,:,:,:)'  ;  esac
>      case "$rank2" in  7)  dim2=', dimension(:,:,:,:,:,:,:)'  ;  esac
> 
>      if [ ${rank2} != "0" ] && [ ${rank2} -ge ${rank} ]; then
> 
>        output MPI_Gather ${rank} ${rank2} CH "character${dim}" "character${dim2}"
>        output MPI_Gather ${rank} ${rank2} L "logical${dim}" "logical${dim2}"
>        for kind in $ikinds
>        do
>          output MPI_Gather ${rank} ${rank2} I${kind} "integer*${kind}${dim}" "integer*${kind}${dim2}"
>        done
>        for kind in $rkinds
>        do
>          output MPI_Gather ${rank} ${rank2} R${kind} "real*${kind}${dim}" "real*${kind}${dim2}"
>        done
>        for kind in $ckinds
>        do
>          output MPI_Gather ${rank} ${rank2} C${kind} "complex*${kind}${dim}" "complex*${kind}${dim2}"
>        done
> 
>      fi
>    done
> done
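> 
> (In both scripts, $allranks, $ikinds, $rkinds, $ckinds and the
> start/end helpers are assumed to come from the surrounding Open MPI
> generator framework.)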
> 