Dear OpenMPI experts,

I am having trouble compiling code with MPI_STARTALL using
OpenMPI 1.4.2 mpif90 built with gcc (4.1.2) and Intel ifort (10.1.017),
when the array of requests is multi-dimensional.

It gives me this error message:

**************************
fortcom: Error: mpiwrap_mod.F90, line 478: There is no matching specific subroutine for this generic subroutine call. [MPI_STARTALL]
    call MPI_STARTALL(nreq,req,ierr)
---------^
**************************

However, if I replace MPI_STARTALL with a loop that calls
MPI_START once per request, the compilation succeeds.
I wonder whether the serialization imposed by the loop
may have some performance impact,
or whether MPI_STARTALL just implements the same kind of loop internally.

Another workaround is to declare my array of requests
as a one-dimensional assumed-size array inside my subroutine.

The problem seems to be that MPI_STARTALL doesn't handle multi-dimensional arrays of requests.

I can live with either workaround above,
but is it supposed to be this way?

***

I poked around in the OpenMPI code in ompi/mpi/f90/scripts
and found that several OpenMPI Fortran90 subroutines
have code to handle arrays up to rank 7 (the Fortran90 maximum),
mostly for the send and receive buffers.

However, other subroutines and other array arguments, which can also
legitimately be multi-dimensional arrays, are not treated the same way.

In particular, there is only one (assumed-size) dimension for the
array of requests in MPI_STARTALL, for instance.
MPI_WAITALL is another example of this restriction,
but there are probably other examples,
most likely on those subroutines that take request and status arrays.
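
For what it's worth, here is my hand-written reconstruction of what the
generated interface for MPI_STARTALL seems to boil down to (this is my
sketch, not the literal generated code, and the real specific names differ):

*********************************
  interface MPI_Startall
    subroutine MPI_Startall(count, array_of_requests, ierror)
      integer, intent(in) :: count
      ! rank-1 only: a rank-2 actual argument can never match this
      integer, dimension(*), intent(inout) :: array_of_requests
      integer, intent(out) :: ierror
    end subroutine MPI_Startall
  end interface
*********************************

Since Fortran90 resolves generic calls by the type, kind, and rank of the
actual arguments, my rank-2 req(0:np-1,0:1) never matches this rank-1
dummy, which would explain the "no matching specific subroutine" message.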

I guess it would be nice if all OpenMPI
subroutines in the Fortran90 bindings accepted
arrays of rank up to 7 for all of their array dummy arguments,
assuming this doesn't violate the MPI standard, of course.
This would allow more flexibility when writing MPI programs
in Fortran90.
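
To make the suggestion concrete, here is a hypothetical sketch (names and
details made up by me, just mirroring what the scripts already do for the
send and receive buffers) of a generic with one specific per rank of the
request array:

*********************************
  interface MPI_Startall
    subroutine MPI_Startall_r1(count, array_of_requests, ierror)
      integer, intent(in) :: count
      integer, dimension(*), intent(inout) :: array_of_requests
      integer, intent(out) :: ierror
    end subroutine MPI_Startall_r1
    subroutine MPI_Startall_r2(count, array_of_requests, ierror)
      integer, intent(in) :: count
      ! assumed-size in the last dimension, so any rank-2 actual matches
      integer, dimension(1,*), intent(inout) :: array_of_requests
      integer, intent(out) :: ierror
    end subroutine MPI_Startall_r2
  end interface
*********************************

and so on up to rank 7; generic resolution would then pick the specific
whose rank matches the caller's array.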

***

More details:

This is the code that fails to compile (np is global in the module):

***********************************
  subroutine mpiwrap_startall(req)

    integer, dimension(0:np-1,0:1), intent(inout) :: req

    integer, parameter :: nreq = 2*np
    integer :: ierr

    call MPI_STARTALL(nreq,req,ierr)

  end subroutine mpiwrap_startall
********************************

This is code that compiles:

*****************************
  subroutine mpiwrap_startall(req)

    integer, dimension(0:np-1,0:1), intent(inout) :: req
    integer :: pp, ii
    integer :: ierr

    do ii=0,1
      do pp=0,np-1
        call MPI_START(req(pp,ii),ierr)
      enddo
    enddo

  end subroutine mpiwrap_startall
*********************************

This is yet another version that compiles:

*********************************
  subroutine mpiwrap_startall(req)

    ! dummy arguments

    integer, dimension(*), intent(inout) :: req

    integer, parameter :: nreq = 2*np
    integer :: ierr

    call MPI_STARTALL(nreq,req,ierr)

  end subroutine mpiwrap_startall

*********************************


Many thanks,
Gus Correa
