The attached code is from the example on pages 629-630 (17.1.15 Fortran Derived Types) of MPI-3. It compiles cleanly with MPICH and with OMPI 1.6.5, but not with the latest OMPI. Arrays of rank higher than 4 would have a similar problem since they are not enumerated in the generic interfaces. Did someone decide that a necessarily-incomplete enumeration of types was "good enough" and that other users should use some other workaround?
$ ~/usr/ompi/bin/mpifort -c struct.f90
struct.f90:40.55:

  call MPI_SEND(foo, 1, newtype, dest, tag, comm, ierr)
                                                       1
Error: There is no specific subroutine for the generic 'mpi_send' at (1)
struct.f90:43.48:

  call MPI_GET_ADDRESS(fooarr(1), disp(1), ierr)
                                                1
Error: There is no specific subroutine for the generic 'mpi_get_address' at (1)
struct.f90:44.48:

  call MPI_GET_ADDRESS(fooarr(2), disp(2), ierr)
                                                1
Error: There is no specific subroutine for the generic 'mpi_get_address' at (1)
struct.f90:50.61:

  call MPI_SEND(fooarr, 5, newarrtype, dest, tag, comm, ierr)
                                                            1
Error: There is no specific subroutine for the generic 'mpi_send' at (1)

$ ~/usr/ompi/bin/ompi_info
                 Package: Open MPI jed@batura Distribution
                Open MPI: 1.9a1
  Open MPI repo revision: r29531M
   Open MPI release date: Oct 26, 2013
                Open RTE: 1.9a1
  Open RTE repo revision: r29531M
   Open RTE release date: Oct 26, 2013
                    OPAL: 1.9a1
      OPAL repo revision: r29531M
       OPAL release date: Oct 26, 2013
                 MPI API: 2.2
            Ident string: 1.9a1
                  Prefix: /home/jed/usr/ompi
 Configured architecture: x86_64-unknown-linux-gnu
          Configure host: batura
           Configured by: jed
           Configured on: Mon Jan  6 19:38:01 CST 2014
          Configure host: batura
                Built by: jed
                Built on: Mon Jan  6 19:49:41 CST 2014
              Built host: batura
              C bindings: yes
            C++ bindings: no
             Fort mpif.h: yes (all)
            Fort use mpi: yes (limited: overloading)
       Fort use mpi size: deprecated-ompi-info-value
        Fort use mpi_f08: no
 Fort mpi_f08 compliance: The mpi_f08 module was not built
  Fort mpi_f08 subarrays: no
           Java bindings: no
  Wrapper compiler rpath: runpath
              C compiler: gcc
     C compiler absolute: /usr/bin/gcc
  C compiler family name: GNU
      C compiler version: 4.8.2
            C++ compiler: g++
   C++ compiler absolute: /usr/bin/g++
           Fort compiler: /usr/bin/gfortran
       Fort compiler abs:
         Fort ignore TKR: no
   Fort 08 assumed shape: no
      Fort optional args: no
            Fort BIND(C): no
            Fort PRIVATE: no
           Fort ABSTRACT: no
       Fort ASYNCHRONOUS: no
          Fort PROCEDURE: no
 Fort f08 using wrappers: yes
             C profiling: yes
           C++ profiling: no
   Fort mpif.h profiling: yes
  Fort use mpi profiling: yes
   Fort use mpi_f08 prof: no
          C++ exceptions: no
          Thread support: posix (MPI_THREAD_MULTIPLE: no, OPAL support: yes, OMPI progress: no, ORTE progress: yes, Event lib: yes)
           Sparse Groups: no
  Internal debug support: yes
  MPI interface warnings: yes
     MPI parameter check: runtime
Memory profiling support: no
Memory debugging support: no
         libltdl support: yes
   Heterogeneous support: no
 mpirun default --prefix: no
         MPI I/O support: yes
       MPI_WTIME support: gettimeofday
     Symbol vis. support: yes
   Host topology support: yes
          MPI extensions:
   FT Checkpoint support: no (checkpoint thread: no)
   C/R Enabled Debugging: no
     VampirTrace support: yes
  MPI_MAX_PROCESSOR_NAME: 256
    MPI_MAX_ERROR_STRING: 256
     MPI_MAX_OBJECT_NAME: 64
        MPI_MAX_INFO_KEY: 36
        MPI_MAX_INFO_VAL: 256
       MPI_MAX_PORT_NAME: 1024
  MPI_MAX_DATAREP_STRING: 128
           MCA backtrace: execinfo (MCA v2.0, API v2.0, Component v1.9)
            MCA compress: bzip (MCA v2.0, API v2.0, Component v1.9)
            MCA compress: gzip (MCA v2.0, API v2.0, Component v1.9)
                 MCA crs: none (MCA v2.0, API v2.0, Component v1.9)
                  MCA db: hash (MCA v2.0, API v1.0, Component v1.9)
                  MCA db: print (MCA v2.0, API v1.0, Component v1.9)
               MCA event: libevent2021 (MCA v2.0, API v2.0, Component v1.9)
               MCA hwloc: external (MCA v2.0, API v2.0, Component v1.9)
                  MCA if: linux_ipv6 (MCA v2.0, API v2.0, Component v1.9)
                  MCA if: posix_ipv4 (MCA v2.0, API v2.0, Component v1.9)
         MCA installdirs: env (MCA v2.0, API v2.0, Component v1.9)
         MCA installdirs: config (MCA v2.0, API v2.0, Component v1.9)
          MCA memchecker: valgrind (MCA v2.0, API v2.0, Component v1.9)
              MCA memory: linux (MCA v2.0, API v2.0, Component v1.9)
               MCA pstat: linux (MCA v2.0, API v2.0, Component v1.9)
               MCA shmem: mmap (MCA v2.0, API v2.0, Component v1.9)
               MCA shmem: posix (MCA v2.0, API v2.0, Component v1.9)
               MCA shmem: sysv (MCA v2.0, API v2.0, Component v1.9)
               MCA timer: linux (MCA v2.0, API v2.0, Component v1.9)
                 MCA dfs: app (MCA v2.0, API v1.0, Component v1.9)
                 MCA dfs: orted (MCA v2.0, API v1.0, Component v1.9)
                 MCA dfs: test (MCA v2.0, API v1.0, Component v1.9)
              MCA errmgr: default_app (MCA v2.0, API v3.0, Component v1.9)
              MCA errmgr: default_hnp (MCA v2.0, API v3.0, Component v1.9)
              MCA errmgr: default_orted (MCA v2.0, API v3.0, Component v1.9)
              MCA errmgr: default_tool (MCA v2.0, API v3.0, Component v1.9)
                 MCA ess: env (MCA v2.0, API v3.0, Component v1.9)
                 MCA ess: hnp (MCA v2.0, API v3.0, Component v1.9)
                 MCA ess: singleton (MCA v2.0, API v3.0, Component v1.9)
                 MCA ess: slurm (MCA v2.0, API v3.0, Component v1.9)
                 MCA ess: tool (MCA v2.0, API v3.0, Component v1.9)
               MCA filem: raw (MCA v2.0, API v2.0, Component v1.9)
             MCA grpcomm: bad (MCA v2.0, API v2.0, Component v1.9)
                 MCA iof: hnp (MCA v2.0, API v2.0, Component v1.9)
                 MCA iof: mr_hnp (MCA v2.0, API v2.0, Component v1.9)
                 MCA iof: mr_orted (MCA v2.0, API v2.0, Component v1.9)
                 MCA iof: orted (MCA v2.0, API v2.0, Component v1.9)
                 MCA iof: tool (MCA v2.0, API v2.0, Component v1.9)
                MCA odls: default (MCA v2.0, API v2.0, Component v1.9)
                 MCA oob: tcp (MCA v2.0, API v2.0, Component v1.9)
                 MCA plm: rsh (MCA v2.0, API v2.0, Component v1.9)
                 MCA plm: slurm (MCA v2.0, API v2.0, Component v1.9)
                 MCA ras: loadleveler (MCA v2.0, API v2.0, Component v1.9)
                 MCA ras: simulator (MCA v2.0, API v2.0, Component v1.9)
                 MCA ras: slurm (MCA v2.0, API v2.0, Component v1.9)
               MCA rmaps: lama (MCA v2.0, API v2.0, Component v1.9)
               MCA rmaps: mindist (MCA v2.0, API v2.0, Component v1.9)
               MCA rmaps: ppr (MCA v2.0, API v2.0, Component v1.9)
               MCA rmaps: rank_file (MCA v2.0, API v2.0, Component v1.9)
               MCA rmaps: resilient (MCA v2.0, API v2.0, Component v1.9)
               MCA rmaps: round_robin (MCA v2.0, API v2.0, Component v1.9)
               MCA rmaps: seq (MCA v2.0, API v2.0, Component v1.9)
               MCA rmaps: staged (MCA v2.0, API v2.0, Component v1.9)
                 MCA rml: oob (MCA v2.0, API v2.0, Component v1.9)
              MCA routed: binomial (MCA v2.0, API v2.0, Component v1.9)
              MCA routed: debruijn (MCA v2.0, API v2.0, Component v1.9)
              MCA routed: direct (MCA v2.0, API v2.0, Component v1.9)
              MCA routed: radix (MCA v2.0, API v2.0, Component v1.9)
               MCA state: app (MCA v2.0, API v1.0, Component v1.9)
               MCA state: hnp (MCA v2.0, API v1.0, Component v1.9)
               MCA state: novm (MCA v2.0, API v1.0, Component v1.9)
               MCA state: orted (MCA v2.0, API v1.0, Component v1.9)
               MCA state: staged_hnp (MCA v2.0, API v1.0, Component v1.9)
               MCA state: staged_orted (MCA v2.0, API v1.0, Component v1.9)
               MCA state: tool (MCA v2.0, API v1.0, Component v1.9)
           MCA allocator: basic (MCA v2.0, API v2.0, Component v1.9)
           MCA allocator: bucket (MCA v2.0, API v2.0, Component v1.9)
                MCA bcol: basesmuma (MCA v2.0, API v2.0, Component v1.9)
                MCA bcol: ptpcoll (MCA v2.0, API v2.0, Component v1.9)
                 MCA bml: r2 (MCA v2.0, API v2.0, Component v1.9)
                 MCA btl: self (MCA v2.0, API v2.0, Component v1.9)
                 MCA btl: sm (MCA v2.0, API v2.0, Component v1.9)
                 MCA btl: tcp (MCA v2.0, API v2.0, Component v1.9)
                 MCA btl: vader (MCA v2.0, API v2.0, Component v1.9)
                MCA coll: basic (MCA v2.0, API v2.0, Component v1.9)
                MCA coll: hierarch (MCA v2.0, API v2.0, Component v1.9)
                MCA coll: inter (MCA v2.0, API v2.0, Component v1.9)
                MCA coll: libnbc (MCA v2.0, API v2.0, Component v1.9)
                MCA coll: ml (MCA v2.0, API v2.0, Component v1.9)
                MCA coll: self (MCA v2.0, API v2.0, Component v1.9)
                MCA coll: sm (MCA v2.0, API v2.0, Component v1.9)
                MCA coll: tuned (MCA v2.0, API v2.0, Component v1.9)
                 MCA dpm: orte (MCA v2.0, API v2.0, Component v1.9)
                MCA fbtl: posix (MCA v2.0, API v2.0, Component v1.9)
               MCA fcoll: dynamic (MCA v2.0, API v2.0, Component v1.9)
               MCA fcoll: individual (MCA v2.0, API v2.0, Component v1.9)
               MCA fcoll: static (MCA v2.0, API v2.0, Component v1.9)
               MCA fcoll: two_phase (MCA v2.0, API v2.0, Component v1.9)
                  MCA fs: ufs (MCA v2.0, API v2.0, Component v1.9)
                  MCA io: ompio (MCA v2.0, API v2.0, Component v1.9)
                  MCA io: romio (MCA v2.0, API v2.0, Component v1.9)
               MCA mpool: grdma (MCA v2.0, API v2.0, Component v1.9)
               MCA mpool: sm (MCA v2.0, API v2.0, Component v1.9)
                 MCA osc: pt2pt (MCA v2.0, API v2.0, Component v1.9)
                 MCA osc: rdma (MCA v2.0, API v2.0, Component v1.9)
                 MCA pml: v (MCA v2.0, API v2.0, Component v1.9)
                 MCA pml: bfo (MCA v2.0, API v2.0, Component v1.9)
                 MCA pml: cm (MCA v2.0, API v2.0, Component v1.9)
                 MCA pml: ob1 (MCA v2.0, API v2.0, Component v1.9)
              MCA pubsub: orte (MCA v2.0, API v2.0, Component v1.9)
              MCA rcache: vma (MCA v2.0, API v2.0, Component v1.9)
                 MCA rte: orte (MCA v2.0, API v2.0, Component v1.9)
                MCA sbgp: basesmsocket (MCA v2.0, API v2.0, Component v1.9)
                MCA sbgp: basesmuma (MCA v2.0, API v2.0, Component v1.9)
                MCA sbgp: p2p (MCA v2.0, API v2.0, Component v1.9)
            MCA sharedfp: individual (MCA v2.0, API v2.0, Component v1.9)
            MCA sharedfp: lockedfile (MCA v2.0, API v2.0, Component v1.9)
            MCA sharedfp: sm (MCA v2.0, API v2.0, Component v1.9)
                MCA topo: basic (MCA v2.0, API v2.1, Component v1.9)
           MCA vprotocol: pessimist (MCA v2.0, API v2.0, Component v1.9)
subroutine foobar
  use mpi
  type, BIND(C) :: mytype
     integer :: i
     real :: x
     double precision :: d
     logical :: l
  end type mytype
  type(mytype) :: foo, fooarr(5)
  integer :: blocklen(4), type(4)
  integer(KIND=MPI_ADDRESS_KIND) :: disp(4), base, lb, extent

  call MPI_GET_ADDRESS(foo%i, disp(1), ierr)
  call MPI_GET_ADDRESS(foo%x, disp(2), ierr)
  call MPI_GET_ADDRESS(foo%d, disp(3), ierr)
  call MPI_GET_ADDRESS(foo%l, disp(4), ierr)
  base = disp(1)
  disp(1) = disp(1) - base
  disp(2) = disp(2) - base
  disp(3) = disp(3) - base
  disp(4) = disp(4) - base
  blocklen(1) = 1
  blocklen(2) = 1
  blocklen(3) = 1
  blocklen(4) = 1
  type(1) = MPI_INTEGER
  type(2) = MPI_REAL
  type(3) = MPI_DOUBLE_PRECISION
  type(4) = MPI_LOGICAL
  call MPI_TYPE_CREATE_STRUCT(4, blocklen, disp, type, newtype, ierr)
  call MPI_TYPE_COMMIT(newtype, ierr)

  ! call MPI_SEND(foo%i, 1, newtype, dest, tag, comm, ierr)
  ! or
  call MPI_SEND(foo, 1, newtype, dest, tag, comm, ierr)
  ! expects that base == address(foo%i) == address(foo)

  call MPI_GET_ADDRESS(fooarr(1), disp(1), ierr)
  call MPI_GET_ADDRESS(fooarr(2), disp(2), ierr)
  extent = disp(2) - disp(1)
  lb = 0
  call MPI_TYPE_CREATE_RESIZED(newtype, lb, extent, newarrtype, ierr)
  call MPI_TYPE_COMMIT(newarrtype, ierr)
  call MPI_SEND(fooarr, 5, newarrtype, dest, tag, comm, ierr)
end subroutine foobar
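For anyone hitting the same errors in the meantime: one workaround (a sketch of my own, not an official recommendation) is to fall back to mpif.h for the affected translation unit. mpif.h declares no explicit interfaces, so the compiler performs no generic resolution on the choice buffer and accepts a derived-type actual argument. The names below mirror the attached example but are otherwise hypothetical; dest/tag/comm/newtype would be set up as in the full example.

  subroutine foobar_mpifh
    implicit none
    include 'mpif.h'   ! implicit interfaces: no specific MPI_SEND is looked up
    type, BIND(C) :: mytype
       integer :: i
       real :: x
    end type mytype
    type(mytype) :: foo
    integer :: newtype, dest, tag, comm, ierr
    ! ... build and commit newtype with MPI_TYPE_CREATE_STRUCT as above ...
    call MPI_SEND(foo, 1, newtype, dest, tag, comm, ierr)
  end subroutine foobar_mpifh

The cost, of course, is losing all compile-time argument checking for every MPI call in that file, which is exactly what the mpi module was supposed to provide.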