I am happy to report that my build of Open MPI is now fully functional,
using INCLUDE "mpif.h".  I have compiled and run my production code,
and it even looks to be slightly faster than when I use MPICH2.
Once again, I'd like to thank you guys for your help!  I can't even
remember the last time I got so much patient help from a forum; you
guys rock!
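
For reference, here is roughly the shape of my routines now (the
subroutine and variable names below are just illustrative, not my
actual code):

  subroutine get_rank(my_rank)
    implicit none
    include 'mpif.h'      ! F77-style header: MPI call arguments are not
                          ! checked at compile time
    integer, intent(out) :: my_rank
    integer :: ierr
    call MPI_COMM_RANK(MPI_COMM_WORLD, my_rank, ierr)
  end subroutine get_rank

With "use mpi" instead (placed before IMPLICIT NONE), the same call would
be checked against the F90 module's explicit interfaces.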

On Tue, Sep 23, 2008 at 6:06 AM, Jeff Squyres <jsquy...@cisco.com> wrote:
> It's not entirely clear from the later messages whether you got it running
> with mpif.h or "use mpi".
>
> What is the source code around the problematic line when you "use mpi"?
>  (including all the declarations of the arguments, etc.)
>
> MPICH2's F90 module is a bit different from ours -- I don't remember
> whether they provide MPI_SEND and all of the single-choice-buffer function
> overloads (because of the problem I described yesterday).
>
> Since we already found the compile bug with the F90 module, I'd like to
> ensure that we don't also have an F90 MPI interface bug.
>
>
> On Sep 22, 2008, at 8:54 PM, Brian Harker wrote:
>
>> Yes, I have matched all the arguments.  I should mention that the code
>> compiles and runs flawlessly using MPICH2-1.0.7, so it's got to be an
>> issue with my specific build of Open MPI.  I want to get Open MPI up and
>> running for performance comparisons.
>>
>> On Mon, Sep 22, 2008 at 6:43 PM, Jeff Squyres <jsquy...@cisco.com> wrote:
>>>
>>> What's the source code in question, then?  Did you match all the
>>> arguments?
>>>
>>>
>>> On Sep 22, 2008, at 8:36 PM, Brian Harker wrote:
>>>
>>>> Nope, no user-defined types or arrays with more than 2 dimensions.
>>>>
>>>> On Mon, Sep 22, 2008 at 6:24 PM, Jeff Squyres <jsquy...@cisco.com>
>>>> wrote:
>>>>>
>>>>> On Sep 22, 2008, at 6:48 PM, Brian Harker wrote:
>>>>>
>>>>>> when I compile my production code, I get:
>>>>>>
>>>>>> fortcom: Error: driver.f90: line 211: There is no matching specific
>>>>>> subroutine for this generic subroutine call.   [MPI_SEND]
>>>>>>
>>>>>> Seems odd that it would spit up on MPI_SEND, but has no problem with
>>>>>> MPI_RECV...  What do you guys think?  And thanks again for your help
>>>>>> and patience!
>>>>>
>>>>> The F90 MPI bindings have some well-known design flaws (i.e., problems
>>>>> with the standard itself, not any particular implementation).  Many of
>>>>> them center around the fact that F90 is a strongly-typed language.  See
>>>>> this paper for some details:
>>>>>
>>>>> http://www.open-mpi.org/papers/euro-pvmmpi-2005-fortran/
>>>>>
>>>>> Here are the highlights, as they pertain to writing F90 MPI apps:
>>>>>
>>>>> - There is no equivalent to C's (void*).  This means that the F90 MPI
>>>>> bindings cannot accept user-defined datatypes.
>>>>>
>>>>> - This also means that *every* pre-defined type must have an F90 MPI
>>>>> binding.  There are approximately 15 intrinsic size/type combinations.
>>>>> There are 50 MPI functions that take one choice buffer (e.g., MPI_SEND,
>>>>> etc.), and 25 functions that take two choice buffers (e.g., MPI_REDUCE).
>>>>> I'm copying this math from the paper, and I think we got it slightly
>>>>> wrong (there was a discussion about it on this list a while ago), but it
>>>>> results in many *millions* of F90 MPI binding functions.  There's no
>>>>> compiler on the planet that can handle all of these in a single F90
>>>>> module.
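>>>>>
>>>>> As a rough illustration (the names below are made up, not the actual
>>>>> generated symbols), each type/kind/rank combination needs its own
>>>>> specific procedure behind the generic interface:
>>>>>
>>>>>   interface MPI_Send
>>>>>     subroutine MPI_Send_r8_2d(buf, count, datatype, dest, tag, comm, ierr)
>>>>>       real(8), dimension(:,:), intent(in) :: buf
>>>>>       integer, intent(in) :: count, datatype, dest, tag, comm
>>>>>       integer, intent(out) :: ierr
>>>>>     end subroutine MPI_Send_r8_2d
>>>>>     subroutine MPI_Send_i4_3d(buf, count, datatype, dest, tag, comm, ierr)
>>>>>       integer(4), dimension(:,:,:), intent(in) :: buf
>>>>>       integer, intent(in) :: count, datatype, dest, tag, comm
>>>>>       integer, intent(out) :: ierr
>>>>>     end subroutine MPI_Send_i4_3d
>>>>>     ! ...and so on, across ~15 types, up to 7 ranks, and dozens of
>>>>>     ! 1- and 2-choice-buffer functions
>>>>>   end interface MPI_Send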
>>>>>
>>>>> Open MPI compensates for this with the following:
>>>>>
>>>>> - F90 bindings are not created for any of the 2-choice-buffer functions
>>>>> - F90 bindings are created for all the 1-choice-buffer functions, but
>>>>> only for arrays of up to N dimensions (N defaults to 4, IIRC).  You can
>>>>> change the value of N with OMPI's configure script via the
>>>>> --with-f90-max-array-dim option.  The maximum value of N is 7.
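>>>>>
>>>>> For example (a hypothetical sketch, not your code), a 5-D choice buffer
>>>>> fails to compile against a default build, but compiles and runs if OMPI
>>>>> is rebuilt with --with-f90-max-array-dim=5 or larger:
>>>>>
>>>>>   program send5d
>>>>>     use mpi
>>>>>     implicit none
>>>>>     real(8) :: buf(2,2,2,2,2)                 ! 5-dimensional buffer
>>>>>     integer :: rank, ierr, status(MPI_STATUS_SIZE)
>>>>>     call MPI_INIT(ierr)
>>>>>     call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
>>>>>     if (rank == 0) then
>>>>>       buf = 1.0d0
>>>>>       ! With the default N=4 there is no specific MPI_SEND for a 5-D
>>>>>       ! buffer, so the compiler reports "no matching specific
>>>>>       ! subroutine for this generic subroutine call".
>>>>>       call MPI_SEND(buf, size(buf), MPI_DOUBLE_PRECISION, 1, 0, &
>>>>>                     MPI_COMM_WORLD, ierr)
>>>>>     else if (rank == 1) then
>>>>>       call MPI_RECV(buf, size(buf), MPI_DOUBLE_PRECISION, 0, 0, &
>>>>>                     MPI_COMM_WORLD, status, ierr)
>>>>>     end if
>>>>>     call MPI_FINALIZE(ierr)
>>>>>   end program send5d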
>>>>>
>>>>> So -- your app failed to compile because you either used a user-defined
>>>>> datatype or you used an array with a dimension greater than 4.  If it
>>>>> was a greater-dimension issue, you can reconfigure/recompile/reinstall
>>>>> OMPI (again, sorry) with a larger N value.  If it was a user-defined
>>>>> datatype, you unfortunately have to "include mpif.h" in that
>>>>> subroutine/function/whatever, sorry (and you lose the type checking).
>>>>> :-(
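>>>>>
>>>>> A rough sketch of that fallback (type and names are made up; in real
>>>>> code the SEQUENCE type would normally live in a module):
>>>>>
>>>>>   subroutine send_particle(p, dest)
>>>>>     implicit none
>>>>>     include 'mpif.h'    ! F77-style interface: the choice buffer is not
>>>>>                         ! type-checked, so a derived type is accepted
>>>>>     type particle
>>>>>       sequence
>>>>>       real(8) :: x, y, z
>>>>>     end type particle
>>>>>     type(particle), intent(in) :: p
>>>>>     integer, intent(in) :: dest
>>>>>     integer :: ierr
>>>>>     ! Send the three contiguous doubles of the SEQUENCE type directly;
>>>>>     ! a more portable option is an MPI derived datatype.
>>>>>     call MPI_SEND(p, 3, MPI_DOUBLE_PRECISION, dest, 0, &
>>>>>                   MPI_COMM_WORLD, ierr)
>>>>>   end subroutine send_particle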
>>>>>
>>>>> Here's some info from OMPI's README:
>>>>>
>>>>> -----
>>>>> - The Fortran 90 MPI bindings can now be built in one of three sizes
>>>>> using --with-mpi-f90-size=SIZE (see description below).  These sizes
>>>>> reflect the number of MPI functions included in the "mpi" Fortran 90
>>>>> module and therefore which functions will be subject to strict type
>>>>> checking.  All functions not included in the Fortran 90 module can
>>>>> still be invoked from F90 applications, but will fall back to
>>>>> Fortran-77 style checking (i.e., little/none).
>>>>>
>>>>> - trivial: Only includes F90-specific functions from MPI-2.  This
>>>>> means overloaded versions of MPI_SIZEOF for all the MPI-supported
>>>>> F90 intrinsic types.
>>>>>
>>>>> - small (default): All the functions in "trivial" plus all MPI
>>>>> functions that take no choice buffers (meaning buffers that are
>>>>> specified by the user and are of type (void*) in the C bindings --
>>>>> generally buffers specified for message passing).  Hence,
>>>>> functions like MPI_COMM_RANK are included, but functions like
>>>>> MPI_SEND are not.
>>>>>
>>>>> - medium: All the functions in "small" plus all MPI functions that
>>>>> take one choice buffer (e.g., MPI_SEND, MPI_RECV, ...).  All
>>>>> one-choice-buffer functions have overloaded variants for each of
>>>>> the MPI-supported Fortran intrinsic types up to the number of
>>>>> dimensions specified by --with-f90-max-array-dim (default value is
>>>>> 4).
>>>>>
>>>>> Increasing the size of the F90 module (in order from trivial, small,
>>>>> and medium) will generally increase the length of time required to
>>>>> compile user MPI applications.  Specifically, "trivial"- and
>>>>> "small"-sized F90 modules generally allow user MPI applications to
>>>>> be compiled fairly quickly but lose type safety for all MPI
>>>>> functions with choice buffers.  "medium"-sized F90 modules generally
>>>>> take longer to compile user applications but provide greater type
>>>>> safety for MPI functions.
>>>>>
>>>>> Note that MPI functions with two choice buffers (e.g., MPI_GATHER)
>>>>> are not currently included in Open MPI's F90 interface.  Calls to
>>>>> these functions will automatically fall through to Open MPI's F77
>>>>> interface.  A "large" size that includes the two choice buffer MPI
>>>>> functions is possible in future versions of Open MPI.
>>>>> -----
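>>>>>
>>>>> As a small sketch of the MPI_SIZEOF overloads mentioned under "trivial"
>>>>> above (variable names here are just for illustration):
>>>>>
>>>>>   program sizeof_demo
>>>>>     use mpi
>>>>>     implicit none
>>>>>     real(8) :: x
>>>>>     integer :: nbytes, ierr
>>>>>     call MPI_INIT(ierr)
>>>>>     ! Resolves to the real(8)-specific overload in the "mpi" module;
>>>>>     ! typically prints 8.
>>>>>     call MPI_SIZEOF(x, nbytes, ierr)
>>>>>     print *, 'MPI_SIZEOF(real(8)) =', nbytes
>>>>>     call MPI_FINALIZE(ierr)
>>>>>   end program sizeof_demo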
>>>>>
>>>>> FWIW, we're arguing^H^H^H^H^H^H^Hdiscussing new Fortran 2003 bindings
>>>>> for MPI in the MPI-3 Forum right now.  We have already addressed the
>>>>> problems discussed above (F03 now has an equivalent of (void*)), and
>>>>> hope to do a few more minor things as well.  There's also discussion of
>>>>> the possibility of a Boost.MPI-like Fortran 2003 MPI library that would
>>>>> take advantage of many of the features of the language, but be a little
>>>>> farther away from the official MPI bindings (see www.boost.org for
>>>>> details about how their nifty C++ library works on top of MPI).
>>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>
>>
>>
>
>



-- 
Cheers,
Brian
brian.har...@gmail.com


"In science, there is only physics; all the rest is stamp-collecting."
 -Ernest Rutherford
