Jeff,

>>> No.  Also note that in OMPI 1.7/1.8, we have renamed the Fortran
>>> wrapper to be mpifort -- mpif77 and mpif90 are sym links to mpifort
>>> provided simply for backwards compatibility.
>>
>> Thanks for the heads up. Complicates our configuration a little but good
>> to know. ;-)
> 
> I'm a little confused here -- I thought you said you wanted to replace your 
> wrappers with ours.

Yes. We want to use your Fortran wrappers, but provide our own C
wrappers. For more details see my comments below.

> If that's correct, why do you need to know linking order, etc.?
> 
> I.e., if you're using our wrappers, then you just call mpif77/mpif90/mpifort, 
> and you're done.
> 
>>> mpifort acts identically, regardless of whether it is invoked by the
>>> name "mpif77" or "mpif90" or "mpifort".
>>>
>>> In the 1.7/1.8 series, we link in all the Fortran libraries when you
>>> invoke mpifort, which allows you to use any of the 3 MPI Fortran
>>> interfaces (mpif.h, the mpi module, and the mpi_f08 module).  This
>>> is, of course, tempered by what you built and installed -- e.g., if
>>> you're using an old version of gfortran, the libmpi_usempif08 library
>>> won't be built, and therefore won't be linked in by mpifort, and "use
>>> mpi_f08" in applications will fail to compile.
>>
>> Ok. Is there a required order for those three libraries?
> 
> Yes.
> 
> ..but instead of answering your question directly, I'm going to ask: why do 
> you need to know?  :-)
> 
> Read below before answering.
> 
>> Score-P needs to get our C wrappers into the middle of your link line, though.
>>
>> As far as I understand, the order needs to be:
>>
>> mpifort user_code.f90 -o foo <ompi_fortran_wrappers> <scorep-c-wrappers>
>> <ompi_mpi_libs>
>>
>> Right?
> 
> No.  You shouldn't need to list the OMPI MPI libs at all -- the wrapper will 
> put those in for you.

Yes, but then I don't get any interposition for the Fortran calls.

> For example, in OMPI 1.6.x:
> 
> $ mpif77 hello_f77.f -o foo -lscorep
> 
> Turns into:
> 
> gfortran hello_f77.f -o foo -lscorep -I/home/jsquyres/bogus/include -pthread 
> -L/home/jsquyres/bogus/lib -lmpi_f77 -lmpi -ldl -lm -lnuma 
> -Wl,--export-dynamic -lrt -lnsl -lutil -lm -ldl
> 
> Notice how -lscorep is to the left of all the OMPI libraries, so there's 
> nothing additional you need to add.
> 
> And in v1.8.x:
> 
> $ mpifort hello_f77.f -o foo -lscorep
> 
> Turns into:
> 
> gfortran hello_f77.f -o foo -lscorep -I/home/jsquyres/bogus/include -pthread 
> -I/home/jsquyres/bogus/lib -Wl,-rpath -Wl,/home/jsquyres/bogus/lib 
> -Wl,--enable-new-dtags -L/home/jsquyres/bogus/lib -lmpi_usempif08 
> -lmpi_usempi_ignore_tkr -lmpi_mpifh -lmpi

Thanks for the clarification.

>> The user code generates unresolved symbols that are satisfied by the
>> fortran wrappers of OMPI. They in turn generate unresolved symbols to
>> the C functions, then intercepted by the Score-P wrappers, in turn
>> generating unresolved symbols to the core MPI functions, which are
>> satisfied by the rest of the OMPI link line.
> 
> This is the wrong scheme to use.  :-(

In the general case, yes. In the long run, also yes. However, this is a
solution that works (and has worked) well for many Fortran applications.
Frankly, this is why the tools community never noticed that anything was
wrong in the first place. ;-)

We have identified some serious overhead issues with our own Fortran
wrappers in Score-P. Our old measurement system in Scalasca used the
proposed 'hack' and "works" with significantly lower overhead. This is
why we'd like to use Open MPI's Fortran wrappers in Score-P as well, but
stumbled over the fact that we now have two or more libraries (we were
used to only one). The idea is to use the hack until we have a better
solution for the general case.

Doesn't Open MPI ship VampirTrace with it? How is this solved there
currently? My guess would be that exactly this scheme is used, but I am
willing to learn. ;-)

> If you want to intercept Fortran function calls, you must do it in
> Fortran.  It is not sufficient to only intercept C calls if you
> intend to intercept all Fortran calls -- don't you remember our
> confusing discussions in the Forum Tools WG about this exact topic?
> :-(

Yes, I do remember this discussion, and yes, fixing this current issue
will not fix the general one. However, as far as I understood, the
problem only affected MPI functions that take pointers to functions,
because calling conventions differ between Fortran and C and you'd need
to know what kind of function the pointer points to, right?

So in principle this solution should work for applications that do not
use error handlers or user-defined operators. Yes, I agree that a
general solution, which very likely involves Fortran wrappers that do
not call their C counterparts, is desirable in the long run.

> If -lscorep contains Fortran MPI wrappers (i.e., you're intercepting
> the Fortran MPI API calls), then you can turn around and call PMPI
> Fortran calls.  The MPI-3.0 + errata contains the right symbol
> methodology you must use to intercept Fortran subroutine/function
> calls for MPI.

Yes, but we are not talking MPI 3.0 or mpi_f08 here. This is plain old
mpif.h and 'use mpi'.

In the long run, my vision is to have Fortran wrappers that use a
Fortran interface definition of the relevant parts of the measurement
system, call into the measurement system directly, and then call the
Fortran PMPI function. However, this will take some time (convincing
people, learning Fortran, writing code, etc.).

Cheers,
Marc-Andre
-- 
Marc-Andre Hermanns
Jülich Aachen Research Alliance,
High Performance Computing (JARA-HPC)
German Research School for Simulation Sciences GmbH

Schinkelstrasse 2
52062 Aachen
Germany

Phone: +49 241 80 99753
Fax: +49 241 80 6 99753
www.grs-sim.de/parallel
email: m.a.herma...@grs-sim.de
