Hello Gilles,

  Some comments are interspersed below.

On 11/06/2015 02:50 PM, Gilles Gouaillardet wrote:
Harald,

the answer is in ompi/mca/coll/libnbc/nbc_ibcast.c

this has been revamped (but not 100%) in v2.x
(e.g. no more calls to MPI_Comm_{size,rank} but MPI_Type_size is still
being invoked)

Ah! That is a very useful pointer, thanks. It looks like others such as igather and ireduce also have this issue :S

I will review this.
basically, no MPI_* should be invoked internally (e.g. we should use the
internal ompi_* or the PMPI_* symbols).
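
Understood. To make sure I follow, the pattern would then be something along
these lines (an illustration only, not the actual libnbc code; the helper name
is made up, and the ompi_* internals could of course be used instead of the
PMPI_* symbols):

  /* Illustration only -- not the actual libnbc code.  Internal
   * rank/size/type-size queries go through the PMPI_* entry points,
   * so an LD_PRELOADed MPI_* wrapper never intercepts them. */
  #include <mpi.h>

  static int query_layout(MPI_Comm comm, MPI_Datatype type,
                          int *rank, int *size, int *type_size)
  {
    int res;

    res = PMPI_Comm_rank(comm, rank);        /* was: MPI_Comm_rank() */
    if (MPI_SUCCESS != res) return res;
    res = PMPI_Comm_size(comm, size);        /* was: MPI_Comm_size() */
    if (MPI_SUCCESS != res) return res;
    return PMPI_Type_size(type, type_size);  /* was: MPI_Type_size() */
  }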

there is currently no plan for a v1.10.2 release, so you have to wait
for v2.0.0.

Is it possible to know when this behavior was introduced? Perhaps it has been there since the first MPI-3 implementation (was it Open MPI 1.8?).


note you should wrap the C bindings (with a C library) and the Fortran
bindings (with a Fortran library).
currently, the Fortran wrapper will likely invoke the C wrapper, but
that will no longer be the case from v2.x

Oh! That's a pity. We usually use the LD_PRELOAD technique to inject the instrumentation, and since the Fortran wrappers invoke the C wrappers we can instrument both Fortran and C applications with a single instrumentation library. Other MPI implementations (I won't name names here) also have this C/Fortran "separation", which forces us to generate two instrumentation libraries, one for C applications and another for Fortran applications.
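
For v2.x we will therefore have to ship a second library with Fortran-level
wrappers as well. For anyone else following the thread, a minimal sketch of
what such a wrapper could look like, written in C against the mangled Fortran
symbol (the name mangling and the availability of the pmpi_ibarrier_ profiling
symbol are assumptions on my side):

  /* Sketch of a Fortran-binding wrapper written in C.  Assumptions:
   * gfortran-style name mangling (lowercase plus a single trailing
   * underscore) and that the pmpi_ibarrier_ profiling symbol exists. */
  #include <stdio.h>
  #include <mpi.h>

  extern void pmpi_ibarrier_(MPI_Fint *comm, MPI_Fint *request, MPI_Fint *ierr);

  void mpi_ibarrier_(MPI_Fint *comm, MPI_Fint *request, MPI_Fint *ierr)
  {
    printf("entering MPI_Ibarrier (Fortran)\n");  /* enter instrumentation */
    pmpi_ibarrier_(comm, request, ierr);          /* forward to PMPI */
    printf("leaving MPI_Ibarrier (Fortran)\n");   /* exit instrumentation */
  }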

Thank you!

Cheers,

Gilles

On Friday, November 6, 2015, Harald Servat <harald.ser...@bsc.es> wrote:

    Dear all,

       we develop an instrumentation package based on PMPI, and we have
    observed that PMPI_Ibarrier and PMPI_Ibcast invoke the regular
    MPI_Comm_size and MPI_Comm_rank instead of the PMPI symbols (i.e.
    PMPI_Comm_size and PMPI_Comm_rank) in Open MPI 1.10.0.

       I have attached a simple example that demonstrates this when using
    Open MPI 1.10.0. The example creates a library (libinstrument) that
    hooks MPI_Comm_size, MPI_Comm_rank and MPI_Ibarrier. Then there is a
    single MPI application that executes MPI_Ibarrier and waits for it.
    The result of combining this binary with the instrumentation
    library is the following:

    # ~/aplic/openmpi/1.10.0/bin/mpirun -np 1 ./main
    entering MPI_Ibarrier
    entering MPI_Comm_rank
    leaving MPI_Comm_rank
    entering MPI_Comm_size
    leaving MPI_Comm_size
    leaving MPI_Ibarrier

       which shows that MPI_Comm_rank and MPI_Comm_size are invoked
    within MPI_Ibarrier.
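
       For reference, the hooks in libinstrument are essentially wrappers
    of this form (a simplified sketch, not a verbatim copy of the attached
    file; MPI_Comm_size is analogous):

    /* libinstrument: PMPI-based wrappers injected with LD_PRELOAD.
     * Each hook prints a trace line and forwards to the PMPI symbol. */
    #include <stdio.h>
    #include <mpi.h>

    int MPI_Comm_rank(MPI_Comm comm, int *rank)
    {
      printf("entering MPI_Comm_rank\n");
      int res = PMPI_Comm_rank(comm, rank);
      printf("leaving MPI_Comm_rank\n");
      return res;
    }

    int MPI_Ibarrier(MPI_Comm comm, MPI_Request *request)
    {
      printf("entering MPI_Ibarrier\n");
      int res = PMPI_Ibarrier(comm, request);
      printf("leaving MPI_Ibarrier\n");
      return res;
    }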

       I looked into ompi/mpi/ibarrier.c and
    ./ompi/mpi/c/profile/pibarrier.c but it wasn't evident to me what
    might be wrong.

       Could anyone check this? And also whether this might occur with
    other MPI-3 immediate collectives (MPI_Ireduce, MPI_Iallreduce,
    MPI_Igather, ...)?

    Thank you!








