Dear all,

We develop an instrumentation package based on PMPI, and we've observed that, in Open MPI 1.10.0, PMPI_Ibarrier and PMPI_Ibcast invoke the regular MPI_Comm_size and MPI_Comm_rank instead of the PMPI symbols (i.e., PMPI_Comm_size and PMPI_Comm_rank).

I have attached a simple example that demonstrates the problem with Open MPI 1.10.0. The example builds a library (libinstrument) that hooks MPI_Comm_size, MPI_Comm_rank and MPI_Ibarrier, plus a small MPI application that issues an MPI_Ibarrier and waits on it. Running this binary together with the instrumentation library gives the following output:

# ~/aplic/openmpi/1.10.0/bin/mpirun -np 1 ./main
entering MPI_Ibarrier
entering MPI_Comm_rank
leaving MPI_Comm_rank
entering MPI_Comm_size
leaving MPI_Comm_size
leaving MPI_Ibarrier

which shows that MPI_Comm_rank and MPI_Comm_size are invoked (and intercepted again) from within MPI_Ibarrier.
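For reference, the interception in libinstrument follows the standard PMPI pattern. A minimal sketch (the wrapper names follow the trace above, but the exact logging and the attached code may differ):

```c
/* Minimal PMPI wrapper sketch (assumed shape; the attached libinstrument
 * may differ). Each wrapper logs entry/exit and forwards to the PMPI_
 * entry point, which is how the trace above is produced. */
#include <mpi.h>
#include <stdio.h>

int MPI_Comm_rank(MPI_Comm comm, int *rank)
{
    printf("entering MPI_Comm_rank\n");
    int ret = PMPI_Comm_rank(comm, rank);
    printf("leaving MPI_Comm_rank\n");
    return ret;
}

int MPI_Comm_size(MPI_Comm comm, int *size)
{
    printf("entering MPI_Comm_size\n");
    int ret = PMPI_Comm_size(comm, size);
    printf("leaving MPI_Comm_size\n");
    return ret;
}

int MPI_Ibarrier(MPI_Comm comm, MPI_Request *request)
{
    printf("entering MPI_Ibarrier\n");
    /* The call below goes to the PMPI_ entry point and should therefore
     * bypass the wrappers above. In Open MPI 1.10.0 the implementation
     * behind it appears to call MPI_Comm_rank/MPI_Comm_size (not the
     * PMPI_ symbols), so the wrappers fire again mid-collective, as seen
     * in the trace. */
    int ret = PMPI_Ibarrier(comm, request);
    printf("leaving MPI_Ibarrier\n");
    return ret;
}
```

Linking such a library ahead of the MPI library (or preloading it) is enough to reproduce the nested entering/leaving lines shown above.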

I looked into ompi/mpi/ibarrier.c and ./ompi/mpi/c/profile/pibarrier.c, but it wasn't evident to me what might be wrong.

Can anyone check this? It would also be worth checking whether the same happens for the other MPI-3 nonblocking collectives (MPI_Ireduce, MPI_Iallreduce, MPI_Igather, ...).

Thank you!




Attachment: mpi-ibarrier.tar
Description: Unix tar archive
