We don't fully support MPI_THREAD_MULTIPLE, and most definitely not when using InfiniBand (IB).
We plan to extend that coverage in the 1.9 series.


On Apr 25, 2014, at 2:22 PM, Markus Wittmann <markus.wittm...@fau.de> wrote:

> Hi everyone,
> 
> I'm using the current Open MPI 1.8.1 release and observe
> non-deterministic deadlocks and warnings from libevent when using
> MPI_THREAD_MULTIPLE. Open MPI has been configured with
> --enable-mpi-thread-multiple --with-tm --with-verbs (see the attached
> config.log).
> 
> Attached is a sample application that spawns a thread for each process
> after MPI_Init_thread has been called. The thread then calls MPI_Recv,
> which blocks until the main thread calls the matching MPI_Send just
> before MPI_Finalize. (AFAIK, MPICH uses this kind of mechanism to
> implement a progress thread.) Meanwhile, the main thread exchanges data
> with its right/left neighbors via MPI_Isend/MPI_Irecv.
> 
> I only see this when the MPI processes run on separate nodes, as in
> the following:
> 
> $ mpiexec -n 2 -map-by node ./test
> [0] isend/irecv.
> [0] progress thread...
> [0] waitall.
> [warn] opal_libevent2021_event_base_loop: reentrant invocation. Only one event_base_loop can run on each event_base at once.
> [1] isend/irecv.
> [1] progress thread...
> [1] waitall.
> [warn] opal_libevent2021_event_base_loop: reentrant invocation. Only one event_base_loop can run on each event_base at once.
> 
> <no further output...>
> 
> Can anybody confirm this?
> 
> Best regards,
> Markus
> 
> -- 
> Markus Wittmann, HPC Services
> Friedrich-Alexander-Universität Erlangen-Nürnberg
> Regionales Rechenzentrum Erlangen (RRZE)
> Martensstrasse 1, 91058 Erlangen, Germany
> http://www.rrze.fau.de/hpc/
> <info.tar.bz2><test.c>
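
For reference, below is a minimal sketch of the pattern Markus describes: a "progress" thread blocked in MPI_Recv while the main thread performs a neighbor exchange. This is a hypothetical reconstruction, not the test.c attached to the original mail; the tag values and the use of MPI_COMM_SELF for the wake-up message are assumptions.

/*
 * Hypothetical reconstruction of the test described above; the actual
 * test.c attached to the original mail is not reproduced here.
 *
 * Build/run (assumed): mpicc -pthread sketch.c -o test
 *                      mpiexec -n 2 -map-by node ./test
 */
#include <mpi.h>
#include <pthread.h>
#include <stdio.h>

#define WAKEUP_TAG 42  /* assumed tag for the thread wake-up message */
#define HALO_TAG    7  /* assumed tag for the neighbor exchange */

static void *progress_thread(void *arg)
{
    int rank = *(int *)arg, dummy;
    printf("[%d] progress thread...\n", rank);
    /* Blocks until the main thread sends the matching message just
     * before MPI_Finalize. MPI_COMM_SELF has a single rank, 0. */
    MPI_Recv(&dummy, 1, MPI_INT, 0, WAKEUP_TAG, MPI_COMM_SELF,
             MPI_STATUS_IGNORE);
    return NULL;
}

int main(int argc, char **argv)
{
    int provided, rank, size, dummy = 0;

    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) {
        fprintf(stderr, "MPI_THREAD_MULTIPLE not provided\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    pthread_t tid;
    pthread_create(&tid, NULL, progress_thread, &rank);

    /* Periodic right/left neighbor exchange from the main thread. */
    int left  = (rank - 1 + size) % size;
    int right = (rank + 1) % size;
    int sendbuf = rank, recvbuf = -1;
    MPI_Request reqs[2];

    printf("[%d] isend/irecv.\n", rank);
    MPI_Irecv(&recvbuf, 1, MPI_INT, left,  HALO_TAG, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(&sendbuf, 1, MPI_INT, right, HALO_TAG, MPI_COMM_WORLD, &reqs[1]);
    printf("[%d] waitall.\n", rank);
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    /* Wake the progress thread just before finalizing. */
    MPI_Send(&dummy, 1, MPI_INT, 0, WAKEUP_TAG, MPI_COMM_SELF);
    pthread_join(tid, NULL);

    MPI_Finalize();
    return 0;
}

With a fully working MPI_THREAD_MULTIPLE implementation, each rank should print its three messages and exit cleanly; there is no deadlock in the program logic itself.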
