It isn't that simple. In some cases THREAD_MULTIPLE works just fine; in others
it doesn't. Devising logic that accurately detects when it does and doesn't
work would be extremely difficult, and in many cases the answer is
application-dependent. If we disable it for everyone, then those who can make
it work get upset.

We don't like the situation either :-(
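
That said, an application can at least check the thread level that
MPI_Init_thread actually returned and adapt to it - even though, as noted
above, that alone won't tell you whether a particular transport handles
THREAD_MULTIPLE well. A minimal sketch (plain MPI C, nothing Open
MPI-specific; the fallback path is just illustrative):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;

    /* Request MPI_THREAD_MULTIPLE, but check what the library grants. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    if (provided < MPI_THREAD_MULTIPLE) {
        fprintf(stderr, "MPI_THREAD_MULTIPLE not available (provided=%d); "
                "falling back to single-threaded communication\n", provided);
        /* ... take the single-threaded code path instead ... */
    }

    /* ... rest of the application ... */

    MPI_Finalize();
    return 0;
}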

On Apr 28, 2014, at 8:03 AM, Jeffrey A Cummings <jeffrey.a.cummi...@aero.org> 
wrote:

> Wouldn't you save yourself work and your users confusion if you disabled 
> options that don't currently work? 
> 
> 
> Jeffrey A. Cummings
> Engineering Specialist
> Performance Modeling and Analysis Department
> Systems Analysis and Simulation Subdivision
> Systems Engineering Division
> Engineering and Technology Group
> The Aerospace Corporation
> 571-307-4220
> jeffrey.a.cummi...@aero.org 
> 
> 
> 
> From: Ralph Castain <r...@open-mpi.org>
> To: Open MPI Users <us...@open-mpi.org>
> Date: 04/25/2014 05:40 PM
> Subject: Re: [OMPI users] Deadlocks and warnings from libevent when using MPI_THREAD_MULTIPLE
> Sent by: "users" <users-boun...@open-mpi.org>
> 
> 
> 
> We don't fully support THREAD_MULTIPLE, and most definitely not when using
> IB. We are planning on extending that coverage in the 1.9 series.
> 
> 
> On Apr 25, 2014, at 2:22 PM, Markus Wittmann <markus.wittm...@fau.de> wrote:
> 
> > Hi everyone,
> > 
> > I'm using the current Open MPI 1.8.1 release and observe
> > non-deterministic deadlocks and warnings from libevent when using
> > MPI_THREAD_MULTIPLE. Open MPI has been configured with
> > --enable-mpi-thread-multiple --with-tm --with-verbs (see the attached
> > config.log).
> > 
> > Attached is a sample application that spawns a thread in each process
> > after MPI_Init_thread has been called. The thread then calls MPI_Recv,
> > which blocks until the matching MPI_Send is posted just before
> > MPI_Finalize in the main thread. (AFAIK, MPICH uses this kind of
> > mechanism to implement a progress thread.) Meanwhile, the main thread
> > exchanges data with its right/left neighbors via MPI_Isend/MPI_Irecv.
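> > 
> > In outline, the test does something like this (a simplified sketch of the
> > attached test.c; the tag and buffer names here are just for illustration):
> > 
> > #include <mpi.h>
> > #include <pthread.h>
> > #include <stdio.h>
> > 
> > #define SHUTDOWN_TAG 42
> > 
> > /* Blocks in MPI_Recv until the main thread posts the matching
> >  * MPI_Send just before MPI_Finalize. */
> > static void *progress_thread(void *arg)
> > {
> >     int rank = *(int *)arg, dummy;
> > 
> >     printf("[%d] progress thread...\n", rank);
> >     MPI_Recv(&dummy, 1, MPI_INT, 0, SHUTDOWN_TAG, MPI_COMM_SELF,
> >              MPI_STATUS_IGNORE);
> >     return NULL;
> > }
> > 
> > int main(int argc, char **argv)
> > {
> >     int provided, rank, size, dummy = 0;
> >     int sbuf[2], rbuf[2];
> >     pthread_t tid;
> >     MPI_Request req[4];
> > 
> >     MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
> >     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
> >     MPI_Comm_size(MPI_COMM_WORLD, &size);
> > 
> >     pthread_create(&tid, NULL, progress_thread, &rank);
> > 
> >     /* Non-blocking exchange with the left/right neighbors. */
> >     int left  = (rank - 1 + size) % size;
> >     int right = (rank + 1) % size;
> >     sbuf[0] = sbuf[1] = rank;
> >     printf("[%d] isend/irecv.\n", rank);
> >     MPI_Isend(&sbuf[0], 1, MPI_INT, left,  0, MPI_COMM_WORLD, &req[0]);
> >     MPI_Isend(&sbuf[1], 1, MPI_INT, right, 0, MPI_COMM_WORLD, &req[1]);
> >     MPI_Irecv(&rbuf[0], 1, MPI_INT, left,  0, MPI_COMM_WORLD, &req[2]);
> >     MPI_Irecv(&rbuf[1], 1, MPI_INT, right, 0, MPI_COMM_WORLD, &req[3]);
> >     printf("[%d] waitall.\n", rank);
> >     MPI_Waitall(4, req, MPI_STATUSES_IGNORE);
> > 
> >     /* Wake the progress thread, then shut down. */
> >     MPI_Send(&dummy, 1, MPI_INT, 0, SHUTDOWN_TAG, MPI_COMM_SELF);
> >     pthread_join(tid, NULL);
> > 
> >     MPI_Finalize();
> >     return 0;
> > }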
> > 
> > I only see this when the MPI processes run on separate nodes, as in
> > the following:
> > 
> > $ mpiexec -n 2 -map-by node ./test
> > [0] isend/irecv.
> > [0] progress thread...
> > [0] waitall.
> > [warn] opal_libevent2021_event_base_loop: reentrant invocation. Only one 
> > event_base_loop can run on each event_base at once.
> > [1] isend/irecv.
> > [1] progress thread...
> > [1] waitall.
> > [warn] opal_libevent2021_event_base_loop: reentrant invocation. Only one 
> > event_base_loop can run on each event_base at once.
> > 
> > <no further output...>
> > 
> > Can anybody confirm this?
> > 
> > Best regards,
> > Markus
> > 
> > -- 
> > Markus Wittmann, HPC Services
> > Friedrich-Alexander-Universität Erlangen-Nürnberg
> > Regionales Rechenzentrum Erlangen (RRZE)
> > Martensstrasse 1, 91058 Erlangen, Germany
> > http://www.rrze.fau.de/hpc/
> > (Attachments: info.tar.bz2, test.c)
