Re: [OMPI users] Collective MPI-IO + MPI_THREAD_MULTIPLE

2016-03-29 Thread Sebastian Rettenberger
Hi, thanks, the patch works for me. I will do some further tests and report back if I find another problem. Best regards, Sebastian. On 03/25/2016 01:58 AM, Gilles Gouaillardet wrote: Sebastian, at first glance, the global lock in the ROMIO glue is not necessary. Feel free to give the attached …

Re: [OMPI users] Collective MPI-IO + MPI_THREAD_MULTIPLE

2016-03-24 Thread Gilles Gouaillardet
Sebastian, at first glance, the global lock in the ROMIO glue is not necessary. Feel free to give the attached patch a try (it works with your example, but I have done no further testing). Cheers, Gilles. On 3/25/2016 9:26 AM, Gilles Gouaillardet wrote: Sebastian, thanks for the info. Bottom line, t…

Re: [OMPI users] Collective MPI-IO + MPI_THREAD_MULTIPLE

2016-03-24 Thread Gilles Gouaillardet
Sebastian, thanks for the info. Bottom line, the global lock is in the OpenMPI glue for ROMIO. I will check what kind of locking (if any) is done in MPICH. Cheers, Gilles. On 3/24/2016 11:30 PM, Sebastian Rettenberger wrote: Hi, I tested this on my desktop machine. Thus, one node, two tasks. …

Re: [OMPI users] Collective MPI-IO + MPI_THREAD_MULTIPLE

2016-03-24 Thread Jeff Hammond
> On Mar 24, 2016, at 4:51 PM, Gilles Gouaillardet wrote: > > Jeff, > > from MPI 3.1, page 217: > >> Finally, in multithreaded implementations, one can have more than one, >> concurrently >> executing, collective communication call at a process. In these situations, >> it is the user's respon…

Re: [OMPI users] Collective MPI-IO + MPI_THREAD_MULTIPLE

2016-03-24 Thread Gilles Gouaillardet
Jeff, from MPI 3.1, page 217: Finally, in multithreaded implementations, one can have more than one, concurrently executing, collective communication call at a process. In these situations, it is the user's responsibility to ensure that the same communicator is not used concurrently by two di…
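The rule quoted from MPI 3.1 permits concurrent collectives from different threads of one process as long as each uses a distinct communicator. A minimal sketch of the legal pattern, assuming an MPI library built with MPI_THREAD_MULTIPLE support (this requires an MPI compiler and runtime to build and run):

```c
#include <mpi.h>
#include <pthread.h>
#include <stdlib.h>

/* Each thread runs a collective on its own communicator, which MPI 3.1
 * explicitly allows when MPI_THREAD_MULTIPLE is provided. */
static void *worker(void *arg)
{
    MPI_Comm c = *(MPI_Comm *)arg;
    MPI_Barrier(c);   /* collective; legal because each thread has its own comm */
    return NULL;
}

int main(int argc, char **argv)
{
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE)
        MPI_Abort(MPI_COMM_WORLD, 1);

    /* Duplicate COMM_WORLD so each thread's collective has its own context. */
    MPI_Comm comm_a, comm_b;
    MPI_Comm_dup(MPI_COMM_WORLD, &comm_a);
    MPI_Comm_dup(MPI_COMM_WORLD, &comm_b);

    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, &comm_a);
    pthread_create(&t2, NULL, worker, &comm_b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    MPI_Comm_free(&comm_a);
    MPI_Comm_free(&comm_b);
    MPI_Finalize();
    return 0;
}
```

What the standard forbids is two threads issuing collectives on the *same* communicator concurrently; distinct communicators (or distinct file handles for MPI-IO) are the user's tool for keeping concurrent collectives well-defined.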

Re: [OMPI users] Collective MPI-IO + MPI_THREAD_MULTIPLE

2016-03-24 Thread Jeff Hammond
On Thursday, March 24, 2016, Sebastian Rettenberger wrote: > Hi, > > I tried to run the attached program with OpenMPI. It works well with MPICH > and Intel MPI, but I get a deadlock when using OpenMPI. > I am using OpenMPI 1.10.0 with support for MPI_THREAD_MULTIPLE. > > It seems like ROMIO uses g…

Re: [OMPI users] Collective MPI-IO + MPI_THREAD_MULTIPLE

2016-03-24 Thread Sebastian Rettenberger
Hi, I tested this on my desktop machine. Thus, one node, two tasks. The deadlock appears on the local file system and on the NFS mount. The MPICH version I tested was 3.2. However, as far as I know, the locking is part of the MPI library and not ROMIO. Best regards, Sebastian. On 03/24/2016 03:19 P…

Re: [OMPI users] Collective MPI-IO + MPI_THREAD_MULTIPLE

2016-03-24 Thread Gilles Gouaillardet
Sebastian, in OpenMPI 1.10, the default io component is ROMIO from MPICH 3.0.4. How many tasks, how many nodes, and which file system are you running on? Cheers, Gilles. On Thursday, March 24, 2016, Sebastian Rettenberger wrote: > Hi, > > I tried to run the attached program with OpenMPI. It w…

[OMPI users] Collective MPI-IO + MPI_THREAD_MULTIPLE

2016-03-24 Thread Sebastian Rettenberger
Hi, I tried to run the attached program with OpenMPI. It works well with MPICH and Intel MPI, but I get a deadlock when using OpenMPI. I am using OpenMPI 1.10.0 with support for MPI_THREAD_MULTIPLE. It seems like ROMIO uses global locks in OpenMPI, which is a problem if multiple threads want to …
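The attached test program is not preserved in this archive. A hypothetical reconstruction in the same spirit might look like the following: each of two threads performs a collective write on its own file and its own communicator, which is legal under MPI 3.1, but deadlocks if the implementation serializes all I/O collectives behind one process-wide lock. File names and structure here are assumptions, not the original code; building and running it requires an MPI library with MPI_THREAD_MULTIPLE support.

```c
#include <mpi.h>
#include <pthread.h>

typedef struct { MPI_Comm comm; const char *name; } job_t;

/* Each thread: open its own file on its own communicator and do a
 * collective write. Every rank of job->comm must reach the collective. */
static void *write_collective(void *arg)
{
    job_t *job = arg;
    MPI_File fh;
    int rank;
    MPI_Comm_rank(job->comm, &rank);
    MPI_File_open(job->comm, job->name,
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    /* Collective write at a rank-dependent offset. With a global lock in
     * the glue, rank 0 may hold the lock for a.dat while rank 1 holds it
     * for b.dat: neither collective can complete -> deadlock. */
    MPI_File_write_at_all(fh, (MPI_Offset)(rank * sizeof(int)),
                          &rank, 1, MPI_INT, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);
    return NULL;
}

int main(int argc, char **argv)
{
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE)
        MPI_Abort(MPI_COMM_WORLD, 1);

    job_t a = { MPI_COMM_NULL, "a.dat" };
    job_t b = { MPI_COMM_NULL, "b.dat" };
    MPI_Comm_dup(MPI_COMM_WORLD, &a.comm);
    MPI_Comm_dup(MPI_COMM_WORLD, &b.comm);

    pthread_t t1, t2;
    pthread_create(&t1, NULL, write_collective, &a);
    pthread_create(&t2, NULL, write_collective, &b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    MPI_Comm_free(&a.comm);
    MPI_Comm_free(&b.comm);
    MPI_Finalize();
    return 0;
}
```

Run with two ranks (e.g. `mpirun -np 2 ./a.out`): per the thread above, this pattern completes under MPICH 3.2 and Intel MPI but hangs under OpenMPI 1.10.0 until the global lock in the ROMIO glue is removed.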