Hi,
thanks, the patch works for me. I will do some further tests and report
back if I find another problem.
Best regards,
Sebastian
On 03/25/2016 01:58 AM, Gilles Gouaillardet wrote:
Sebastian,
at first glance, the global lock in romio glue is not necessary.
feel free to give the attached patch a try
(it works with your example, but i did not test it further)
Cheers,
Gilles
On 3/25/2016 9:26 AM, Gilles Gouaillardet wrote:
Sebastian,
thanks for the info.
bottom line, the global lock is in the OpenMPI glue for ROMIO.
i will check what kind of locking (if any) is done in mpich
Cheers,
Gilles
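For illustration, the failure mode a per-process global lock can produce
looks roughly like the following C sketch; global_io_lock and
glue_write_all are made-up names, not OpenMPI's actual glue code:

    /* Schematic only: every collective file operation on a process is
     * funnelled through one mutex. */
    #include <mpi.h>
    #include <pthread.h>

    static pthread_mutex_t global_io_lock = PTHREAD_MUTEX_INITIALIZER;

    static void glue_write_all(MPI_File fh, const void *buf, int count)
    {
        pthread_mutex_lock(&global_io_lock);   /* one lock per process */
        /* Blocks until every rank enters the matching collective on fh. */
        MPI_File_write_all(fh, buf, count, MPI_INT, MPI_STATUS_IGNORE);
        pthread_mutex_unlock(&global_io_lock);
    }

If rank 0's thread A enters the collective for file 1 while rank 1's
thread B enters the collective for file 2, each rank is stuck inside a
collective the other rank never reaches (its peer thread is still waiting
for the process-wide lock), which matches the deadlock reported in this
thread.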
On 3/24/2016 11:30 PM, Sebastian Rettenberger wrote:
On Mar 24, 2016, at 4:51 PM, Gilles Gouaillardet wrote:
Jeff,
from MPI 3.1, page 217:
"Finally, in multithreaded implementations, one can have more than one,
concurrently executing, collective communication call at a process. In
these situations, it is the user's responsibility to ensure that the same
communicator is not used concurrently by two different collective
communication calls at the same process."
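In practice that requirement can be met by giving each thread its own
communicator, for example via MPI_Comm_dup; a minimal sketch (illustrative
only, not code from this thread):

    #include <mpi.h>

    static MPI_Comm comm_for_thread[2];

    static void setup_communicators(void)
    {
        /* MPI_Comm_dup is itself collective, so call it from a single
         * thread, in the same order on every rank, before workers start. */
        MPI_Comm_dup(MPI_COMM_WORLD, &comm_for_thread[0]);
        MPI_Comm_dup(MPI_COMM_WORLD, &comm_for_thread[1]);
    }

Each worker thread then passes only its own comm_for_thread[i] to
MPI_File_open and later collectives, so the same communicator is never
used concurrently by two collective calls at a process.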
Hi,
I tested this on my desktop machine. Thus, one node, two tasks.
The deadlock appears on the local file system and on the NFS mount.
The MPICH version I tested was 3.2.
However, as far as I know, locking is part of the MPI library and not ROMIO.
Best regards,
Sebastian
On 03/24/2016 03:19 PM, Gilles Gouaillardet wrote:
Sebastian,
in openmpi 1.10, the default io component is romio from mpich 3.0.4.
how many tasks, how many nodes, and which file system are you running on?
Cheers,
Gilles
On Thursday, March 24, 2016, Sebastian Rettenberger wrote:
Hi,
I tried to run the attached program with OpenMPI. It works well with
MPICH and Intel MPI but I get a deadlock when using OpenMPI.
I am using OpenMPI 1.10.0 with support for MPI_THREAD_MULTIPLE.
It seems like ROMIO uses global locks in OpenMPI, which is a problem if
multiple threads want to perform MPI I/O at the same time.
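The attached program is not reproduced here; a hypothetical minimal
reproducer along the lines described above (MPI_THREAD_MULTIPLE, two
threads per rank, collective writes to two different files on duplicated
communicators) could look like this:

    #include <mpi.h>
    #include <pthread.h>
    #include <stdio.h>

    typedef struct { int id; MPI_Comm comm; } task_t;

    static void *writer(void *arg)
    {
        task_t *t = arg;
        int rank, val = t->id;
        char name[32];
        MPI_File fh;

        MPI_Comm_rank(t->comm, &rank);
        snprintf(name, sizeof(name), "out%d.dat", t->id);
        MPI_File_open(t->comm, name, MPI_MODE_CREATE | MPI_MODE_WRONLY,
                      MPI_INFO_NULL, &fh);
        /* Collective write: a process-wide lock in the I/O layer can make
         * the two threads deadlock against each other across ranks here. */
        MPI_File_write_at_all(fh, (MPI_Offset)rank * sizeof(int), &val, 1,
                              MPI_INT, MPI_STATUS_IGNORE);
        MPI_File_close(&fh);
        return NULL;
    }

    int main(int argc, char **argv)
    {
        int provided;
        pthread_t thr[2];
        task_t task[2] = { { 0, MPI_COMM_NULL }, { 1, MPI_COMM_NULL } };

        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        /* One communicator per thread, so no communicator is used
         * concurrently by two collective calls (cf. MPI 3.1, p. 217). */
        MPI_Comm_dup(MPI_COMM_WORLD, &task[0].comm);
        MPI_Comm_dup(MPI_COMM_WORLD, &task[1].comm);

        for (int i = 0; i < 2; i++)
            pthread_create(&thr[i], NULL, writer, &task[i]);
        for (int i = 0; i < 2; i++)
            pthread_join(thr[i], NULL);

        MPI_Comm_free(&task[0].comm);
        MPI_Comm_free(&task[1].comm);
        MPI_Finalize();
        return 0;
    }

Built with mpicc -pthread, a program like this completes under MPICH and
Intel MPI but can hang on an OpenMPI build affected by the global lock,
which is the behaviour described in this thread.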