Sebastian,
at first glance, the global lock in the romio glue is not necessary.
feel free to give the attached patch a try
(it works with your example, but I have done no further testing).
Cheers,
Gilles
On 3/25/2016 9:26 AM, Gilles Gouaillardet wrote:
Sebastian,
thanks for the info.
bottom line, the global lock is in the OpenMPI glue for ROMIO.
Sebastian,
thanks for the info.
bottom line, the global lock is in the OpenMPI glue for ROMIO.
I will check what kind of locking (if any) is done in mpich
Cheers,
Gilles
On 3/24/2016 11:30 PM, Sebastian Rettenberger wrote:
Hi,
I tested this on my desktop machine, that is, one node with two tasks.
> On Mar 24, 2016, at 4:51 PM, Gilles Gouaillardet wrote:
>
> Jeff,
>
> from mpi 3.1 page 217
>
>> Finally, in multithreaded implementations, one can have more than one, concurrently
>> executing, collective communication call at a process. In these situations,
>> it is the user's responsibility to ensure that the same communicator is not used
>> concurrently by two different collective communication calls at the same process.
Jeff,
from mpi 3.1 page 217
Finally, in multithreaded implementations, one can have more than one, concurrently
executing, collective communication call at a process. In these situations, it is
the user's responsibility to ensure that the same communicator is not used
concurrently by two different collective communication calls at the same process.
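To make the requirement in the quoted passage concrete, here is a minimal sketch (not taken from this thread; the thread count and the MPI_Allreduce call are chosen purely for illustration): each thread works on its own duplicate of MPI_COMM_WORLD, so no two concurrently executing collective calls ever share a communicator.

/* Sketch only: give every thread a private duplicate of MPI_COMM_WORLD
 * so that concurrent collective calls never share a communicator,
 * as the MPI text quoted above requires. */
#include <mpi.h>
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 2

static void *worker(void *arg)
{
    MPI_Comm comm = *(MPI_Comm *)arg;   /* this thread's private communicator */
    int rank, sum, one = 1;
    MPI_Comm_rank(comm, &rank);
    /* Collective call on a communicator no other thread uses concurrently. */
    MPI_Allreduce(&one, &sum, 1, MPI_INT, MPI_SUM, comm);
    printf("rank %d: sum = %d\n", rank, sum);
    return NULL;
}

int main(int argc, char **argv)
{
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) {
        fprintf(stderr, "MPI_THREAD_MULTIPLE not available\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    MPI_Comm comms[NTHREADS];
    pthread_t threads[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        MPI_Comm_dup(MPI_COMM_WORLD, &comms[i]);   /* same order on every rank */
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&threads[i], NULL, worker, &comms[i]);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(threads[i], NULL);
    for (int i = 0; i < NTHREADS; i++)
        MPI_Comm_free(&comms[i]);

    MPI_Finalize();
    return 0;
}

Note that the MPI_Comm_dup calls are collective themselves, so they are issued from the main thread in the same order on every rank before the worker threads start; only the per-thread collectives run concurrently.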
On Thursday, March 24, 2016, Sebastian Rettenberger wrote:
> Hi,
>
> I tried to run the attached program with OpenMPI. It works well with MPICH
> and Intel MPI but I get a deadlock when using OpenMPI.
> I am using OpenMPI 1.10.0 with support for MPI_THREAD_MULTIPLE.
>
> It seems like ROMIO uses global locks in OpenMPI which is a problem if
> multiple threads want to
Hi Elie
Besides Gilles' and Thomas' suggestions:
1) Do you have any file system in your cluster head node that is an
NFS export, and presumably mounted on the compute nodes?
If you do, that would be the best place to install the Intel compiler.
This would make it available on the compute nodes.
Hi,
I tested this on my desktop machine, that is, one node with two tasks.
The deadlock appears on the local file system and on the NFS mount.
The MPICH version I tested was 3.2.
However, as far as I know, locking is part of the MPI library and not ROMIO.
Best regards,
Sebastian
On 03/24/2016 03:19 PM, Gilles Gouaillardet wrote:
Sebastian,
in openmpi 1.10, the default io component is romio from mpich 3.0.4.
how many tasks, how many nodes, and which file system are you running on?
Cheers,
Gilles
On Thursday, March 24, 2016, Sebastian Rettenberger wrote:
> Hi,
>
> I tried to run the attached program with OpenMPI. It works well with MPICH
> and Intel MPI but I get a deadlock when using OpenMPI.
Hi,
I tried to run the attached program with OpenMPI. It works well with
MPICH and Intel MPI but I get a deadlock when using OpenMPI.
I am using OpenMPI 1.10.0 with support for MPI_THREAD_MULTIPLE.
It seems like ROMIO uses global locks in OpenMPI which is a problem if
multiple threads want to
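For reference, here is a hypothetical sketch of the kind of reproducer being described (the actual attached program is not shown in the archive; the file names, offsets, and the choice of MPI_File_write_at_all are assumptions): two threads, each with its own duplicated communicator, each performing collective MPI-IO.

/* Hypothetical reconstruction, not the attached test program: two threads,
 * each with a private communicator, each doing collective MPI-IO.
 * Per the reports in this thread, this pattern completes with MPICH and
 * Intel MPI but deadlocks with OpenMPI 1.10.0. */
#include <mpi.h>
#include <pthread.h>

struct targ { MPI_Comm comm; const char *path; };

static void *do_io(void *p)
{
    struct targ *t = p;
    MPI_File fh;
    int rank, val;
    MPI_Comm_rank(t->comm, &rank);
    val = rank;
    /* Collective file operations on this thread's private communicator. */
    MPI_File_open(t->comm, t->path, MPI_MODE_CREATE | MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &fh);
    MPI_File_write_at_all(fh, (MPI_Offset)(rank * sizeof(int)), &val, 1,
                          MPI_INT, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);
    return NULL;
}

int main(int argc, char **argv)
{
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE)
        MPI_Abort(MPI_COMM_WORLD, 1);

    struct targ a = { MPI_COMM_NULL, "file_a.dat" };   /* file names made up */
    struct targ b = { MPI_COMM_NULL, "file_b.dat" };
    MPI_Comm_dup(MPI_COMM_WORLD, &a.comm);
    MPI_Comm_dup(MPI_COMM_WORLD, &b.comm);

    pthread_t t1, t2;
    pthread_create(&t1, NULL, do_io, &a);
    pthread_create(&t2, NULL, do_io, &b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    MPI_Comm_free(&a.comm);
    MPI_Comm_free(&b.comm);
    MPI_Finalize();
    return 0;
}

Each thread uses a distinct communicator and a distinct file, so the program is valid under MPI_THREAD_MULTIPLE; the deadlock reported above is therefore attributed to the global lock in the OpenMPI glue for ROMIO rather than to a usage error.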
On 3/24/2016 12:01 AM, Gilles Gouaillardet wrote:
> Elio,
>
> usually, /opt is a local filesystem, so it is possible /opt/intel is
> only available on your login nodes.
>
> your best option is to ask your sysadmin where the mkl libs are on the
> compute nodes, and/or how to use mkl in your jobs.
Hello,
On 03/24/2016 03:22 AM, Gilles Gouaillardet wrote:
it seems /opt/intel/composer_xe_2013_sp1/bin/compilervars.sh is only available
on your login/frontend nodes,
but not on your compute nodes.
you might be luckier with
/opt/intel/mkl/bin/mklvars.sh
another option is to
ldd /home/emoujaes/
Elio,
usually, /opt is a local filesystem, so it is possible /opt/intel is
only available on your login nodes.
your best option is to ask your sysadmin where the mkl libs are on the
compute nodes, and/or how to use mkl in your jobs.
feel free to submit a dumb pbs script
ls -l /opt
ls -l /op