Hi David,
could you specify which version of OpenMPI you are using?
I also have some parallel I/O trouble with one code but have not
investigated it yet.
Thanks
Patrick
On 13/04/2020 at 17:11, Dong-In Kang via users wrote:
>
> Thank you for your suggestion.
> I am more concerned about the poor
I'm using OpenMPI v.4.0.2.
Is your problem similar to mine?
Thanks,
David
On Tue, Apr 14, 2020 at 7:33 AM Patrick Bégou via users <
users@lists.open-mpi.org> wrote:
> Hi David,
>
> could you specify which version of OpenMPI you are using?
> I also have some parallel I/O trouble with one code but
Then those flags are correct. I suspect mpirun is executing on n006, yes? The
"location verified" just means that the daemon of rank N reported back from the
node we expected it to be on - Slurm and Cray sometimes renumber the ranks.
Torque doesn't and so you should never see a problem. Since mp
Darn, I was hoping the flags would give a clue to the malfunction, which I've
been trying to solve for weeks. MPI_Comm_spawn() correctly spawns a worker on
the node mpirun is executing on, but on other nodes it says the following:
There are no allocated resources for the application:
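For reference, a minimal sketch of the kind of spawn call in question; the
worker executable ("./worker") and the hostname passed via the "host" info key
are placeholders, not values taken from this thread:

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Ask the runtime to place the spawned worker on a specific node.
     * "n007" is a placeholder hostname; the node must be part of the
     * current allocation for the spawn to succeed. */
    MPI_Info info;
    MPI_Info_create(&info);
    MPI_Info_set(info, "host", "n007");

    MPI_Comm intercomm;
    MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 1, info,
                   0, MPI_COMM_SELF, &intercomm, MPI_ERRCODES_IGNORE);

    MPI_Info_free(&info);
    MPI_Comm_disconnect(&intercomm);
    MPI_Finalize();
    return 0;
}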
I am attempting to build the latest stable version of openmpi (openmpi-4.0.3)
on macOS 10.15.4 using the latest Intel compilers ifort, icc, icpc (19.1.1.216
20200306). I am using the configuration
./configure --prefix=/opt/openmpi CC=icc CXX=icpc F77=ifort FC=ifort
--with-hwloc=internal --with
Paul,
this issue is likely the one already reported at
https://github.com/open-mpi/ompi/issues/7615
Several workarounds are documented; feel free to try some of them and
report back
(either on GitHub or this mailing list)
Cheers,
Gilles
On Tue, Apr 14, 2020 at 11:18 PM フォンスポール J via users
Hello,

I'm using CUDA-aware OMPI v4.0.3 with UCX to run some apps. Most of them have
worked seamlessly, but one breaks and returns the error:

memtype_cache.c:299 UCX ERROR failed to set UCM memtype event handler: Unsupported operation