Re: [OMPI users] file/process write speed is not scalable

2020-04-14 Thread Patrick Bégou via users
Hi David, could you specify which version of OpenMPI you are using? I've also had some parallel I/O trouble with one code but have not investigated yet. Thanks, Patrick. On 13/04/2020 at 17:11, Dong-In Kang via users wrote: > Thank you for your suggestion. > I am more concerned about the poor

Re: [OMPI users] file/process write speed is not scalable

2020-04-14 Thread Dong-In Kang via users
I'm using OpenMPI v4.0.2. Is your problem similar to mine? Thanks, David. On Tue, Apr 14, 2020 at 7:33 AM Patrick Bégou via users <users@lists.open-mpi.org> wrote: > Hi David, > could you specify which version of OpenMPI you are using? > I've also had some parallel I/O trouble with one code but

Re: [OMPI users] Meaning of mpiexec error flags

2020-04-14 Thread Ralph Castain via users
Then those flags are correct. I suspect mpirun is executing on n006, yes? "Location verified" just means that the daemon of rank N reported back from the node we expected it to be on; Slurm and Cray sometimes renumber the ranks. Torque doesn't, so you should never see a problem. Since mp

Re: [OMPI users] Meaning of mpiexec error flags

2020-04-14 Thread Mccall, Kurt E. (MSFC-EV41) via users
Darn, I was hoping the flags would give a clue to the malfunction, which I've been trying to solve for weeks. MPI_Comm_spawn() correctly spawns a worker on the node mpirun is executing on, but on other nodes it fails with the following: There are no allocated resources for the application:
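That error usually means the spawn target was not part of mpirun's allocation. A hedged sketch of one way to make remote slots available to MPI_Comm_spawn up front (hostnames, slot counts, and the program name are hypothetical, not from the message):

```shell
# Reserve slots on both nodes at launch; only one is used by the
# manager, leaving the rest for MPI_Comm_spawn to place workers on.
mpirun -np 1 --host n006:2,n007:2 ./manager
```

Inside the manager, the documented "host" info key to MPI_Comm_spawn can then direct workers to a specific reserved node.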

[OMPI users] Hwloc library problem

2020-04-14 Thread フォンスポール J via users
I am attempting to build the latest stable version of openmpi (openmpi-4.0.3) on Mac OS 10.15.4 using the latest Intel compilers ifort, icc, icpc (19.1.1.216 20200306). I am using the configuration ./configure --prefix=/opt/openmpi CC=icc CXX=icpc F77=ifort FC=ifort --with-hwloc=internal --with

Re: [OMPI users] Hwloc library problem

2020-04-14 Thread Gilles Gouaillardet via users
Paul, this issue is likely the one already reported at https://github.com/open-mpi/ompi/issues/7615. Several workarounds are documented there; feel free to try some of them and report back (either on GitHub or on this mailing list). Cheers, Gilles. On Tue, Apr 14, 2020 at 11:18 PM フォンスポール J via users

[OMPI users] Inquiry about pml layer

2020-04-14 Thread Arturo Fernandez via users
Hello, I'm using CUDA-aware OMPI v4.0.3 with UCX to run some apps. Most of them have worked seamlessly, but one breaks and returns the error: memtype_cache.c:299 UCX ERROR failed to set UCM memtype event handler: Unsupported operation
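UCX takes most of its tuning through environment variables. A hedged sketch of one commonly suggested mitigation for UCM memtype event-handler failures (whether it applies here depends on the UCX build; the application name is hypothetical):

```shell
# Disable UCX's memory-type cache so it does not need to install
# UCM memory event hooks at startup.
export UCX_MEMTYPE_CACHE=n
mpirun -np 2 ./cuda_app
```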