I also have this problem on servers I'm benchmarking at DELL's lab with
OpenMPI-4.0.3. I've tried a new build of OpenMPI with "--with-pmi2". No
change.
In the end my workaround in the Slurm script was to launch my code with
mpirun. As mpirun was only finding one slot per node, I used
"--oversubscribe".
It is entirely possible that the PMI2 support in OMPI v4 is broken - I doubt it
is used or tested very much as pretty much everyone has moved to PMIx. In fact,
we completely dropped PMI-1 and PMI-2 from OMPI v5 for that reason.
I would suggest building Slurm with PMIx v3.1.5
(https://github.com
Claire,
> Is it possible to use the one-sided communication without combining
> it with synchronization calls?
What exactly do you mean by "synchronization calls"? MPI_Win_fence is
indeed synchronizing (basically a flush plus a barrier), but MPI_Win_lock
(and the passive target synchronization interface in general) only
requires calls on the origin side: the origin opens an access epoch with
MPI_Win_lock, issues its RMA operations, and completes them with
MPI_Win_unlock, without the target having to make a matching call.
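
Here is a minimal sketch of what I mean (not taken from your code; the
rank roles and the single-int window are just illustrative assumptions):
rank 0 puts a value into rank 1's window using only
MPI_Win_lock/MPI_Put/MPI_Win_unlock, and neither rank ever calls
MPI_Win_fence.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, buf = 0;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Expose one int per rank as window memory. */
    MPI_Win_create(&buf, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    if (rank == 0) {
        int value = 42;
        /* Passive-target epoch: lock rank 1's window, put, unlock.
           Rank 1 makes no matching synchronization call here. */
        MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 1, 0, win);
        MPI_Put(&value, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
        MPI_Win_unlock(1, win);  /* completes the put at origin and target */
    }

    /* The barrier only orders the read below after the put has completed;
       it is not part of the RMA synchronization itself. */
    MPI_Barrier(MPI_COMM_WORLD);

    if (rank == 1) {
        /* Locking our own window synchronizes the public and private
           copies before we read the value locally. */
        MPI_Win_lock(MPI_LOCK_SHARED, 1, 0, win);
        printf("rank 1 sees buf = %d\n", buf);
        MPI_Win_unlock(1, win);
    }

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}

Run it with at least two ranks. If you want to keep a single epoch open
and complete individual operations as you go, MPI_Win_lock_all together
with MPI_Win_flush does that, again without any call on the target side.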