The following (from what you posted earlier):
$ srun --mpi=list
srun: MPI types are...
srun: none
srun: pmix_v3
srun: pmi2
srun: openmpi
srun: pmix
would indicate that Slurm was built against a PMIx v3.x release. Using OMPI
v4.0.3 with pmix=internal should be just fine so long as you set
--mpi=pmix_v3 (or --mpi=pmix) when launching with srun.
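For example, a launch under srun would then look something like this (the
executable name and task counts below are just placeholders):

$ srun --mpi=pmix_v3 -N 2 -n 8 ./hello_mpi

Setting MpiDefault=pmix_v3 in slurm.conf makes that the default so the
--mpi flag can be omitted.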
No, and I fear that may be the problem. When we built OpenMPI, we did
--with-pmix=internal. Not sure how Slurm was built, since my coworker
built it.
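I guess something like this would show how each side was configured (the
grep patterns are just examples; exact output wording varies by version):

$ ompi_info | grep -i pmix              # PMIx components OMPI was built with
$ ompi_info | grep "Configure command"  # shows whether --with-pmix was passed
$ srun --mpi=list                       # PMI plugins Slurm was built with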
Prentice
On 4/28/20 2:07 AM, Daniel Letai via users wrote:
I know it's not supposed to matter, but have you tried building both
ompi and slurm against the same external PMIx?
Thanks for the suggestion. We are using an NFSRoot OS image on all the
nodes, so all the nodes have to be running the same version of OMPI.
On 4/27/20 10:58 AM, Riebs, Andy wrote:
Y’know, a quick check on versions and PATHs might be a good idea here.
I suggest something like
$ srun -N3 ompi_info
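To cover the PATH side of that check, maybe also something like this
(mpirun is just the obvious binary to look for; adjust for whatever you
actually launch with):

$ srun -N3 which mpirun
$ srun -N3 printenv PATH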
Jim,
You can
mpirun --use-hwthread-cpus ...
to have one slot = one hyperthread (default is slot = core)
Note you always have the opportunity to
mpirun --oversubscribe ...
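For example (the executable name and the 40-rank count are just
placeholders for your own program and hyperthread count):

$ mpirun --use-hwthread-cpus -np 40 ./a.out
$ mpirun --oversubscribe -np 40 ./a.out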
Cheers,
Gilles
----- Original Message -----
I've just compiled my own version of ompi on Ubuntu 20.04 linux for use
with R and Rmpi etc. It works OK, but the maximum number of slaves I can
get is the number of cores (20). I used to be able to get the number of
hyperthreaded slots (40). What is the most likely cause of my new problem,
and/or how do I fix it?
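In case it's relevant, here is how I'm counting cores vs. hyperthreads (a
quick lscpu check; the output will obviously differ per machine):

$ lscpu | grep -E '^CPU\(s\)|Thread|Core'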