Hi guys,
I've always been somewhat at a loss regarding Slurm's idea of tasks vs.
jobs. That didn't cause any problems, though, until moving to Open MPI 2
(2.0.2, that is, with Slurm 16.05.9).
Running http://mpitutorial.com/tutorials/mpi-hello-world as an example with just
srun -n 2 MPI-hellow …
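(For context, the tutorial program in question is the standard MPI hello world; a minimal sketch in C, close to what the tutorial page shows:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);                       /* start the MPI runtime */

    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);   /* total number of ranks */

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);   /* rank of this process */

    char name[MPI_MAX_PROCESSOR_NAME];
    int name_len;
    MPI_Get_processor_name(name, &name_len);      /* host we are running on */

    printf("Hello world from processor %s, rank %d out of %d processors\n",
           name, world_rank, world_size);

    MPI_Finalize();                               /* shut the runtime down */
    return 0;
}

With a working launcher, both ranks should report "out of 2"; if every process instead reports rank 0 out of 1, each one came up as an independent singleton.)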
Lothar,
it seems you did not configure Open MPI with --with-pmi=<path to Slurm's PMI installation>.
If SLURM was built with PMIx support, then another option is to use that.
First, srun --mpi=list will show you the list of available MPI
modules, and then you could
srun --mpi=pmix_v2 ... MPI-hellow
If you believe that should be the …
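(To make the first option concrete: a minimal sketch of rebuilding Open MPI against Slurm's PMI library, assuming the PMI headers and libraries were installed under /usr — the paths and prefix here are illustrative, not prescriptive:

  # point Open MPI's configure at Slurm and at Slurm's PMI installation
  ./configure --prefix=/opt/openmpi-2.0.2 --with-slurm --with-pmi=/usr
  make -j && make install

Once Open MPI speaks PMI, srun can launch the ranks directly, e.g.

  srun -n 2 --mpi=pmi2 MPI-hellow
)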
Couple of comments. Your original cmd line:
>> srun -n 2 mpirun MPI-hellow
tells srun to launch two copies of mpirun, each of which is to run as many
processes as there are slots assigned to the allocation. srun will get an
allocation of two slots, and so you'll get two concurrent MPI jobs, each …
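(In other words, srun and mpirun are both launchers, and the quoted line nests one inside the other. The two usual patterns, sketched here with the binary name from the thread, would be either to let mpirun launch the ranks inside an allocation:

  salloc -n 2 mpirun MPI-hellow

or, given the PMI/PMIx support discussed above, to let srun launch them itself, without mpirun:

  srun -n 2 --mpi=pmix_v2 MPI-hellow
)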