On Fri, 23 Nov 2018 09:17:00 +0100
Lothar Brendel wrote:
[...]
> looking into orte/mca/ras/slurm/ras_slurm_module.c, I find that while
> orte_ras_slurm_allocate() reads the value of SLURM_CPUS_PER_TASK into its
> local variable cpus_per_task, it doesn't use it anywhere. Rather, the number
> of [...]
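(Side note for anyone testing this themselves: Slurm only exports SLURM_CPUS_PER_TASK when -c/--cpus-per-task is given. A quick, if crude, way to see what actually ends up in the tasks' environment, assuming you can run a small throwaway job, is

  srun -n 2 -c 4 env | grep '^SLURM_CPUS'

which should show, among other variables, SLURM_CPUS_PER_TASK=4 once per task.)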
Couple of comments. Your original cmd line:
>> srun -n 2 mpirun MPI-hellow
tells srun to launch two copies of mpirun, each of which is to run as many
processes as there are slots assigned to the allocation. srun will get an
allocation of two slots, and so you’ll get two concurrent MPI jobs, each [...]
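In other words, if mpirun is to do the launching, start it once inside the allocation and let it fill the slots. A minimal sketch (the binary name and slot count are only placeholders):

  salloc -n 2             # obtain an allocation with two slots
  mpirun ./MPI-hellow     # one mpirun; it starts one process per slot

or, from a batch script, something like sbatch -n 2 --wrap "mpirun ./MPI-hellow".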
Lothar,
it seems you did not configure Open MPI with --with-pmi=
If SLURM was built with PMIx support, then another option is to use that.
First, srun --mpi=list will show you the list of available MPI
modules, and then you could
srun --mpi=pmix_v2 ... MPI_Hellow
If you believe that should be the [...]
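For concreteness, a rough sketch of both routes (install paths and the plugin version are placeholders, check your own site):

  # route 1: build Open MPI against Slurm's PMI library
  ./configure --with-pmi=/usr/local/slurm ...

  # route 2: if Slurm itself was built with PMIx, launch directly with srun
  srun --mpi=list                        # lists the available MPI plugins
  srun --mpi=pmix_v2 -n 2 ./MPI_Hellow   # direct launch, no mpirun involved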
Hi guys,
I've always been somewhat at a loss regarding slurm's idea about tasks vs.
jobs. That didn't cause any problems, though, until moving to OpenMPI 2 (2.0.2
that is, with slurm 16.05.9).
Running http://mpitutorial.com/tutorials/mpi-hello-world as an example with just
srun -n 2 MP[...]
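For reference, reproducing this boils down to compiling the tutorial's C source and launching it in the allocation, roughly (file and binary names are just examples):

  mpicc mpi_hello_world.c -o MPI-hellow
  srun -n 2 ./MPI-hellow

As the replies above explain, such a direct srun launch only yields a single two-rank job when Open MPI can reach Slurm's PMI/PMIx support; the usual symptom of a missing PMI link is that every task reports itself as rank 0 out of 1.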