Quoting gareth.willi...@csiro.au:
Thanks Moe,

The document and advice help and I can see options, but what is best is still unclear, so I'll ask two more questions.

1) Is using sbatch with scripts, and srun within those scripts, 'best practice' (or 'a' best practice)? Or are there sites that just use srun? Is there specific documentation on running srun nested in sbatch? The mc_support.html page only refers to srun options.

There are sites that use mostly sbatch, and others that use mostly salloc and/or srun. I would note that for longer-running jobs, jobs submitted using sbatch can be requeued and restarted, while jobs submitted by salloc/srun cannot be.

The options for srun, sbatch, and salloc are almost identical with respect to specifying a job's allocation requirements.
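
To make the srun-nested-in-sbatch pattern concrete, a minimal batch script might look like this (a sketch only; the script and application names are placeholders):

$ cat my_job.bash
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=20
#SBATCH --time=01:00:00
# each srun launches a job step inside the allocation created by sbatch
srun --ntasks=40 ./my_mpi_app

$ sbatch my_job.bash

The #SBATCH lines are read by sbatch as if they were command line options, and srun inherits the allocation, so the resource options do not need to be repeated on each srun.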


2) We currently have a few unrelated usage patterns where jobs request multiple nodes but only some of the cores (perhaps to match the jobs they ran on our previous cluster configuration). How would you deal with that case, where --exclusive is not necessarily appropriate? A big stick might be an option (along with advice to use whole nodes), though the users are in different cities, so it might have to be a virtual stick.


Perhaps use the salloc/sbatch/srun options --cpus-per-task and/or --ntasks-per-node.

Why do they need only a few cores, but multiple nodes?
If this is being done to get all of the memory on a node, perhaps your system should be configured to allocate and manage memory. Some relevant slurm.conf parameters are SelectTypeParameters=CR_Core_Memory, MaxMemPerCPU=# and DefMemPerCPU=#. See the slurm.conf man page for more information:
http://slurm.schedmd.com/slurm.conf.html
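
For example (a sketch only; the per-CPU limits are made-up numbers that would need to match your nodes' memory):

# slurm.conf fragment
SelectType=select/cons_res
SelectTypeParameters=CR_Core_Memory
DefMemPerCPU=3000    # default MB per allocated CPU
MaxMemPerCPU=6400    # upper limit in MB per allocated CPU

With memory tracked as a resource, a job that really needs a lot of memory can request it directly, e.g.

$ sbatch --ntasks=4 --mem-per-cpu=6000 my.bash

instead of requesting extra nodes just to get at their memory.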

Gareth

BTW, --ntasks-per-node=1 was not needed in your advice as it is the default. However, in that case extra options were needed for srun to use all the cores.

I know that, but wanted to provide you with a more general solution.


-----Original Message-----
From: Moe Jette [mailto:je...@schedmd.com]
Sent: Tuesday, 3 March 2015 3:42 AM
To: slurm-dev
Subject: [slurm-dev] Re: mixing mpi and per node tasks


Use the "--exclusive" option to always get whole node allocations:

$ sbatch --exclusive -N 3 my.bash

I would use the "--ntasks-per-node=1" option to control the task count
per node:

srun --ntasks-per-node=1 my.app

I would also recommend this document:
http://slurm.schedmd.com/mc_support.html
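
Putting the two together for the pre/post-processing case, the batch script itself might look roughly like this (a sketch; the application names are placeholders, and 20 matches your cores per node):

$ cat my.bash
#!/bin/bash
# one pre-processing task per node
srun --ntasks-per-node=1 my_pre.app
# MPI step using every core in the allocation
srun --ntasks-per-node=20 my_mpi.app
# one post-processing task per node
srun --ntasks-per-node=1 my_post.app

$ sbatch --exclusive -N 3 my.bash

For the hybrid MPI/OpenMP case, the same pattern with srun --ntasks-per-node=2 --cpus-per-task=10 (and OMP_NUM_THREADS=10) should give one multi-threaded task per socket.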

Quoting gareth.willi...@csiro.au:

> We have a cluster with dual socket nodes with 10-core cpus (ht off)
> and we share nodes with SelectType=select/cons_res.  Before (or
> after) running an MPI task, I'd like to run some pre (and post)
> processing tasks, one per node but am having trouble finding
> documentation for how to do this.  I was expecting to submit a job
> with sbatch with --nodes=N --tasks-per-node=20, where N is an integer,
> to get multiple whole nodes, then run srun --tasks-per-node=1 for the
> per-node tasks, but this does not work (I get one task for each core).
>
> I'd also like any solution to work with hybrid mpi/openmp with one
> openmp task per node or per socket.
>
> Thanks,
>
> Gareth



--
Morris "Moe" Jette
CTO, SchedMD LLC
Commercial Slurm Development and Support
