Literal job arrays are built into Slurm: 
https://slurm.schedmd.com/job_array.html

Yes, and the best way to describe these is "job generators":
you submit one job, and it sits in the pending queue while the array
elements "bud" off the parent. Each array element is a full-fledged
job (full cost, all the accounting and setup/teardown overhead), just
without the submission overhead or the stuffing of the pending queue.
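
For reference, a minimal array submission might look like this; the
./task program and the 1-30 range are just placeholders, and each
element picks up its own index from SLURM_ARRAY_TASK_ID:

 #!/bin/bash
 #SBATCH --array=1-30
 # this script runs once per array element, each with its own index
 ./task ${SLURM_ARRAY_TASK_ID}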

I don't think this is what the OP is looking for.

Alternatively, if you want to allocate a set of CPUs for a parallel
task and then run a set of single-CPU tasks inside the same job,
something like this:

 #!/bin/bash
 #SBATCH --ntasks=30
 # one job step that launches 30 copies of hostname, one per task
 srun --ntasks=${SLURM_NTASKS} hostname

I think this is *exactly* what the OP is looking for! This runs each element of work as a step within a single allocation. You still get per-step accounting, but you are not bothering the scheduler during the job.
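
If the work items are distinct commands rather than one parallel
launch, the usual variant (a sketch; ./task is a hypothetical
per-item program) is to fire off single-task steps in the background
and wait for them:

 #!/bin/bash
 #SBATCH --ntasks=30
 # one single-task step per work item; --exclusive asks Slurm to give
 # each step its own CPU rather than overlapping the steps
 for i in $(seq 1 ${SLURM_NTASKS}); do
     srun --ntasks=1 --exclusive ./task ${i} &
 done
 wait   # block until every step has finished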

An alternative would be to run something like GNU Parallel within the job to dispatch the steps.
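
A minimal sketch of that, assuming GNU Parallel is installed on the
compute node; ./task and the count of 300 work items are
placeholders:

 #!/bin/bash
 #SBATCH --ntasks=30
 # GNU Parallel keeps $SLURM_NTASKS single-task steps in flight at once
 seq 1 300 | parallel -j ${SLURM_NTASKS} srun --ntasks=1 --exclusive ./task {}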

regards, mark hahn.
--
operator may differ from spokesperson.              h...@mcmaster.ca
