thank you Michael for the feedback. My scenario is the following: I want to run a job array with (let's say) 30 jobs running at a time, so I set the Slurm directives as follows:

#SBATCH --array=1-104%30
#SBATCH --ntasks=1

however, only 4 jobs within the array are launched at a time, due to the maximum number of jobs allowed by the Slurm configuration (4). As a workaround to this issue, the sysadmin suggested that I request the resources in a single job and then distribute the assigned resources across a set of single-CPU tasks. I believe that with the solution you mentioned, only 30 (out of the 104) jobs will be finished?
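If it helps, here is a rough sketch of what I understand the sysadmin's suggestion to be: a single 30-task job that loops over all 104 work items, so none of them are left unfinished. The per-item command `./my_task` is a placeholder for whatever each array element would have run, and `srun --exact` is an assumption (older Slurm versions use `srun --exclusive` on job steps for the same effect):

```shell
#!/bin/bash
#SBATCH --ntasks=30

# Work through all 104 items using the job's 30 allocated tasks.
for i in $(seq 1 104); do
    # Each step consumes one task; --exact keeps steps from
    # sharing CPUs (use --exclusive on older Slurm releases).
    srun --ntasks=1 --exact ./my_task "$i" &

    # Throttle: never keep more background steps running
    # than tasks allocated to the job.
    while (( $(jobs -rp | wc -l) >= SLURM_NTASKS )); do
        wait -n   # requires bash 4.3 or later
    done
done
wait   # let the remaining steps finish
```

This way the job holds 30 CPUs for its whole lifetime, but all 104 items eventually run, which I think is what the throttled array `--array=1-104%30` would have done if the per-user job limit were not 4.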

thanks

Alfredo


El 19/12/2018 a las 11:15, Renfro, Michael escribió:
Literal job arrays are built into Slurm: 
https://slurm.schedmd.com/job_array.html

Alternatively, if you wanted to allocate a set of CPUs for a parallel task, and 
then run a set of single-CPU tasks in the same job, something like:

   #!/bin/bash
   #SBATCH --ntasks=30
   srun --ntasks=${SLURM_NTASKS} hostname

is one way of doing it. If that’s not what you’re looking for, some other 
details would be needed.

