Hello,

Brian Andrus via slurm-users
<slurm-users@lists.schedmd.com> writes:

> Unless you are using cgroups and constraints, there is no limit
> imposed.

[...]

> So your request did not exceed what slurm sees as available (1 cpu
> using 4GB), so it is happy to let your script run. I suspect if you
> look at the usage, you will see that 1 cpu spiked high while the
> others did nothing.

Thanks for the input.

I'm aware that without cgroups and constraints no real limit is
imposed, but what I don't understand is why the first three submissions
below get stopped by sbatch while the last one happily goes through.
All four request the same total (76 CPUs × 4000 MB = 304000 MB); they
differ only in how tasks and CPUs are laid out and in whether -N 1 is
given.
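
(For clarity, by "cgroups and constraints" I mean an enforcement setup
along these lines, i.e. the task/cgroup plugin plus cgroup.conf
constraints; this is just a sketch of the usual settings, not
necessarily our exact config.)

,----
| # slurm.conf
| TaskPlugin=task/cgroup
|
| # cgroup.conf
| ConstrainCores=yes
| ConstrainRAMSpace=yes
`----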

>> ,----
>> | $ sbatch -N 1 -n 1 -c 76 -p short --mem-per-cpu=4000M test.batch
>> | sbatch: error: Batch job submission failed: Memory required by task is not available
>> |
>> | $ sbatch -N 1 -n 76 -c 1 -p short --mem-per-cpu=4000M test.batch
>> | sbatch: error: Batch job submission failed: Memory required by task is not available
>> |
>> | $ sbatch -n 1 -c 76 -p short --mem-per-cpu=4000M test.batch
>> | sbatch: error: Batch job submission failed: Memory required by task is not available
>> `----

>> ,----
>> | $ sbatch -n 76 -c 1 -p short --mem-per-cpu=4000M test.batch
>> | Submitted batch job 133982
>> `----
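
And in case the node or partition limits matter for the answer, the
per-node CPU and memory figures, and any memory caps, that Slurm works
with for the "short" partition should be visible with something like:

,----
| $ sinfo -p short -o "%n %c %m"    # hostname, CPUs, memory (MB) per node
| $ scontrol show partition short   # DefMemPerCPU / MaxMemPerNode, etc.
`----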

Cheers,
-- 
Ángel de Vicente  
 Research Software Engineer (Supercomputing and BigData)
 Instituto de Astrofísica de Canarias (https://www.iac.es/en)

