> I searched the slurm.conf documentation, the mailing list and also the
> changelog, but found no reference to a matching parameter.
> Do any of you know this behavior and how to change it?

Hi,
This was an annoying change:
22.05.x RELEASE_NOTES:
-- srun will no longer read in SLURM_CPUS_PER_TASK. This means you will
   implicitly have to specify --cpus-per-task on your srun calls, or set the
   new SRUN_CPUS_PER_TASK env var to accomplish the same thing.
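
In practice that means repeating the value on every srun call, or exporting
the new variable once. A minimal sketch of both workarounds (my example;
./my_app is a placeholder):

  # inside an allocation created with e.g. salloc --cpus-per-task=24
  srun --cpus-per-task=$SLURM_CPUS_PER_TASK ./my_app

  # or set the new 22.05 variable once per script/session:
  export SRUN_CPUS_PER_TASK=$SLURM_CPUS_PER_TASK
  srun ./my_app
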
Dear all,
we are currently seeing a change in the default behavior of job steps.
On our old cluster (Slurm 20.11.9) a job step takes all the resources of my
allocation:
rotscher@tauruslogin5:~> salloc --partition=interactive --nodes=1 --ntasks=1
--cpus-per-task=24 --hint=nomultithread
salloc: Pending job
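
A quick way to make the difference visible (my addition, not part of the
original report) is to print the step's CPU affinity from inside the
allocation:

  srun grep Cpus_allowed_list /proc/self/status

With the salloc line above, 20.11 should list all 24 CPUs, while on 22.05 the
step is cut down to a single CPU unless --cpus-per-task is repeated on srun.
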
A follow-up:
The loophole I mentioned in my previous message concerns the documentation of
the "Job Submit Plugin API":
https://slurm.schedmd.com/job_submit_plugins.html
The documentation for `job_submit` claims "[t]his function is called by the
slurmctld daemon with the job submission parameters supplied by the salloc,
sbatch or srun command."
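
For anyone wanting to experiment with that hook: enabling the Lua variant is a
single slurm.conf line, with the script living next to slurm.conf (standard
setup, not specific to this thread):

  # slurm.conf
  JobSubmitPlugins=lua
  # slurmctld then calls job_submit() on every salloc/sbatch/srun submission
  # and job_modify() on later modifications, both defined in job_submit.lua
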
We have a set of idiosyncratic requirements imposed on us:
1. There are on the order of 10^3 different budget codes. Maybe even 10^4 once
this thing gets cooking. That list will change a little every day, and may
change a lot at certain times of the year (see the sacctmgr sketch after this
list).
2. There are on the order of 10^2 di[...]
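
Not an answer, but for the account-sync part a nightly sacctmgr pass is the
usual approach. A minimal sketch, assuming the budget codes arrive as one name
per line in a file called budget_codes.txt (hypothetical name):

  # add every budget code that slurmdbd does not know yet
  sacctmgr -n -P show account format=Account > known_accounts.txt
  while read -r code; do
    grep -qx "$code" known_accounts.txt || sacctmgr -i add account "$code"
  done < budget_codes.txt
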
Hi,
Has anyone come up with a good way of restricting the total number of
individual jobs while still allowing more jobs to be submitted with job
arrays?
More and more of our users seem to think it is a good idea to write a
program that loops over 'sbatch' in order to start multiple jobs w[...]
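
The pattern we try to point such users to instead (generic sketch, file names
invented) is a single array with a throttle:

  #!/bin/bash
  #SBATCH --array=1-100%10   # 100 tasks, at most 10 running at once
  #SBATCH --ntasks=1

  # each array task picks its own input via SLURM_ARRAY_TASK_ID
  ./my_program input_${SLURM_ARRAY_TASK_ID}.dat

The %10 throttle also takes pressure off the scheduler, which is usually the
reason for limiting individual job counts in the first place.
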
Thank you for the responses.
In response to some of the suggestions, I would like to provide further
details on my specific use case. I am currently exploring the concept of
malleable jobs, which can adapt their computing resources during runtime.
To tackle the M[...]
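
For others searching the archives: the closest in-tree mechanism I am aware of
is the grow/shrink support described in the Slurm FAQ. Roughly (job id
invented, details in the FAQ):

  # shrink a running job to two nodes, releasing the rest;
  # Slurm writes a resize script to update the job's environment variables
  scontrol update JobId=12345 NumNodes=2
  # growing works the other way round, via a helper job submitted with
  # sbatch --dependency=expand:12345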