Dear SLURM experts,

I'm having trouble understanding an issue we have with Slurm
17.11.10. In one partition, "all", we have some nodes with
hyperthreading and some without, leading to 56 and 28 "cores",
respectively. In the same partition, we have some nodes with
256GB RAM and some with 128GB RAM.
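For reference, a heterogeneous partition like this might be described
along the following lines in slurm.conf. This is only a sketch; the
node names, node counts, and exact memory values are assumptions,
chosen to match the 56/28-core and 256/128GB split above:

# Hypothetical slurm.conf excerpt; names and counts are assumptions.
# 2 sockets x 14 cores with hyperthreading: 56 logical CPUs, 256GB RAM
NodeName=ht[01-10]   Sockets=2 CoresPerSocket=14 ThreadsPerCore=2 RealMemory=257000
# Same core count without hyperthreading: 28 CPUs, 128GB RAM
NodeName=noht[01-10] Sockets=2 CoresPerSocket=14 ThreadsPerCore=1 RealMemory=128000
PartitionName=all Nodes=ht[01-10],noht[01-10] Default=YES State=UP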
Dear SLURM experts,

we have a cluster of 56 nodes with 28 cores each. Is it possible
to limit the number of jobs of a certain name which concurrently
run on one node, without blocking the node for other jobs?
For example, when I do

for filename in runtimes/*/jobscript.sh; do
    sbatch -J myjobname "$filename"
done
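A mechanism in this general area, though coarser than what is asked
for here, is Slurm's singleton dependency: it serializes jobs that
share a job name and user, allowing only one of them to run at a time
cluster-wide, rather than limiting a count per node. A sketch,
reusing the placeholder name from above:

for filename in runtimes/*/jobscript.sh; do
    # singleton: only one job named 'myjobname' (per user) runs at a
    # time, across the whole cluster; this is not a per-node limit
    sbatch -J myjobname --dependency=singleton "$filename"
done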
Exporting the name inside the script:
- Changes only the environment variable. You can refer to this new
name (in code/child processes after the change) using
$SLURM_JOB_NAME.
- Doesn't update the Slurm controller job name.
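If the goal is for the controller (and, e.g., squeue) to show the new
name, one option, offered here as a sketch rather than as what the
thread itself recommended, is to have the job rename itself via
scontrol:

#!/bin/bash
# Sketch: rename this job on the Slurm controller from inside the
# job script; 'myname' is just a placeholder.
scontrol update JobId="$SLURM_JOB_ID" JobName='myname'
sleep 120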
Best regards,
Jessica Nettelblad, UPPMAX
On Thu, Mar 22, 2018 at 10:16 PM, Andreas Hilboll
<hilboll+sl...@uni-bremen.de> wrote:
Hi,
I'd like to be able to set the SLURM_JOB_NAME from within the
script I'm submitting to `sbatch`. So, e.g., with the script
`myscript.sh`,
#!/bin/bash
export SLURM_JOB_NAME='myname'
sleep 120
and then `sbatch myscript.sh`, I'd like the job's name to be
'myname'. Is this somehow possible?