Russell Jones wrote:
> I am struggling to figure out how to do this. Any tips?
Create a QoS with GrpJobs=1 and assign it to the partition?
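A minimal sketch of that approach, assuming accounting is enabled; the QoS name `maxonejob` and the partition definition are illustrative:

```
# Create a QoS that allows at most 1 running job in aggregate
sacctmgr -i add qos maxonejob
sacctmgr -i modify qos maxonejob set GrpJobs=1

# Attach it to the partition in slurm.conf (names are placeholders):
#   PartitionName=serialpart Nodes=node[1-3] QOS=maxonejob State=UP

# Pick up the slurm.conf change
scontrol reconfigure
```

GrpJobs on a partition QoS caps the total running jobs for everything submitted to that partition; further jobs queue until the running one finishes.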
Dear slurm community,
I am quite new to Slurm, but I have a small Slurm cluster with 3 compute nodes
running.
I can run simple jobs like `srun -N3 hostname`, and I am now trying to run an
MPI hello-world app. My issue is that the job hangs and then fails after a few
seconds.
# srun -N2 -n4 /scratch/hel
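When an MPI program hangs under srun, a common first check is whether srun and the MPI library agree on a process-management plugin; the binary name and the `pmix` choice below are assumptions, not a diagnosis:

```
# List the PMI plugins this srun build supports
srun --mpi=list

# If the MPI library was built against PMIx, request it explicitly
srun -N2 -n4 --mpi=pmix ./mpi_hello
```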
I think you could do this by clever use of a partition-level QoS, but I
don't see an obvious way of doing it.
-Paul Edmon-
On 3/22/2022 11:40 AM, Russell Jones wrote:
> Hi all,
> For various reasons, we need to limit a partition to being able to run
> max 1 job at a time. Not 1 job per user, but
Hi all,
Thanks for your comments and suggestions.
Using --exclusive does not solve my issue because I need ntasks to get set
by Slurm to the maximum possible.
What I want is to request a fixed number of nodes with --nodes=N and
--ntasks=MAX, so that Slurm provides 2 nodes and then sets ntasks to the
maximum possible.
Hi all,
For various reasons, we need to limit a partition to being able to run max
1 job at a time. Not 1 job per user, but 1 job total at a time, while
queuing any other jobs to run after this one is complete.
I am struggling to figure out how to do this. Any tips?
Thanks!
Requesting --exclusive and then using $SLURM_CPUS_ON_NODE to determine the
number of tasks or threads to use inside the job script would be my
recommendation.
--Troy
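A sketch of that recommendation as a job script; the application name is a placeholder:

```
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --exclusive

# SLURM_CPUS_ON_NODE holds the CPU count of the node running this script
echo "CPUs on this node: $SLURM_CPUS_ON_NODE"

# Size the task count from it (placeholder application)
srun --ntasks-per-node="$SLURM_CPUS_ON_NODE" ./my_app
```

Note this reads the count from the node the batch script lands on, which is where the heterogeneity caveat discussed in this thread comes in.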
-----Original Message-----
From: slurm-users On Behalf Of Tina Friedrich
Sent: Tuesday, March 22, 2022 10:43 AM
To:
You are putting the cart before the horse here. While you can get access
to all the nodes using --exclusive, when you request cores, you will not
know if you got more than you asked for. For example, you request 80 cores
and land on a 40-core and a 48-core node with exclusive access. You would
need to do some sort of discovery to find out how many cores you actually have.
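One way to do that discovery inside the job is to parse SLURM_JOB_CPUS_PER_NODE, which Slurm sets in a compressed form such as `40(x2),48`. A sketch (the helper name is mine):

```shell
# Sum the per-node CPU counts from SLURM_JOB_CPUS_PER_NODE, whose entries
# look like "40(x2)" (two nodes with 40 CPUs each) or a plain "48".
total_cpus() {
  printf '%s\n' "$1" | awk -F, '{
    total = 0
    for (i = 1; i <= NF; i++) {
      n = split($i, a, /\(x|\)/)       # "40(x2)" -> a[1]=40, a[2]=2
      total += (n >= 2) ? a[1] * a[2] : a[1]
    }
    print total
  }'
}

total_cpus "40(x2),48"   # prints 128
```

In a job script you would call it as `total_cpus "$SLURM_JOB_CPUS_PER_NODE"` and feed the result to `srun -n`.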
Hi Richard,
...what's wrong with using '--exclusive'? I mean if you're wanting all
cores on the node anyway, wouldn't asking for it exclusively be pretty
much the same thing?
Tina
On 22/03/2022 14:29, Richard Ems wrote:
> Hi all,
> I am looking for an option to use all cores when submitting to
Hi all,
I am looking for an option to use all cores when submitting to
heterogeneous nodes.
In this case I have 2 partitions:
part1: N1 nodes, each node has 40 cores
part2: N2 nodes, each node has 48 cores
I want to submit to both partitions, requesting a number of nodes, and then
have Slurm set --ntasks to the maximum possible.
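A hedged sketch of such a submission, using the partition names above; note this only requests the nodes exclusively, and whether the task count can then be derived automatically is exactly what the rest of this thread discusses:

```
#!/bin/bash
# Listing both partitions lets Slurm start the job in whichever can run it first
#SBATCH --partition=part1,part2
#SBATCH --nodes=2
#SBATCH --exclusive

srun hostname
```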