Strictly speaking this is not Slurm versioning, it is OpenAPI versioning:
the move from 0.0.38 to 0.0.39 also dropped this particular endpoint.
You will notice that the same major Slurm version supports different API
versions.
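You can see this directly in the REST paths themselves. A minimal sketch,
assuming slurmrestd is listening on localhost:6820 with JWT auth
(AuthAltTypes=auth/jwt) configured; the host, port, and auth setup here
are my assumptions, not from the original report:

# The OpenAPI plugin version is embedded in the URL path, independent
# of the Slurm release, so one slurmrestd can serve several versions.
export $(scontrol token)    # prints SLURM_JWT=<token> for JWT auth
curl -s -H "X-SLURM-USER-NAME: $USER" \
     -H "X-SLURM-USER-TOKEN: $SLURM_JWT" \
     http://localhost:6820/slurm/v0.0.39/diag
# Swap v0.0.39 for v0.0.38 to see which endpoints each plugin exposes.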
On 28/08/2024 03:02:00, Chris Samuel via slurm-users wrote:
Hello,
Does anyone know why this is possible in Slurm:
--constraint="[rack1*2&rack2*4]"
and this is not:
--constraint="[rack1*2|rack2*4]"
?
Thank you.
Hello,
I have a cluster with four Intel nodes (node[01-04], Feature=intel) and four
AMD nodes (node[05-08], Feature=amd).
# job file
#!/bin/bash
#SBATCH --ntasks=3
#SBATCH --nodes=2,4
#SBATCH --constraint="[intel|amd]"
env | grep SLURM
# slurm.conf
PartitionName=DEFAULT MinNodes=1 MaxNodes=UNLIMITED
Hello,
What is meant here by "tracking"? What information are you looking to
gather and track?
I'd say the simplest answer is using sacct, but I am not sure how
federated/non-federated setups come into play while using it.
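As a minimal sketch of the kind of query I mean (the date range and
format fields below are placeholders, adjust to taste):

sacct -a -S 2024-08-01 -E 2024-08-31 \
      --format=JobID,User,Account,Partition,Elapsed,State
# sacct also takes -M/--clusters (and --federation) to widen the scope,
# though I have not tested how that behaves in a federated setup.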
David
On Tue, Aug 27, 2024 at 6:23 AM Di Bernardini, Fabio via slurm-users wrote:
Thanks. I've made that fix.
-Paul Edmon-
On 8/28/24 5:42 PM, Davide DelVento wrote:
Thanks everybody once again and especially Paul: your job_summary
script was exactly what I needed, served on a silver platter. I just
had to modify/customize the date range and change the following line (I can
Your --nodes line is incorrect:
-N, --nodes=<minnodes>[-maxnodes]|<size_string>
    Request that a minimum of <minnodes> nodes be allocated to this job. A
    maximum node count may also be specified with <maxnodes>.
Looks like it ignored that and used ntasks with ntasks-per-node as 1,
giving you 3 nodes. Check your
Hi,
On sbatch's manpage there is this example for <size_string>:
--nodes=1,5,9,13
so either one specifies <minnodes>[-maxnodes] OR <size_string>.
I checked the logs, and there are no reported errors about wrong or ignored
options.
MG
From: Brian Andrus via slurm-users
Sent: Thursday, Augu
It looks to me like you requested 3 tasks spread across 2 to 4 nodes.
Note that --nodes is not targeting nodes named 2 and 4; it is a count
of how many nodes to use. You only needed 3 tasks/CPUs, so that is what
you were allocated, and with 1 CPU per node you get 3 (of up to
4) nodes.
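If it helps, here is a quick sketch for confirming what was actually
granted; the job ID 12345 is a placeholder:

scontrol show job 12345 | grep -oE '(NumNodes|NumTasks|NodeList)=[^ ]+'
# or from inside the job script itself:
echo "nodes=$SLURM_JOB_NUM_NODES list=$SLURM_JOB_NODELIST tasks=$SLURM_NTASKS"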
I'm sorry, but I still don't get it.
Isn't --nodes=2,4 telling Slurm to allocate 2 OR 4 nodes and nothing else?
So, if:
--nodes=2 allocates only two nodes
--nodes=4 allocates only four nodes
--nodes=1-2 allocates min one and max two nodes
--nodes=1-4 allocates min one and max four nodes
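One way to check this without actually running anything is sbatch's
--test-only flag, which validates the request and reports where and when
the job would start without submitting it. A sketch, using a trivial
wrapped command:

sbatch --test-only --ntasks=3 --nodes=2-4 --wrap 'hostname'
sbatch --test-only --ntasks=3 --nodes=2,4 --wrap 'hostname'
# Comparing the two reports should show whether the 2,4 size list is
# honored as "exactly 2 or exactly 4 nodes" on your version.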