Re: [slurm-users] Slurm configuration, Weight Parameter

2019-12-05 Thread Sarlo, Jeffrey S
We have weights and priority/multifactor. Jeff

> From: Sistemas NLHPC [mailto:siste...@nlhpc.cl], Sent: Thursday, December 05, 2019 12:01 PM, To: Sarlo, Jeffrey S; Slurm User Community List
> Thanks Jeff! We upgraded Slurm to 18.08.4 ...
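For readers not familiar with the terms, here is a minimal sketch of what "weights and priority/multifactor" could look like in slurm.conf; all node names and values below are illustrative assumptions, not the poster's actual settings:

  # slurm.conf sketch (hypothetical values)
  PriorityType=priority/multifactor     # multifactor plugin orders the *job* queue
  PriorityWeightFairshare=10000
  PriorityWeightAge=1000
  # Node Weight is a separate knob: among idle nodes that satisfy a job,
  # the lowest-Weight nodes are allocated first.
  NodeName=small[01-04] RealMemory=2007 Weight=1
  NodeName=big[01-04]   RealMemory=3007 Weight=100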

Re: [slurm-users] Slurm configuration, Weight Parameter

2019-12-05 Thread Sistemas NLHPC
> On Behalf Of Sistemas NLHPC, Sent: Tuesday, December 03, 2019 12:33 PM, To: Slurm User Community List, Subject: Re: [slurm-users] Slurm configuration, Weight Parameter
> Hi Renfro, I am testing this configuration, a test configuration kept as clean as possible ...

Re: [slurm-users] Slurm configuration, Weight Parameter

2019-12-03 Thread Sistemas NLHPC
Hi Renfro,
I am testing this configuration, a test configuration kept as clean as possible:
NodeName=devcn050 RealMemory=3007 Features=3007MB Weight=200 State=idle Sockets=2 CoresPerSocket=1
NodeName=devcn002 RealMemory=3007 Features=3007MB Weight=1 State=idle Sockets=2 CoresPerSocket=1
NodeNam...
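To see those weighted nodes in context, a hedged sketch of a matching partition line (the partition name and everything beyond the two node definitions above are assumptions):

  PartitionName=test Nodes=devcn002,devcn050 Default=YES State=UP
  # All else being equal, Slurm allocates the lowest-Weight node first,
  # so devcn002 (Weight=1) fills before devcn050 (Weight=200).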

Re: [slurm-users] Slurm configuration, Weight Parameter

2019-11-30 Thread Renfro, Michael
We’ve been using that weighting scheme for a year or so, and it works as expected. Not sure how Slurm would react to multiple NodeName=DEFAULT lines like you have, but here are our node settings and a subset of our partition settings. In our environment, we’d often have lots of idle cores on GPU ...
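Since the actual settings are cut off here, a hedged sketch of the general pattern being described, assuming GPU nodes are given a higher Weight so that CPU-only jobs prefer the non-GPU nodes (all node names and numbers are made up):

  NodeName=cpu[001-020] Sockets=2 CoresPerSocket=14 RealMemory=190000 Weight=10
  NodeName=gpu[001-004] Sockets=2 CoresPerSocket=14 RealMemory=190000 Gres=gpu:2 Weight=100
  PartitionName=batch Nodes=cpu[001-020],gpu[001-004] Default=YES State=UP
  # Lower Weight is chosen first, so CPU-only work lands on the cpu nodes;
  # GPU jobs still reach the gpu nodes by requesting --gres=gpu:N.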

Re: [slurm-users] Slurm configuration, Weight Parameter

2019-11-29 Thread Sistemas NLHPC
Hi all, thanks everyone for your posts. Reading the Slurm documentation and other sites like Niflheim https://wiki.fysik.dtu.dk/niflheim/Slurm_configuration#node-weight (Ole Holm Nielsen), the "Weight" parameter assigns a value to each node, and with it you can set a scheduling priority among the nodes. But I ha...

Re: [slurm-users] Slurm configuration, Weight Parameter

2019-11-23 Thread Chris Samuel
On 23/11/19 9:14 am, Chris Samuel wrote: My gut instinct (and I've never tried this) is to make the 3GB nodes be in a separate partition that is guarded by AllowQos=3GB and have a QOS called "3GB" that uses MinTRESPerJob to require jobs to ask for more than 2GB of RAM to be allowed into the QOS ...
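A hedged sketch of that approach, assuming accounting with limit enforcement is configured; the node list, partition name, and exact memory threshold are illustrative:

  # Create a QOS that requires jobs to ask for more than 2GB of memory
  sacctmgr add qos 3GB
  sacctmgr modify qos 3GB set MinTRESPerJob=mem=2049M
  # slurm.conf: keep the 3GB nodes in their own partition, admitting only that QOS
  PartitionName=bigmem Nodes=big[01-04] AllowQos=3GB State=UP
  # Users/accounts must also be granted the QOS; a job would then be submitted as:
  sbatch --partition=bigmem --qos=3GB --mem=2500M job.sh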

Re: [slurm-users] Slurm configuration, Weight Parameter

2019-11-23 Thread Chris Samuel
On 21/11/19 7:25 am, Sistemas NLHPC wrote: Currently we have two types of nodes, one with 3GB and another with 2GB of RAM; on the 3GB nodes we need to disallow tasks that request less than 2GB, to avoid underutilization of resources. My gut instinct (and I've never tried ...

Re: [slurm-users] Slurm configuration, Weight Parameter

2019-11-22 Thread Goetz, Patrick G
Can't you just set the usage priority to be higher for the 2GB machines? That way, if the requested memory is less than 2GB, those machines will be used first, and larger jobs skip to the higher-memory machines. On 11/21/19 9:44 AM, Jim Prewett wrote: > Hi Sistemas, > I could be mistaken ...
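One hedged way to experiment with that preference on a running cluster (node names below are assumptions) is to adjust the node Weight with scontrol; such changes may not survive a slurmctld restart unless they are also written into slurm.conf:

  scontrol update NodeName=small[01-04] Weight=1     # 2GB machines: preferred
  scontrol update NodeName=big[01-04] Weight=100     # 3GB machines: last resort
  # Note: Weight only orders node selection; it does not stop a small job
  # from landing on a big node once the small nodes are busy.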

Re: [slurm-users] Slurm configuration, Weight Parameter

2019-11-21 Thread Jim Prewett
Hi Sistemas, I could be mistaken, but I don't think there is a way to require jobs on the 3GB nodes to request more than 2GB! https://slurm.schedmd.com/slurm.conf.html states this: "Note that if a job allocation request can not be satisfied using the nodes with the lowest weight, the set o...
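The quoted passage is pointing out that Weight expresses a preference, not a restriction: once the low-weight nodes cannot satisfy a request, higher-weight nodes are considered. A hedged illustration, assuming 2GB nodes at Weight=1 and 3GB nodes at Weight=100:

  sbatch --mem=1G small.sh   # placed on a 2GB node while any are free
  sbatch --mem=1G small.sh   # when the 2GB nodes are full, the same request
                             # falls through to a 3GB node, which is exactly
                             # what the original poster wants to prevent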

[slurm-users] Slurm configuration, Weight Parameter

2019-11-21 Thread Sistemas NLHPC
Hi all, currently we have two types of nodes, one with 3GB and another with 2GB of RAM. On the 3GB nodes we need to disallow tasks that request less than 2GB, to avoid underutilization of resources. This is because we have nodes that can fulfill the condition of executing tasks ...
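For reference, one way to check the memory and scheduling weight Slurm currently assigns to each node (columns: node name, memory in MB, Weight):

  sinfo -N -o "%N %m %w"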