Re: [slurm-users] Submit sbatch to multiple partitions

2023-04-17 Thread Ward Poelmans
Hi Xaver, On 17/04/2023 11:36, Xaver Stiensmeier wrote: let's say I want to submit a large batch job that should run on 8 nodes. I have two partitions, each holding 4 nodes. Slurm will now tell me that "Requested node configuration is not available". However, my desired output would be that slurm makes use of both partitions and …

Re: [slurm-users] Multiple default partitions

2023-04-17 Thread Diego Zuccato
I used to set SBATCH_PARTITION=list,of,partitions in /etc/environment. But it seems to override user choice, so users won't be able to specify a partition for their jobs :( Diego On 17/04/2023 11:12, Xaver Stiensmeier wrote: Dear slurm-users list, is it possible to somehow have two default partitions? …
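As a rough sketch of that approach (the partition names below are placeholders), the line in /etc/environment would look like:

    SBATCH_PARTITION=partition1,partition2

Because sbatch treats SBATCH_PARTITION as an input environment variable, it takes precedence over #SBATCH --partition directives inside job scripts (presumably why it overrides the user's choice there), while an explicit -p/--partition on the sbatch command line still wins over the environment variable.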

Re: [slurm-users] [EXT] Submit sbatch to multiple partitions

2023-04-17 Thread Bjørn-Helge Mevik
"Ozeryan, Vladimir" writes: > You should be able to specify both partitions in your sbatch submission > script, unless there is some other configuration preventing this. But Slurm will still only run the job in *one* of the partitions - it will never "pool" two partitions and let the job run on

Re: [slurm-users] Submit sbatch to multiple partitions

2023-04-17 Thread Ole Holm Nielsen
On 4/17/23 11:36, Xaver Stiensmeier wrote: let's say I want to submit a large batch job that should run on 8 nodes. I have two partitions, each holding 4 nodes. Slurm will now tell me that "Requested node configuration is not available". However, my desired output would be that slurm makes use of both partitions and …

Re: [slurm-users] [EXT] Submit sbatch to multiple partitions

2023-04-17 Thread Ozeryan, Vladimir
You should be able to specify both partitions in your sbatch submission script, unless there is some other configuration preventing this. -----Original Message----- From: slurm-users On Behalf Of Xaver Stiensmeier Sent: Monday, April 17, 2023 5:37 AM To: slurm-users@lists.schedmd.com Subject: [slurm-users] Submit sbatch to multiple partitions …
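A minimal sketch of such a submission script (partition and job names are made up); --partition accepts a comma-separated list, and Slurm starts the job in whichever listed partition can begin it first:

    #!/bin/bash
    #SBATCH --job-name=multi-part
    #SBATCH --partition=partition1,partition2   # job starts in ONE of these
    #SBATCH --nodes=4
    srun hostname

Note that, as pointed out elsewhere in the thread, the job still runs entirely inside a single partition, so a request for more nodes than any one partition holds will still be rejected.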

[slurm-users] Submit sbatch to multiple partitions

2023-04-17 Thread Xaver Stiensmeier
Dear slurm-users list, let's say I want to submit a large batch job that should run on 8 nodes. I have two partitions, each holding 4 nodes. Slurm will now tell me that "Requested node configuration is not available". However, my desired output would be that slurm makes use of both partitions and …

Re: [slurm-users] Multiple default partitions

2023-04-17 Thread Xaver Stiensmeier
I found a solution that works for me, but it doesn't really answer the question: It's the option https://slurm.schedmd.com/slurm.conf.html#OPT_all_partitions for JobSubmitPlugins. It works for me, because all partitions are default in my case, but it doesn't /really/ answer my question as my question …
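For reference, a minimal slurm.conf sketch of that workaround:

    # slurm.conf (excerpt)
    JobSubmitPlugins=all_partitions

The all_partitions plugin only sets a job's default to every partition when the user requests none; an actual "use partition1 first, fall back to partition2" policy would need something like a site-specific job_submit/lua script, which is a different (and here purely hypothetical) approach.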

[slurm-users] Multiple default partitions

2023-04-17 Thread Xaver Stiensmeier
Dear slurm-users list, is it possible to somehow have two default partitions? In the best case in a way that slurm schedules to partition1 on default and only to partition2 when partition1 can't handle the job right now. Best regards, Xaver Stiensmeier

Re: [slurm-users] Slurmdbd High Availability

2023-04-17 Thread Shaghuf Rahman
Hi, Thanks everyone who shared the information with me. Really appreciate it. Thanks, Shaghuf Rahman On Sun, 16 Apr 2023 at 02:21, Daniel Letai wrote: > My go-to solution is setting up a Galera cluster using 2 slurmdbd servers > (each pointing to its local db) and a 3rd quorum server. It's fair…
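A minimal sketch of the setup Daniel describes (hostnames, database user and password are placeholders), with each slurmdbd talking to its local Galera member and slurmctld knowing about both daemons:

    # slurmdbd.conf on dbd1 (dbd2 analogous, pointing at its own local DB)
    DbdHost=dbd1
    StorageType=accounting_storage/mysql
    StorageHost=localhost
    StorageUser=slurm
    StoragePass=change_me

    # slurm.conf (excerpt)
    AccountingStorageType=accounting_storage/slurmdbd
    AccountingStorageHost=dbd1
    AccountingStorageBackupHost=dbd2

The third Galera node acts purely as a quorum arbiter, so the database cluster keeps a majority if either slurmdbd host goes down.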