Re: [slurm-users] Custom Gres for SSD

2023-07-24 Thread Shunran Zhang
Hi Matthias, Thank you for your info. The prolog/epilog way of managing it does look quite promising. Indeed, in my setup I only want one job per node per SSD-set. Our tasks that require the scratch space are more IO-bound - we are more worried about the IO usage than the actual disk space usage ...
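For reference, a prolog/epilog pair along those lines could look roughly like the sketch below. This is a minimal, untested sketch: the /local mount point comes from Matthias' description, while the script paths, permissions and cleanup policy are assumptions. The prolog creates a per-job directory on the SSD and hands it to the job owner; the epilog removes it when the job ends.

  # slurm.conf
  Prolog=/etc/slurm/prolog.sh
  Epilog=/etc/slurm/epilog.sh

  #!/bin/bash
  # /etc/slurm/prolog.sh - run by slurmd as root before each job starts
  SCRATCH="/local/${SLURM_JOB_ID}"
  mkdir -p "$SCRATCH"
  chown "${SLURM_JOB_UID}" "$SCRATCH"   # hand the directory to the job owner
  chmod 700 "$SCRATCH"

  #!/bin/bash
  # /etc/slurm/epilog.sh - run by slurmd as root after each job finishes
  rm -rf "/local/${SLURM_JOB_ID}"       # reclaim the scratch space

The job can then simply write to /local/$SLURM_JOB_ID, and the space is reclaimed automatically when it finishes.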

Re: [slurm-users] Custom Gres for SSD

2023-07-24 Thread Matthias Loose
On 2023-07-24 09:50, Matthias Loose wrote: Hi Shunran, just read your question again. If you don't want users to share the SSD - like at all, even if both have requested it - you can basically skip the quota part of my answer. If you really only want one user per SSD per node, you should set the ...
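In that exclusive case the count would simply be 1 per node, so at most one running job can hold the SSD on a given node. A sketch, assuming the gres is the "local" resource and the node names from the other mail:

  # gres.conf on the SSD nodes
  NodeName=hpc-node[01-10] Name=local Count=1

  # slurm.conf
  GresTypes=local
  # each node advertises exactly one "local" unit
  NodeName=hpc-node[01-10] Gres=local:1

  # jobs that need the SSD request that single unit
  sbatch --gres=local:1 job.sh

Jobs that do not request the gres can still share the node as usual; only the SSD itself is serialized to one job at a time.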

Re: [slurm-users] Custom Gres for SSD

2023-07-24 Thread Matthias Loose
Hi Shunran, we do something very similar. I have nodes with 2 SSDs in a RAID1 mounted on /local. We defined a gres resource just like you and called it local. We define the resource in the gres.conf like this:

  # LOCAL
  NodeName=hpc-node[01-10] Name=local

and add the resource in counts ...
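The preview cuts off there, but the count is presumably what gets added to the node definitions in slurm.conf so that several jobs can share the space. A sketch of how the pieces typically fit together - the count of 1000 units per node (e.g. GB of scratch) and the per-job request of 200 are assumptions, not Matthias' actual numbers:

  # gres.conf
  # LOCAL
  NodeName=hpc-node[01-10] Name=local Count=1000

  # slurm.conf
  GresTypes=local
  # plus the usual CPU/memory fields on the node line
  NodeName=hpc-node[01-10] Gres=local:1000

  # a job then asks for its share of the scratch space
  sbatch --gres=local:200 job.sh

Note that Slurm only accounts for the requested units; enforcing the actual disk usage (the quota part mentioned in the follow-up) would still have to happen in the prolog or via filesystem quotas.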

[slurm-users] Custom Gres for SSD

2023-07-23 Thread Shunran Zhang
Hi all, I am attempting to set up a gres to manage jobs that need a scratch space, but only a few of our compute nodes are equipped with SSDs for such scratch space. Originally I set up a new partition for those IO-bound jobs, but it ended up that those jobs might be allocated to the same node ...
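The gres route discussed in the replies above amounts to declaring the resource only on the SSD-equipped nodes and having the IO-bound jobs request it explicitly, so the scheduler both restricts them to those nodes and limits how many land on the same one. A rough sketch of the job side, with hypothetical names - the gres name "local" follows the replies, and the scratch path, input/output files and program are placeholders:

  #!/bin/bash
  #SBATCH --gres=local:1          # only schedulable on nodes that define the gres
  #SBATCH --time=02:00:00

  # work in the node-local SSD scratch rather than the shared filesystem
  SCRATCH="/local/${SLURM_JOB_ID}"
  cp "$HOME/input.dat" "$SCRATCH/"
  ./my_io_heavy_program "$SCRATCH/input.dat" > "$SCRATCH/output.dat"
  cp "$SCRATCH/output.dat" "$HOME/"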