On 3/5/20 9:22 AM, Luis Huang wrote:
We would like to block certain nodes from accepting interactive jobs. Is
this possible on slurm?
My suggestion would be to make a partition for interactive jobs that
only contains the nodes that you want to run them and then use the
submit filter to direc
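A minimal sketch of that approach, assuming hypothetical partition and node names; the actual routing would be done by a submit filter such as a job_submit/lua plugin:

```shell
# slurm.conf fragment (sketch -- partition and node names are hypothetical)
# Partition holding only the nodes that may run interactive jobs:
PartitionName=interactive Nodes=inode[01-02] MaxTime=08:00:00 State=UP
# Batch partition that simply omits those nodes, so batch-only nodes
# never see interactive work:
PartitionName=batch Nodes=cnode[01-16] Default=YES State=UP
# A job_submit plugin (e.g. job_submit/lua) can then reroute any
# interactive submission (salloc, or srun with a pseudo-terminal)
# to the "interactive" partition.
```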
We would like to block certain nodes from accepting interactive jobs. Is this
possible on slurm?
Thanks,
Luis
Hi Mike,
Thanks for the info.
Yes, Slurm 19.05 works with the "#SBATCH packjob".
- Chansup
On Thu, Mar 5, 2020 at 10:40 AM Renfro, Michael wrote:
> I’m going to guess the job directive changed between earlier releases and
> 20.02. A version of the page from last year [1] has no mention of hetjob,
> and uses packjob instead.
Hi Marcus,
see below for the request info
scontrol show config | grep SelectTypeParameters
SelectTypeParameters = CR_CORE_MEMORY,CR_ONE_TASK_PER_CORE,CR_CORE_DEFAULT_DIST_BLOCK,CR_PACK_NODES
But first I would like to see what
sbatch -vvv jobscript
outputs.
salloc: defined option
I’m going to guess the job directive changed between earlier releases and
20.02. A version of the page from last year [1] has no mention of hetjob, and
uses packjob instead.
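For reference, a heterogeneous job script under the older directive might look like the sketch below; the resource numbers and program names are illustrative only, and under 20.02 the directive becomes "#SBATCH hetjob" (with srun's --pack-group renamed to --het-group):

```shell
#!/bin/bash
# Heterogeneous job sketch for Slurm 19.05 ("packjob" directive).
# First component: one leader task with 4 CPUs.
#SBATCH --ntasks=1 --cpus-per-task=4
#SBATCH packjob
# Second component: eight single-CPU worker tasks.
#SBATCH --ntasks=8 --cpus-per-task=1
srun --pack-group=0 ./leader : --pack-group=1 ./workers
```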
On a related note, is there a canonical location for older versions of Slurm
documentation? My local man pages are alway
We have a shared gres.conf that includes node names, which should have the
flexibility to specify node-specific settings for GPUs:
=
NodeName=gpunode00[1-4] Name=gpu Type=k80 File=/dev/nvidia0 COREs=0-7
NodeName=gpunode00[1-4] Name=gpu Type=k80 File=/dev/nvidia1 COREs=8-15
=
See the th
When configuring a Slurm cluster you need to have a copy of the
configuration file slurm.conf on all nodes, and these copies must be
identical. If you need to use GPUs in your cluster, there is an additional
configuration file that you also need to have on all nodes: gres.conf. My
Hi Alexander,
could you please do a
scontrol show config | grep SelectTypeParameters
and tell us the result?
In fact, for Slurm a CPU is always a CPU, regardless of whether a thread
(with HT) or a core (without HT) is meant.
The question is rather why Slurm thinks such a node is not available
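The effect of that can be seen in the node definition; in this sketch (hypothetical hardware), the CPUs count Slurm reports is threads when hyper-threading is on, and cores when it is off:

```shell
# slurm.conf node definitions (sketch -- node names are hypothetical)
# With hyper-threading, Slurm counts each hardware thread as a "CPU":
NodeName=ht-node Sockets=2 CoresPerSocket=8 ThreadsPerCore=2 CPUs=32
# Without hyper-threading, each core is a "CPU":
NodeName=core-node Sockets=2 CoresPerSocket=8 ThreadsPerCore=1 CPUs=16
```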
Hi Steffen,
On Wed, Mar 04, 2020 at 08:54:09AM +0100, Steffen Grunewald wrote:
> is there anyone out there, running Slurm on a Debian Stretch platform?
I have used slurm on stretch in production for several years, and I'm
aware of sites with thousands of nodes that have been using it when
stretch
Hi Steffen,
We are using Slurm on Debian Stretch at SURFsara on our LISA cluster.
We've been using the Debian Slurm packages
(https://salsa.debian.org/hpc-team/slurm-wlm) with a couple of patches,
although we're looking into a different option now.
Anyway, the daemons probably won't start because they'r