Re: [slurm-users] Dependencies with singleton and after

2019-08-21 Thread Brian Andrus
Have you tried adding the dependency at submit time? sbatch --dependency=singleton fakejob.sh Brian Andrus On 8/21/2019 1:51 PM, Jarno van der Kolk wrote: Hi, I am helping a researcher who encountered an unexpected behaviour with dependencies. He uses both "singleton" and "after". The minim

[slurm-users] Dependencies with singleton and after

2019-08-21 Thread Jarno van der Kolk
Hi, I am helping a researcher who encountered an unexpected behaviour with dependencies. He uses both "singleton" and "after". The minimal working example is as follows: $ sbatch --hold fakejob.sh Submitted batch job 25909273 $ sbatch --hold fakejob.sh Submitted batch job 25909274 $ sbatch --ho
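The preview above is cut off, but the pattern it describes — held jobs plus a submission combining "after" and "singleton" — can be sketched roughly like this. The script name `fakejob.sh` and job IDs come from the post; the exact dependency string on the third submission is an assumption about what the researcher combined.

```shell
# Hedged reconstruction of the minimal example described in the thread.
sbatch --hold fakejob.sh        # Submitted batch job 25909273
sbatch --hold fakejob.sh        # Submitted batch job 25909274
# A third job depending on both a specific job finishing ("after") and
# no other job of the same name/user running ("singleton"); the comma
# joins the two conditions with AND:
sbatch --dependency=after:25909273,singleton fakejob.sh
# Release the held jobs so the chain can start:
scontrol release 25909273 25909274
```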

Re: [slurm-users] exclusive or not exclusive, that is the question

2019-08-21 Thread Christopher Benjamin Coffey
Marcus, maybe you can try playing with --mem instead? We recommend that our users use --mem instead of --mem-per-cpu/task, as it makes it easier for users to request the right amount of memory for the job. --mem is the amount of memory for the whole job. This way, there is no multiplying of memo
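The advice in this reply can be illustrated with two submissions; the script name and sizes below are illustrative, not taken from the post.

```shell
# Per-job request: 16 GB total for the whole job, regardless of CPU count.
sbatch --ntasks=4 --mem=16G job.sh

# Per-CPU request: 4 GB x 4 tasks = 16 GB total. Changing --ntasks later
# silently changes the total — the "multiplying of memory" the poster
# recommends avoiding.
sbatch --ntasks=4 --mem-per-cpu=4G job.sh
```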

Re: [slurm-users] Fwd: Slurm/cgroups on a single head/compute node

2019-08-21 Thread Alex Chekholko
Hey David, Which distro? Which kernel version? Which systemd version? Which SLURM version? Based on some paths in your variables, I'm guessing Ubuntu distro with Debian SLURM packages? Regards, Alex On Wed, Aug 21, 2019 at 5:24 AM David da Silva Pires < david.pi...@butantan.gov.br> wrote: >

[slurm-users] Meeting Announcement / Partly Cloudy 2019 / October 18th

2019-08-21 Thread Stuart Kendrick
PARTLY-CLOUDY MEETING ANNOUNCEMENT * One day meeting focused on the IT Infrastructure Challenges involved in supporting Research Activities as they slosh between On-Prem and Cloud * Hosted in Seattle, WA USA * https://partly-cloudy.fredhutch.org DESCRIPTION * The research co

Re: [slurm-users] Holding back jobs over QOS limit

2019-08-21 Thread Lech Nieroda
Hello Florian, unless the proposed order of job execution needs to be adhered to at all times, it might be easier and fairer to use the fairshare mechanism. As the name suggests, it was created to provide each user (or account) with a fair share of resources. It regards previous computation tim
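Enabling the fairshare mechanism this reply recommends is done in slurm.conf. The fragment below is an illustrative sketch with assumed weights, not configuration from the post.

```shell
# slurm.conf fragment (illustrative): switch on the multifactor priority
# plugin so fairshare influences job ordering.
PriorityType=priority/multifactor
PriorityWeightFairshare=10000    # how strongly fairshare counts
PriorityDecayHalfLife=7-0        # past usage decays with a one-week half-life
```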

[slurm-users] Fwd: Slurm/cgroups on a single head/compute node

2019-08-21 Thread David da Silva Pires
Hi supers. I am configuring a server with slurm/cgroups. This server will be the unique slurm node, so it is the head and the compute node at the same time. In order to force users to submit slurm jobs instead of running the processes directly on the server, I would like to use cgroups to isolate
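Confining jobs with cgroups on a combined head/compute node is typically done through slurm.conf and cgroup.conf. The fragments below are a hedged sketch of a common setup, not the poster's actual configuration.

```shell
# slurm.conf fragment (illustrative): track and confine tasks via cgroups.
ProctrackType=proctrack/cgroup
TaskPlugin=task/cgroup

# cgroup.conf fragment (illustrative): pin jobs to their allocated
# cores and memory so they cannot spill over onto the whole server.
ConstrainCores=yes
ConstrainRAMSpace=yes
```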

[slurm-users] Holding back jobs over QOS limit

2019-08-21 Thread Jochheim, Florian
Hi Folks, We have a simple small slurm cluster set up to facilitate a fair usage of the computing resources in our group. Simple in the sense that users only run exclusive jobs on single nodes so far. For fairness, we have set MaxSubmitJobsPerUser=2 MaxJobsPerUser=2 I would however li
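The per-user limits mentioned in this post can also be set on a QOS via sacctmgr rather than in slurm.conf; the QOS name "normal" below is an assumption for illustration.

```shell
# Apply the limits from the post to a QOS (QOS name is illustrative):
sacctmgr modify qos normal set MaxSubmitJobsPerUser=2 MaxJobsPerUser=2

# Verify the result:
sacctmgr show qos format=Name,MaxSubmitJobsPerUser,MaxJobsPerUser
```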

[slurm-users] ANNOUNCE: A new showuserlimits tool for printing Slurm user resource limits and usage

2019-08-21 Thread Ole Holm Nielsen
Dear Slurm users, It is very useful to view a Slurm user's resource limits and current usage. For example, jobs may be blocked because some resource limit gets exceeded, and it is important to analyze why this occurs. Several Slurm commands such as sshare and sacctmgr can print a number of u
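The standard commands the announcement mentions for inspecting limits and usage can be invoked as below; `$USER` is illustrative and the showuserlimits tool itself wraps output like this into one view.

```shell
# Fairshare standing and raw usage for one user:
sshare -l -u $USER

# Association limits (the kind of data showuserlimits summarizes):
sacctmgr show assoc user=$USER \
    format=User,Account,MaxJobs,MaxSubmitJobs,GrpTRES
```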