Have you tried adding the dependency at submit time?
sbatch --dependency=singleton fakejob.sh
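If the job should also wait on one specific earlier job, the two dependency
types can be combined in a single list (a comma means all conditions must be
met; the job ID below is just the one from your example):
sbatch --dependency=singleton,after:25909273 fakejob.sh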
Brian Andrus
On 8/21/2019 1:51 PM, Jarno van der Kolk wrote:
Hi,
I am helping a researcher who encountered an unexpected behaviour with dependencies. He uses both
"singleton" and "after". The minim
Hi,
I am helping a researcher who encountered an unexpected behaviour with
dependencies. He uses both "singleton" and "after". The minimal working example
is as follows:
$ sbatch --hold fakejob.sh
Submitted batch job 25909273
$ sbatch --hold fakejob.sh
Submitted batch job 25909274
$ sbatch --ho
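For reference, jobs submitted with --hold like these stay pending until they
are released, e.g. with scontrol and the job IDs printed at submission:
$ scontrol release 25909273
$ scontrol release 25909274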
Marcus, maybe you can try playing with --mem instead? We recommend that our
users use --mem instead of --mem-per-cpu/task, as it makes it easier for users
to request the right amount of memory for the job. --mem is the amount of
memory for the whole job. This way, there is no multiplying of memo
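As a quick illustration (script name and numbers are made up): with --mem the
request covers the whole job, while --mem-per-cpu gets multiplied by the number
of allocated CPUs:
sbatch --ntasks=4 --mem=16G job.sh          # 16 GB total for the job
sbatch --ntasks=4 --mem-per-cpu=4G job.sh   # 4 GB x 4 CPUs = 16 GB total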
Hey David,
Which distro? Which kernel version? Which systemd version? Which SLURM
version?
Based on some paths in your variables, I'm guessing an Ubuntu distro with
Debian SLURM packages?
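In case it helps to gather that information, something along these lines
usually does it (exact output varies per system):
cat /etc/os-release
uname -r
systemctl --version | head -n1
sinfo --version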
Regards,
Alex
On Wed, Aug 21, 2019 at 5:24 AM David da Silva Pires <
david.pi...@butantan.gov.br> wrote:
>
PARTLY-CLOUDY MEETING ANNOUNCEMENT
* One day meeting focused on the IT Infrastructure Challenges involved in
supporting Research Activities as they slosh between On-Prem and Cloud
* Hosted in Seattle, WA USA
* https://partly-cloudy.fredhutch.org
DESCRIPTION
* The research co
Hello Florian,
unless the proposed order of job execution needs to be adhered to at all times,
it might be easier and fairer to use the fairshare mechanism.
As the name suggests, it was created to provide each user (or account) with a
fair share of resources. It regards previous computation tim
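In case a concrete starting point helps: fairshare is driven by the
multifactor priority plugin. A rough sketch of the relevant slurm.conf
settings (the weights and half-life below are only illustrative):
PriorityType=priority/multifactor
PriorityDecayHalfLife=7-0
PriorityWeightFairshare=100000
PriorityWeightAge=1000
Per-account shares are then set with sacctmgr (the account name is a
placeholder), and sshare shows the resulting fairshare factors:
sacctmgr modify account name=projectA set fairshare=100
sshare -a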
Hi supers.
I am configuring a server with slurm/cgroups. This server will be the only
Slurm node, so it is the head node and the compute node at the same time.
In order to force users to submit Slurm jobs instead of running processes
directly on the server, I would like to use cgroups to isolate
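A rough sketch of the usual cgroup-based confinement for jobs (values are
illustrative; note this constrains Slurm jobs, not processes started outside
of Slurm):
# slurm.conf
ProctrackType=proctrack/cgroup
TaskPlugin=task/affinity,task/cgroup
# cgroup.conf
ConstrainCores=yes
ConstrainRAMSpace=yes
ConstrainDevices=yes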
Hi Folks,
We have a simple, small Slurm cluster set up to facilitate fair usage of
the computing resources in our group. Simple in the sense that users only
run exclusive jobs on single nodes so far. For fairness, we have set
MaxSubmitJobsPerUser=2
MaxJobsPerUser=2
I would however li
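For reference, assuming those limits live on a QOS (the QOS name below is a
placeholder), they would be set and checked roughly like this:
sacctmgr modify qos normal set MaxSubmitJobsPerUser=2 MaxJobsPerUser=2
sacctmgr show qos -p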
Dear Slurm users,
It is very useful to view a Slurm user's resource limits and current
usage. For example, jobs may be blocked because some resource limit gets
exceeded, and it is important to analyze why this occurs.
Several Slurm commands such as sshare and sacctmgr can print a number of
u
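For instance (the user name is a placeholder; which fields are worth printing
depends on the limits actually in use):
sshare -l -U -u someuser
sacctmgr show assoc user=someuser format=account,user,maxjobs,maxsubmit,qos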