understanding of what other people are doing with these settings might help
us to get this right!
For context, ours is a tier 3 cluster servicing mixed workloads.
Thanks.
Killian
--
Killian Murphy
Research and High Performance Computing Team Leader
Research Software Engineer
Information Services
> > PriorityDecayHalfLife=1-0
> > PriorityMaxAge=4-0
> >
> > The busier the cluster, the longer these parameters should be, so that a
> > user's previous jobs restrict their "future" ones more. These should be
> > adjusted based on the actual usage and the impact on the users.
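For anyone tuning the same knobs, a minimal slurm.conf sketch of where these
parameters sit in the multifactor plugin -- the weights below are purely
illustrative placeholders, not recommendations:

    # slurm.conf (illustrative values only)
    PriorityType=priority/multifactor
    PriorityDecayHalfLife=1-0       # accumulated usage halves every day
    PriorityMaxAge=4-0              # age factor saturates after 4 days queued
    PriorityWeightFairshare=100000  # placeholder weight
    PriorityWeightAge=10000         # placeholder weight

Most of these should be picked up by "scontrol reconfigure" after editing,
though changing PriorityType itself may need a slurmctld restart.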
> submit.lua, through to jobs requesting 32 nodes. When we first started
> the service, 32-node jobs were typically taking in the region of 2 days to
> schedule -- recently, queuing times have started to get out of hand. Our
> setup is essentially...
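To see what is actually holding those 32-node jobs back, the per-factor
breakdown from sprio is usually the quickest check -- the partition name and
job ID below are placeholders:

    # pending jobs in the partition, with their current priority
    squeue -t PD -p compute -o "%.10i %.9P %.8u %.6D %.10Q"
    # weighted priority factors for one specific pending job
    sprio -l -j 123456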
> #SBATCH --nodes=1
>
> #SBATCH --qos=high
>
> srun -n1 --gres=gpu:1 --exclusive --export=ALL bash -c \
>   "NV_GPU=$SLURM_JOB_GPUS nvidia-docker run --rm \
>    -e SLURM_JOB_ID=$SLURM_JOB_ID -e SLURM_OUTPUT=$SLURM_OUTPUT \
>    --name $SLURM_JOB_ID do_job.sh"
>
> --
>
> Kind regards,
>
> Dr. Rutger A. Vos
> Researcher / Bioinformatician
re-jig our user
documentation to reflect that 187G is the requestable cap for running on
the '192GB' nodes.
On Wed, 6 May 2020 at 11:12, Peter Kjellström wrote:
> On Wed, 6 May 2020 10:42:46 +0100
> Killian Murphy wrote:
>
> > Hi all.
> >
> > I'm probably making a
It seems as though Slurm is working in powers of
1024, not powers of 1000.
I'm probably just confused about the unit definitions, or there is some
convention being applied here, but would appreciate some confirmation
either way!
Thanks.
Killian
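For the archives, the arithmetic that leads to the 187G cap, assuming Slurm's
binary units -- the RealMemory figure here is illustrative, check your own
nodes with scontrol:

    # memory Slurm believes the node has, in MiB
    scontrol show node node001 | grep -o "RealMemory=[0-9]*"
    # e.g. RealMemory=191846 on a nominal '192GB' node (the OS keeps the rest)
    # largest whole-GiB request that fits: 191846 / 1024 = 187.3 -> --mem=187G
    #   187G = 187 * 1024 = 191488 MiB  (fits)
    #   192G = 192 * 1024 = 196608 MiB  (rejected: exceeds RealMemory)

node001 is a placeholder hostname.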
> On Wed, 6 May 2020 at 04:53, Theis, Thomas wrote:
>
> Hey Killian,
>
> changing the configuration for the partition to include
> the qos, and restarting the service. Verifying with sacctmgr, I still have
> the same issue.
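The usual sequence for hanging a QOS off a partition looks roughly like the
sketch below; the names "high", "gpu", and "someuser" are placeholders, and
slurmctld has to re-read slurm.conf for the partition line to take effect:

    # create the QOS in the accounting database (slurmdbd must be up)
    sacctmgr add qos high
    # let a user's association run under it
    sacctmgr modify user someuser set qos+=high
    # reference it on the partition in slurm.conf:
    #   PartitionName=gpu Nodes=... QOS=high
    scontrol reconfigure
    # verify both sides
    sacctmgr show qos format=name,priority
    scontrol show partition gpu | grep -i qos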
>
> *Thomas Theis*
> But it does not seem to take it.
>
> Anything I am missing here?
>
> Majid
>
--
Killian Murphy
Research Software Engineer
> > Hello!
> >
> > this is a simple wrapper for sacct which prints the
> > output from sacct as a table, so you can run
> > "sacctml -j foo --long" even without two 8K displays ;-)
> >
> > cheers
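Without the wrapper, something similar can be had from sacct's parsable
output piped through column -- the job ID and field list here are just
examples:

    # full long output, tabulated
    sacct -j 12345 --long --parsable2 | column -t -s '|'
    # or a hand-picked, narrower set of fields
    sacct -j 12345 -o JobID,JobName,State,Elapsed,MaxRSS --parsable2 | column -t -s '|'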
--
Killian Murphy
Research Software Engineer