> Script, Multiple Datasets). We eventually wrote a general-purpose utility
> to help with the process:
>
> https://github.com/jtfrey/job-templating-tool
>
> May be of some use to you.
>
> On Jul 15, 2020, at 16:13, c b wrote:
>
> I'
I'm trying to run an embarrassingly parallel experiment, with 500+ tasks
that all differ in one parameter. e.g.:
job 1 - script.py foo
job 2 - script.py bar
job 3 - script.py baz
and so on.
This seems like a case where having a slurm array hold all of these jobs
would help, so I could just submi
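A job array handles this pattern directly: one submission script, one array index per parameter. A minimal sketch, assuming the `script.py foo`/`bar`/`baz` invocations from the question; the array range, job name, and output pattern are illustrative, and the `TASK_ID` default is only there so the script also runs standalone outside Slurm:

```shell
#!/bin/bash
#SBATCH --job-name=sweep
#SBATCH --array=0-2              # extend to 0-499 etc. for the full experiment
#SBATCH --output=sweep_%A_%a.out # %A = array job id, %a = task index

# Hypothetical parameter list; each array task picks its own entry
# via SLURM_ARRAY_TASK_ID (0-based here, matching the --array range).
PARAMS=(foo bar baz)
TASK_ID=${SLURM_ARRAY_TASK_ID:-0}  # default lets the script run outside Slurm
echo "would run: script.py ${PARAMS[$TASK_ID]}"
```

Submitted once with `sbatch`, this queues one task per index, which is usually far kinder to the scheduler than 500 separate `sbatch` calls.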
Hi,
I have a bunch of jobs that according to the slurm status have been running
for 30+ minutes, but in reality aren't running. When I go to the node
where the job is supposed to be, the processes aren't there (not showing up
in top or ps) and the job's stdout/stderr logs are empty. I know it's
running
simultaneously on each machine.
thanks
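A few commands that help narrow down where Slurm's job state and reality disagree, a sketch only (the job id and the slurmd log path are hypothetical; the log location varies by install):

```shell
JOBID=12345                                   # hypothetical job id
scontrol show job "$JOBID" | grep -E 'JobState|BatchHost|StartTime'
NODE=$(squeue -j "$JOBID" -h -o %N)           # node Slurm assigned the job to
ssh "$NODE" "ps -u $USER -f"                  # is the process actually there?
ssh "$NODE" "tail /var/log/slurm/slurmd.log"  # slurmd's side of the story
```

If `scontrol` says RUNNING but the node has no process and empty logs, the slurmd log on that node is usually where the discrepancy shows up first.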
> Best regards
> Jürgen
>
> --
> Jürgen Salk
> Scientific Software & Compute Services (SSCS)
> Kommunikations- und Informationszentrum (kiz)
> Universität Ulm
> Telefon: +49 (0)731 50-22478
> Telefax: +49 (0)731 50
ailable as far as slurm is concerned.
>
> Brian
> On 11/1/2019 10:52 AM, c b wrote:
>
> yes, there is enough memory for each of these jobs, and there is enough
> memory to run the high resource and low resource jobs at the same time.
>
> On Fri, Nov 1, 2019 at 1:37 PM Brian Andrus
e isn't enough memory available for it.
>
> Brian Andrus
> On 11/1/2019 7:42 AM, c b wrote:
>
> I have:
> SelectType=select/cons_res
> SelectTypeParameters=CR_CPU_Memory
>
> On Fri, Nov 1, 2019 at 10:39 AM Mark Hahn wrote:
>
>> > In theory, these sm
I tried setting a 5-minute time limit on some low-resource jobs and one
hour on high-resource jobs, but my 5-minute jobs are still waiting behind
the hour-long jobs.
Can you suggest some combination of time limits that would work here?
On Fri, Nov 1, 2019 at 11:08 AM c b wrote:
> On my
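For backfill to let the 5-minute jobs jump ahead of queued hour-long work, the backfill scheduler must be enabled and every job needs an honest time limit it can plan around. A hedged sketch, not a definitive fix (`sched/backfill` is the default in modern Slurm; the `--wrap` commands are illustrative):

```shell
# In slurm.conf (the default in recent Slurm releases):
#   SchedulerType=sched/backfill
sbatch --time=01:00:00 --wrap "script.py big"    # long job, honest limit
sbatch --time=00:05:00 --wrap "script.py small"  # short job; can backfill into
                                                 # gaps before the long jobs start
```

Note that backfill only helps a short job start early if doing so won't delay the expected start of higher-priority pending jobs, which is why accurate limits on the long jobs matter as much as the short ones.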
rm knows,
> the low priority jobs will take longer to finish than just waiting for the
> current running jobs to finish.
>
> John
>
> *From:* slurm-users on behalf of c b
> *Reply-To:* Slurm User Community List
> *Date:* Friday, November 1,
I have:
SelectType=select/cons_res
SelectTypeParameters=CR_CPU_Memory
On Fri, Nov 1, 2019 at 10:39 AM Mark Hahn wrote:
> > In theory, these small jobs could slip in and run alongside the large
> jobs,
>
> what are your SelectType and SelectTypeParameters settings?
> ExclusiveUser=YES on partitio
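With `CR_CPU_Memory`, a small job can share a busy node only if both spare CPUs *and* spare memory remain, so a memory request can block a job even when cores are free. One way to see what Slurm thinks is left on a node (the node name is hypothetical):

```shell
scontrol show node node001 | grep -E 'CPUAlloc|CPUTot|AllocMem|RealMemory'
# If AllocMem is close to RealMemory, memory (not cores) is what's
# keeping the small jobs from slipping in alongside the large ones.
```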
Hi,
Apologies for the weird subject line...I don't know how else to describe
what I'm seeing.
Suppose my cluster has machines with 8 cores each. I have many large high
priority jobs that each require 6 cores, so each machine in my cluster runs
one of each of these jobs at a time. However, I als
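On an 8-core node already running one 6-core job, 2 cores remain; with `cons_res`, a small job can share that node only if its CPU *and* memory request fit in the remainder. An illustrative submission, with hypothetical sizes:

```shell
# Requests 2 cores and 2 GB, so it fits beside a 6-core job on an
# 8-core node, provided that much memory is also unallocated there.
sbatch -n 2 --mem=2G --wrap "script.py small"
```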
Hi,
I have a cluster where machines are used for both compute jobs and for
interactive research by humans - we're resource-constrained, so getting
machines dedicated to slurm is a tough task. What I'd like to do is,
during normal weekday work hours, take some machines entirely out of the
cluster,
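One way to do this without editing slurm.conf on a schedule is to drain the nodes during work hours: DRAIN stops new jobs from landing but lets running ones finish, and RESUME returns the node to service. A sketch as crontab entries on the management host (the node name and the 9-to-18 weekday window are hypothetical):

```shell
# m h  dom mon dow  command
0 9  * * 1-5  scontrol update nodename=ws01 state=drain reason="interactive use"
0 18 * * 1-5  scontrol update nodename=ws01 state=resume
```

If jobs must be kicked off immediately at 9:00 rather than allowed to finish, that's a harsher policy than DRAIN provides and is worth flagging to users first.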