The pending jobs missing from `sprio` output had their priority set manually
earlier. I think that explains why they disappear.
Best,
Jianwen
> On Sep 16, 2020, at 4:00 PM, SJTU wrote:
>
> Hi,
>
> I’m using sprio from SLURM 19.05 to inspect job queuing on my cluster. I
> found that sprio prints an
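For anyone searching the archive later, this is roughly how the behaviour can be checked; the job ID below is made up, and the assumption (taken from this thread) is that sprio only lists jobs whose priority still comes from the multifactor plugin:

    # Job shows up in sprio while its priority is computed by the multifactor plugin
    sprio -j 12345

    # An administrator fixes the priority by hand...
    scontrol update JobId=12345 Priority=50000

    # ...after which sprio no longer lists it, although squeue still shows it pending
    sprio -j 12345
    squeue -j 12345 -t PENDING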
Thank you, Paul. I'll try this workaround.
Best,
Jianwen
> On Sep 16, 2020, at 9:31 PM, Paul Edmon wrote:
>
> This is a feature of suspend. When Slurm suspends a job, it does not keep the
> CPUs used by that job reserved; instead it pauses the job and keeps the
> memory reserved but n
Hi Ahmet,
I know that Slurm isn't integrated with FlexLM, but I just wondered why
sacctmgr writes licenses in lower case, even though the database itself
supports upper case too.
And maybe there is some configuration parameter to change it ...
Thank you
Alexey
-----Original Message-----
From: mercan
S
Hello,
We are trying to work with remote licenses on slurmdbd and AWS RDS.
sacctmgr inserts licenses into the database only case-insensitively, always in
lower case.
We have a lot of EDA licenses that use mixed upper/lower case in their feature
names, so working with lower case only might be a problem.
I a
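For context, a rough sketch of the sacctmgr calls involved; the resource, server and cluster names are invented, and the lower-casing described above happens regardless of how the name is typed:

    # Add a remote license resource tracked by slurmdbd (all names invented)
    sacctmgr add resource Type=license Name=MixedCase_Feature Server=flexlm01 \
             Count=100 PercentAllowed=100 Cluster=mycluster

    # Listing it back shows the feature name stored in lower case
    sacctmgr show resource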
On Wed, 16 Sep 2020, Niels Carl W. Hansen wrote:
If you explicitly specify the account, f.ex. 'sbatch -A myaccount'
then 'slurm.log_info("submit -- account %s", job_desc.account)'
works.
Great, thanks - that's working!
Of course I have other problems... :(
Cheers,
Mark
--
Mark Dixon Tel:
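For completeness, a quick way to exercise that path end to end; the account name, the wrapped command, and the slurmctld log location are only examples and will differ per site:

    # Submit with an explicit account so job_desc.account is populated in job_submit.lua
    sbatch -A myaccount --wrap="sleep 60"

    # The slurm.log_info() line from the Lua hook ends up in the slurmctld log
    grep "submit -- account" /var/log/slurm/slurmctld.log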
Hi;
The Slurm license feature is just a simple counter, nothing more than that.
It cannot connect to the license server to read or update the licenses.
Slurm only counts the licenses in use and subtracts them from the configured
license count; if the result is zero, it does not run new jobs. The license
feature names and sl
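To make the counter behaviour concrete, a minimal sketch; the license name and counts are made up:

    # In slurm.conf -- just a local counter, no connection to any license server:
    #   Licenses=eda_tool:10

    # A job reserves two of the ten counts for its lifetime:
    sbatch -L eda_tool:2 job.sh

    # Configured and in-use counts can be inspected with:
    scontrol show licenses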
This is a feature of suspend. When Slurm suspends a job, it does not keep the
CPUs used by that job reserved; instead it pauses the job and keeps the memory
reserved but not the CPUs.
If you want to pause jobs without contention, you need to use scancel with the:
*-s*, *--signal
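For the archive, the two approaches side by side; the job ID is made up, and SIGSTOP/SIGCONT is my reading of the truncated --signal advice above, so treat the exact signals as an assumption:

    # scontrol suspend keeps the job's memory allocated but frees its CPUs for
    # scheduling, so other jobs can land on the node:
    scontrol suspend 12345
    scontrol resume 12345

    # scancel --signal only delivers a signal; the allocation stays intact, so a
    # stopped job keeps its CPUs reserved and there is no contention:
    scancel --signal=STOP 12345
    scancel --signal=CONT 12345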
Hi,
I am using SLURM 19.05 and found that SLURM may launch jobs onto nodes with
suspended jobs, which leads to resource contention once the suspended jobs are
resumed. Steps to reproduce this issue (a script version is sketched below) are:
1. Launch 40 one-core jobs on a 40-core compute node.
2. Suspend all 40 jobs on that comp
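A rough script version of those steps; the node name, core count and sleep time are placeholders:

    # 1. Fill a 40-core node with one-core jobs (node name is a placeholder)
    for i in $(seq 40); do
        sbatch -w node001 -n 1 --wrap="sleep 3600"
    done

    # 2. Suspend every job running on that node
    squeue -w node001 -t RUNNING -h -o "%i" | xargs -n 1 scontrol suspend

    # 3. Submit one more job; Slurm may start it on cores that the suspended jobs
    #    will need again once they are resumed
    sbatch -w node001 -n 1 --wrap="hostname"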
On 16/09/20 10:44, Mark Dixon wrote:
> It seems "pairs" wasn't lying, job_desc really is empty?
Nope. In my case, at least some fields are populated: .partition,
.max_cpus (usually "a lot"), .max_nodes, .min_cpus, .min_nodes.
I also tried adding
local j = {}
j.uid = submit_uid or -1
If you explicitly specify the account, f.ex. 'sbatch -A myaccount'
then 'slurm.log_info("submit -- account %s", job_desc.account)'
works.
/Niels Carl
On Wed, 16 Sep 2020, Mark Dixon wrote:
> On Wed, 16 Sep 2020, Diego Zuccato wrote:
> ...
> > From the source it seems these fields are availabl
On Wed, 16 Sep 2020, Diego Zuccato wrote:
...
From the source it seems these fields are available:
account
comment
direct_set_prio
gres
job_id (always nil? Maybe no JobID yet?)
job_state
licenses
max_cpus
max_nodes
min_
On 15/09/20 19:47, Mark Dixon wrote:
> I was expecting job_desc to be iterated over and show some interesting
> stuff in my log, but it looks empty. Any ideas why that might be, please?
I noticed the same thing. I needed it to check which fields were available,
but it seems ipairs doesn't work as
Hi,
I’m using sprio from SLURM 19.05 to inspect job queuing on my cluster. I
found that sprio prints an incomplete list of pending jobs, far fewer than the
ones from `squeue --state=pending`. No extra options seem to be available for
sprio. I would appreciate any suggestions.
Thank you!
Jianwen
[root@