Hi,
my SLURM cluster has a partition configured with a "TimeLimit" of 8
hours. However, a job has now been running for 9h30m and has not been
cancelled. During those nine and a half hours, a script has executed "scontrol
update partition=mypartition state=down" to disable this partition
(educationa
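For reference, partition limits and state can be inspected and changed with scontrol. A sketch (only "mypartition" comes from the message, the rest is illustrative); note that, per the scontrol documentation, marking a partition DOWN stops new jobs from being scheduled but does not cancel jobs that are already running, which may be relevant to the behaviour described here:

```
# Sketch: inspect and control the partition named in the message
scontrol show partition mypartition                   # shows TimeLimit and State
scontrol update PartitionName=mypartition State=DOWN  # stop scheduling new jobs
scontrol update PartitionName=mypartition State=UP    # re-enable it
```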
Hi,
I would like to know whether it is possible to limit the size of the
output file generated by a job using a Lua script. I have looked at the
"job_descriptor" structure in slurm.h but have not found any field for that.
...I need this because a user submitted a job that generated a 500
GB out
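For what it's worth, the job_descriptor seen by job_submit.lua has no field for output-file size, so this likely cannot be enforced there. A common workaround (a sketch, not from the thread) is a file-size ulimit placed in the job script itself, before the application is launched:

```shell
# Sketch: cap the size of any file the job may create.
# In bash, "ulimit -f" is measured in 1024-byte blocks,
# so a job wanting a 1 GiB cap would use 1048576.
(
  ulimit -f 1   # 1 KiB cap, just to demonstrate the mechanism
  ulimit -f     # print the limit now in force
)
```

A write past the limit delivers SIGXFSZ to the process; filesystem quotas are the other common approach.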
Hello,
Honestly, I don't know whether my question belongs on this mailing list...
but I will explain my problem and then you can tell me whatever
you think ;)
I manage a SLURM cluster composed of 3 networks:
a gigabit network used for NFS shares (192.168.11.
Try running with "srun", not "mpirun"
Hello everybody,
I submit a job with the sbatch command (sbatch myprog.sh). My myprog.sh is
=
#!/bin/bash
#SBATCH --partition=part2
#SBATCH --ntasks=20
#SBATCH --nodelist=
#SBATCH --cpus-per-task=1
#SBAT
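A minimal complete script along these lines might look as follows (values for the directives that were cut off are illustrative only, and the program is launched with srun rather than mpirun, as one reply here suggests):

```
#!/bin/bash
#SBATCH --partition=part2
#SBATCH --ntasks=20
#SBATCH --nodelist=node[01-02]   # illustrative; the original value was cut off
#SBATCH --cpus-per-task=1
#SBATCH --job-name=myprog        # illustrative

srun ./myprog                    # srun, not mpirun
```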
Hello,
after configuring SLURM-17.11.5 with accounting/mysql, it seems the
database is not recording any jobs. If I run "sacct -", I get
this output:
sacct: Jobs eligible from Tue May 08 00:00:00 2018 - Now
sacct: debug: Options selected:
opt_co
I'm using AccountingStorageType=accounting_storage/filetxt because I'm
running some tests. With "filetxt", could I get "account" (username)
with sacct?
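A sketch of asking sacct for those fields explicitly (note that sacct's "User" field is the username, while "Account" is the bank account; with accounting_storage/filetxt only a subset of fields is recorded, so some columns may come back empty):

```
sacct --format=JobID,JobName,User,Account,State,ExitCode
```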
Hello,
when I run "sacct", output is this:
       JobID    JobName  Partition    Account  AllocCPUS      State ExitCode
------------ ---------- ---------- ---------- ---------- ---------- --------
[...]
2810                bas     nodo.q     (null)          0     FAILED      2:0
2811               bash        nod
Hello,
I have written my own job_submit.lua script to limit "srun"
executions to one processor, one task and one node. If I test it with
"srun", everything works fine. However, if I now try to run an sbatch job with
"-N 12" or "-n 2", job_submit.lua is also invoked and my job is
rejected b
My purpose with the job_submit.lua script is to reject any "srun" with more
than one node or more than one CPU; in other words, to allow only "srun -N 1
-n 1". For this reason, in my future script I use an "if" to
compare those values:
function slurm_job_submit(job_desc, part_list
Hello,
I'm writing my own "job_submit.lua" to control in which
partition a user can run "srun" and how many CPUs and nodes are
allowed. I want to allow "srun" only in the "interactive" partition, with
only one core and one node. I have written this script but I'm gett
I'm trying to compile SLURM-17.02.7 with "lua" support by executing
"./configure && make && make contribs && make install", but make does
nothing in src/plugins/job_submit/lua and I don't know why...
How do I compile that plugin? The rest of the plugins compile
with no problems (defaults,
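In my experience this usually means configure did not find the Lua development headers, in which case the job_submit/lua plugin is silently skipped rather than failing the build. A sketch of how to check (package names are illustrative and vary by distribution):

```
# Did configure detect Lua?
grep -i lua config.log

# If not, install the Lua development package and rebuild, e.g.:
#   RHEL/CentOS:   yum install lua-devel
#   Debian/Ubuntu: apt-get install liblua5.1-0-dev
./configure && make && make contribs && make install
```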
Hello,
I would like to configure SLURM with two partitions:
one called "batch.q" only for batch jobs
one called "interactive.q" only for interactive jobs
What I want to get is a batch partition that doesn't allow "srun"
commands from the command line and
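The checks discussed in these job_submit.lua messages could be sketched roughly as follows. This is only a sketch: the partition names are taken from the messages, NO_VAL is the sentinel Slurm uses for unset 32-bit fields, and it relies on the fact that batch jobs reach the plugin with job_desc.script set while interactive srun submissions do not (which also explains why a plain node/task check ends up rejecting sbatch jobs too):

```lua
-- Sketch for job_submit.lua (runs inside slurmctld, not standalone).
-- Assumes 17.x field names; partition names are from the thread.
local NO_VAL = 4294967294  -- Slurm's "unset" sentinel for 32-bit fields

local function set_and_above(v, limit)
    return v ~= nil and v ~= NO_VAL and v > limit
end

function slurm_job_submit(job_desc, part_list, submit_uid)
    local is_batch = (job_desc.script ~= nil)  -- sbatch carries its script

    if is_batch then
        -- batch jobs may not use the interactive partition
        if job_desc.partition == "interactive.q" then
            slurm.log_user("batch jobs must be submitted to batch.q")
            return slurm.ERROR
        end
    else
        -- interactive (srun) jobs: only interactive.q, one node, one task
        if job_desc.partition ~= "interactive.q" then
            slurm.log_user("srun is only allowed in interactive.q")
            return slurm.ERROR
        end
        if set_and_above(job_desc.min_nodes, 1) or
           set_and_above(job_desc.num_tasks, 1) then
            slurm.log_user("interactive jobs are limited to -N 1 -n 1")
            return slurm.ERROR
        end
    end
    return slurm.SUCCESS
end

function slurm_job_modify(job_desc, job_rec, part_list, modify_uid)
    return slurm.SUCCESS
end
```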
terpretation of this parameter varies by SchedulerType.
Multiple options may be comma separated.
max_script_size=#
Specify the maximum size of a batch script, in bytes. The
default value is 4 megabytes. Larger values may adversely impact system
performance.
On 11/09/2017 03:56 AM, sysadmin.caos wrote:
Hello,
A researcher who is using a SLURM cluster (version 17.02.7) has created
a submit script whose size is 8 MB (yeah!!). I have read that SLURM
limits scripts to 4 MB... Can this limit be changed?
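Judging from the man-page excerpt quoted above, yes: max_script_size is one of the comma-separated SchedulerParameters options, so a sketch of the slurm.conf change (16 MB chosen arbitrarily, value in bytes) would be:

```
# slurm.conf sketch: raise the 4 MB default batch-script limit to 16 MB
SchedulerParameters=max_script_size=16777216
```

If I remember correctly, slurmctld needs a restart or an "scontrol reconfigure" to pick up the change.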
Thanks.