Hi all,
Does SLURM support similar functionality to the PBS options
#PBS -W stagein
#PBS -W stageout
Looking through the docs and even the qsub wrapper commands, I don't
see an analogous way to implement the same with SLURM via sbatch. I
see the documentation about burst buffers, but that doesn'
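Slurm has no direct sbatch analogue of the PBS stagein/stageout options; a common workaround is to do the staging inside the batch script itself. A minimal sketch, assuming shared storage for input/output (all paths and variable names here are hypothetical, and SLURM_TMPDIR is site-specific, not set by Slurm itself):

```shell
#!/bin/bash
#SBATCH --job-name=stage-demo
#SBATCH --time=00:10:00

# Slurm itself has no stagein/stageout directive, so copy data
# manually at the start and end of the job script.
SRC="${SRC:-$(mktemp -d)}"              # stage-in source (hypothetical path)
DEST="${DEST:-$(mktemp -d)}"            # stage-out destination (hypothetical path)
WORK="${SLURM_TMPDIR:-$(mktemp -d)}"    # node-local scratch (SLURM_TMPDIR is site-specific)

cp -r "$SRC"/. "$WORK"/                 # stage in

# ... the real computation would run here, producing output in $WORK ...
echo "processed" > "$WORK/result.txt"

mkdir -p "$DEST"
cp "$WORK/result.txt" "$DEST"/          # stage out
```

On systems with burst buffer support, #BB/#DW directives can perform staging declaratively, but the in-script pattern above works on any cluster.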
Hi! I just upgraded to 17.11.2 and when I try to start slurmctld I get
this:
slurmctld[12552]: fatal: You are running with a database but for some
reason we have less TRES than should be here (4 < 5) and/or the
"billing" TRES is missing. This should only happen if the database is
down after an
On Monday, 13 November 2017 11:18:08 CET Nicholas McCollum wrote:
> Now that there is a slurm-users mailing list, I thought I would share
> something with the community that I have been working on to see if anyone
> else is interested in it. I have a lot of students on my cluster and I
> really wa
On Friday, 5 January 2018 12:08:42 CET Nicolò Parmiggiani wrote:
> hi,
>
> thank you for your answer.
>
> If I set this number to 10 and I have 20 nodes, will the maximum number of
> CPUs in this partition be 10*20 or only 10?
It is a limit per node, therefore 10*20 CPUs (in case of a CPU with
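Since MaxCPUsPerNode is applied on each node separately, a 20-node partition with MaxCPUsPerNode=10 can use up to 10*20 = 200 CPUs in total. A minimal slurm.conf sketch (node and partition names are hypothetical):

```
# two hypothetical nodes
NodeName=node01 CPUs=30
NodeName=node02 CPUs=70
# the limit applies per node: jobs in "low" may use at most 10 CPUs
# on each node, i.e. up to 10 * <number of nodes> CPUs partition-wide
PartitionName=low  Nodes=node01,node02 MaxCPUsPerNode=10 Priority=1
PartitionName=high Nodes=node01,node02 Priority=10
```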
I have for instance two nodes:
1) 30 CPUs
2) 70 CPUs
so in total I have 100 CPUs.
Two partitions:
1) low priority
2) high priority
I also have two data processing pipelines, the first one uses low priority
partition and it can use all CPUs available. The second one uses high
priority partition a
Are the partitions dynamic? I.e., is the desire to limit based on partitions
that float across a larger number of nodes, so that a particular partition is
limited to a maximum share of those nodes?
~~
Ade
From: slurm-users [mailto:slurm-users-boun...@lists.schedmd.com] On Behalf Of
hi,
thank you for your answer.
If I set this number to 10 and I have 20 nodes, will the maximum number of
CPUs in this partition be 10*20 or only 10?
Thank you.
2018-01-05 11:59 GMT+01:00 Markus Köberl:
> On Friday, 5 January 2018 10:55:47 CET Nicolò Parmiggiani wrote:
> > Hi,
> >
> > ca
On Friday, 5 January 2018 10:55:47 CET Nicolò Parmiggiani wrote:
> Hi,
>
> can someone help me? How can I limit the maximum number of CPUs that a
> partition can use?
Have a look at the option MaxCPUsPerNode for partitions.
regards
Markus Köberl
--
Markus Koeberl
Graz University of Technology
Sig
Hi,
can someone help me? How can I limit the maximum number of CPUs that a
partition can use?
Thank You.
2018-01-02 18:28 GMT+01:00 Nicolò Parmiggiani:
> I have only one server and two data analysis pipelines, one for standard
> jobs and other one for high priority job that can be triggered s
I use the following test:
if job_desc.pn_min_memory == slurm.NO_VAL64 then
...
for testing that neither --mem nor --mem-per-cpu has been specified. It
seems to work. (slurm 17.02.7)
--
Regards,
Bjørn-Helge Mevik, dr. scient,
Department for Research Computing, University of Oslo
signat
Hi Ashlee,
the min_mem_per_cpu parameter is in fact not nil. If it is not set by
the user, the value is 9223372036854775806
Best
Marcus
On 01/05/2018 08:58 AM, Yinping Ma wrote:
hello, shenglong wang
Thanks for your reply, I tried this before.
I write this in job_submit.lua:
hello, shenglong wang
Thanks for your reply, I tried this before.
I write this in job_submit.lua:
-
function slurm_job_submit(job_desc, part_list, submit_uid)
if job_desc.min_mem_per_cpu ~= nil the
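Combining the checks discussed in this thread, a minimal job_submit.lua sketch might look like the following. It assumes Slurm's Lua job submit plugin environment (the job_desc fields and the slurm table are provided by slurmctld and are not available standalone), and the 4096 MB default is a hypothetical site policy:

```lua
-- Sketch of a job_submit.lua applying a default memory request.
-- Unset numeric fields in job_desc are NO_VAL64, not nil, so
-- compare against slurm.NO_VAL64 rather than testing for nil.

function slurm_job_submit(job_desc, part_list, submit_uid)
    if job_desc.pn_min_memory == slurm.NO_VAL64 then
        -- neither --mem nor --mem-per-cpu was specified;
        -- apply a hypothetical per-CPU default (in MB)
        job_desc.min_mem_per_cpu = 4096
    end
    return slurm.SUCCESS
end

function slurm_job_modify(job_desc, job_rec, part_list, modify_uid)
    return slurm.SUCCESS
end
```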