I am just running an interactive job with "srun -I --pty /bin/bash" and
then running "echo $SLURM_MEM_PER_NODE", but it prints nothing. Does the
variable have to be defined in some conf file?
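(One guess on my side, which I have not been able to confirm: perhaps the
variable is only exported when memory is requested explicitly, e.g.

  srun -I --mem=4G --pty /bin/bash
  echo $SLURM_MEM_PER_NODE    # 4096, if that guess is right

so with no --mem at all, nothing gets set.)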
On 20/08/18 09:59, Chris Samuel wrote:
On Monday, 20 August 2018 4:43:57 PM AEST Juan A. C
That variable somehow does not exist in my environment. Is it possible
that my Slurm version (17.02.3) does not include it?
Thanks
On 17/08/18 11:04, Bjørn-Helge Mevik wrote:
Yes. It is documented in sbatch(1):

SLURM_MEM_PER_CPU
        Same as --mem-per-cpu
SLURM_MEM_PER_NODE
        Same as --mem
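A quick way to see both in action (just a sketch; the printed values depend
on what you actually request) is a trivial batch script:

  #!/bin/bash
  #SBATCH --cpus-per-task=2
  #SBATCH --mem-per-cpu=2G
  echo "SLURM_MEM_PER_CPU  = $SLURM_MEM_PER_CPU"   # 2048 with the request above
  echo "SLURM_MEM_PER_NODE = $SLURM_MEM_PER_NODE"  # set instead when --mem is used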
Dear Community,
does anyone know whether there is an environment variable, analogous to
$SLURM_CPUS_ON_NODE, but for the RAM requested with the --mem argument?
Thanks
Dear Slurm users,
Is it possible to allocate more resources for a job that is already running
in an interactive shell? I just get the default allocation of 1 core and 2 GB RAM:
srun -I -p main --pty /bin/bash
The node and queue where the job is located have 120 GB and 4 cores
available.
I just want to use more cores
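(As far as I can tell, a regular user cannot grow an allocation once the job
is running, so the only thing I can think of is to request the resources up
front when starting the interactive shell, e.g.

  srun -I -p main -c 4 --mem=100G --pty /bin/bash

but maybe there is a better way?)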
"Nope! You may not run this
executable on this partition"
However it might be worth contacting the authors and discussing this.
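One way something like that could be done (only a sketch on my part, untested,
and the paths and partition name are placeholders): install the real binary
somewhere non-obvious and put a small wrapper in the users' PATH that checks
which partition the job is running in:

  #!/bin/bash
  # wrapper for software X: allow execution only from the "big" partition
  if [ "$SLURM_JOB_PARTITION" != "big" ]; then
      echo "Nope! You may not run this executable on this partition" >&2
      exit 1
  fi
  exec /opt/softwareX/bin/X.real "$@"    # placeholder path to the real binary

It only really helps if the real binary itself is not world-executable, of course.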
On 15 January 2018 at 14:20, Juan A. Cordero Varelaq
<bioinformatica-i...@us.es> wrote:
But what if the user knows the path to such an application?
2018 11:31 AM, Juan A. Cordero Varelaq wrote:
Dear Community,
I have a node (20 Cores) on my HPC with two different partitions: big
(16 cores) and small (4 cores). I have installed software X on this
node, but I want only one partition to have rights to run it.
Is it then possible to restrict the execution of a specific application
to a single partition?
put in place.
-Paul Edmon-
On 1/4/2018 6:44 AM, Juan A. Cordero Varelaq wrote:
Hi,
A couple of jobs have been running for almost one month and I would like
to change the resource limits to prevent users from running for so long.
Besides, I'd like to set AccountingStorageEnforce to qos,safe. If I make
such changes, would the running jobs be stopped (the user running the
job
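For reference, the kind of change I have in mind looks roughly like the
following (the QOS name and limit are only placeholders):

  # cap the wall time on the QOS the users submit under
  sacctmgr modify qos where name=normal set MaxWall=7-00:00:00
  # in slurm.conf
  AccountingStorageEnforce=qos,safe
  # then restart slurmctld so the new enforcement setting takes effect
  systemctl restart slurmctld

My understanding, which may well be wrong, is that such limits are checked
when jobs are submitted or started, so jobs that are already running would
keep their original time limits rather than being killed.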
Hi,
I have the following configuration:
* head node: hosts the slurmctld and the slurmdbd daemons.
* compute nodes (4): host the slurmd daemons.
I need to change a couple of lines of the slurm.conf corresponding to
the slurmctld. If I restart its service, would I also have to restart
the slurmd daemons on the compute nodes?
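(From what I have read, and I may be misreading it, slurm.conf is supposed to
be kept identical on every node, and for most parameters either of the
following is enough after editing it:

  scontrol reconfigure                 # all daemons re-read the config
  # or, explicitly:
  systemctl restart slurmctld          # on the head node
  systemctl restart slurmd             # on each compute node

with a full slurmd restart only needed for certain parameters.)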
On 21 November 2017 at 06:35, Philip Kovacs <pkde...@yahoo.com> wrote:
Try adding this to your conf:
PluginDir=/usr/lib64/slurm
On Monday, November 20, 2017 6:48 AM, Juan A. Cordero Varelaq
<bioinformatica-i...@us.es>
0/11/17 12:11, Lachlan Musicman wrote:
On 20 November 2017 at 20:50, Juan A. Cordero Varelaq
<bioinformatica-i...@us.es> wrote:
$ systemctl start slurmdbd
Job for slurmdbd.service failed because the control process
exited with error code. See "systemctl status slurmdbd.service" and
"journalctl -xe" for details.
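(To see the underlying error, the usual first steps on a standard systemd
setup would be:

  systemctl status slurmdbd.service
  journalctl -u slurmdbd
  slurmdbd -D -vvv      # run in the foreground with verbose logging

The foreground run usually prints the actual reason, e.g. a database or
plugin problem.)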
Hi,
Slurm 17.02.3 was installed on my cluster some time ago, but recently I
decided to use SlurmDBD for accounting.
After installing several packages (slurm-devel, slurm-munge,
slurm-perlapi, slurm-plugins, slurm-slurmdbd and slurm-sql) and MariaDB
in CentOS 7, I created an SQL database:
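(what follows is only a rough reconstruction of that step; the database name,
user and password are placeholders, not necessarily what was actually used)

  mysql -u root -p
  create database slurm_acct_db;
  grant all on slurm_acct_db.* to 'slurm'@'localhost' identified by 'PASSWORD';
  flush privileges;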