LimitSTACK=infinity
LimitMSGQUEUE=12345678
See https://www.baeldung.com/linux/ulimit-limits-systemd-units for the list of
possibilities...
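For example, those limits can be raised for slurmd with a systemd drop-in; the path below is a sketch, assuming a standard systemd-managed slurmd unit:

```ini
# /etc/systemd/system/slurmd.service.d/limits.conf (sketch)
[Service]
LimitSTACK=infinity
LimitMSGQUEUE=infinity
```

followed by "systemctl daemon-reload && systemctl restart slurmd" so jobs inherit the new limits.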
-- Kind regards
Franky
From: slurm-users
<mailto:slurm-users-boun...@lists.schedmd.com>
on behalf of Ozeryan, Vladimir
<mailto:vladimir.ozer...@jhuapl.edu>
Hello everyone,
I am having the following issue, on the compute nodes "POSIX message queues" is
set to unlimited for soft and hard limits.
However, when I do "srun -w node01 --pty bash -I" and then, once I am on the
node, "cat /proc/SLURMPID/limits", it shows that "Max msgqueue size" is set
to a lower, finite value.
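One way to confirm what a job step actually inherits is to query the limit from inside the allocation; this is a minimal sketch using Python's standard resource module (RLIMIT_MSGQUEUE is Linux-specific):

```python
import resource

# Read the POSIX message queue limit of the current process,
# e.g. from a shell obtained via "srun --pty bash -I".
soft, hard = resource.getrlimit(resource.RLIMIT_MSGQUEUE)
fmt = lambda v: "unlimited" if v == resource.RLIM_INFINITY else v
print("Max msgqueue size:", fmt(soft), "/", fmt(hard))
```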
Hello,
I would set "--ntasks" to the number of CPUs you want to use for your job and
remove "--cpus-per-task", which defaults to 1.
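As a sketch, that advice amounts to the following job script (job name and executable are placeholders):

```shell
#!/bin/bash
#SBATCH --ntasks=16          # one task per CPU you want; --cpus-per-task defaults to 1
#SBATCH --job-name=mpi-test  # placeholder name
srun ./my_mpi_program        # placeholder executable
```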
From: slurm-users On Behalf Of Selch,
Brigitte (FIDD)
Sent: Friday, September 22, 2023 7:58 AM
To: slurm-us...@schedmd.com
Subject: [EXT] [slurm-users] Submi
Hello everyone,
Has anyone here ever run an MCNP6.2 parallel job via the Slurm scheduler?
I am looking for a simple test job to test my software compilation.
Thank you,
Vlad Ozeryan
Hello everyone,
I am trying to get the Slurm REST API working.
JWT is configured and a token generated. All daemons are configured and running
(slurmdbd, slurmctld, and slurmrestd). I can successfully reach the Slurm API as
the "slurm" user, but that's it.
bash-4.2$ echo -e "GET /slurm/v0.0.39/jobs H
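For comparison, a token-authenticated request over HTTP might look like this; the host and port are assumptions about the local slurmrestd configuration:

```shell
# Generate a JWT for the calling user (scontrol prints SLURM_JWT=...) and
# query the jobs endpoint. Assumes slurmrestd listens on localhost:6820.
export $(scontrol token)
curl -s \
  -H "X-SLURM-USER-NAME: $USER" \
  -H "X-SLURM-USER-TOKEN: $SLURM_JWT" \
  http://localhost:6820/slurm/v0.0.39/jobs
```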
On Thu, Jun 22, 2023 at 5:31 PM Ozeryan, Vladimir
<mailto:vladimir.ozer...@jhuapl.edu> wrote:
Hello,
We have the following configured and it seems to be working ok.
CgroupAutomount=yes
ConstrainCores=yes
ConstrainDevices=yes
ConstrainRAMSpace=yes
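For completeness, cgroup.conf settings like these only take effect when slurm.conf selects the cgroup plugins; the lines below are a sketch, not a confirmed part of the poster's setup:

```ini
# slurm.conf (sketch)
ProctrackType=proctrack/cgroup
TaskPlugin=task/affinity,task/cgroup
```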
Vlad.
From: slurm-users
mailto:slurm-users-bou
DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT=""
GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0 cgroup_enable=memory
swapaccount=1"
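After editing /etc/default/grub, the change only takes effect once the GRUB config is regenerated and the node rebooted; the exact command varies by distribution (both shown as a sketch):

```shell
sudo update-grub                               # Debian/Ubuntu
# sudo grub2-mkconfig -o /boot/grub2/grub.cfg  # RHEL-family alternative
sudo reboot
```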
What other cgroup settings need to be set?
Thank you!
-b
On Thu, Jun 22, 2023 at 4:02
--mem=5G should allocate 5G of memory per node.
Are your cgroups configured?
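A quick way to check whether memory enforcement is actually active (these are standard Slurm config keys; the cgroup.conf path is site-dependent):

```shell
scontrol show config | grep -Ei 'TaskPlugin|ProctrackType'
grep -i ConstrainRAMSpace /etc/slurm/cgroup.conf   # path may differ per site
```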
From: slurm-users On Behalf Of Boris
Yazlovitsky
Sent: Thursday, June 22, 2023 3:28 PM
To: slurm-users@lists.schedmd.com
Subject: [EXT] [slurm-users] --mem is not limiting the job's memory
You should be able to specify both partitions in your sbatch submission script,
unless there is some other configuration preventing this.
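Slurm accepts a comma-separated partition list and runs the job in whichever listed partition can start it first; the partition names below are hypothetical:

```shell
#!/bin/bash
#SBATCH --partition=short,long   # hypothetical partition names
#SBATCH --ntasks=1
srun hostname
```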
-Original Message-
From: slurm-users On Behalf Of Xaver
Stiensmeier
Sent: Monday, April 17, 2023 5:37 AM
To: slurm-users@lists.schedmd.com
Subject: [
Hello,
All you need to setup is the path to the Slurm binaries whether they are
available via shared file system or locally on the submit nodes (srun, sbatch,
sinfo, sacct, etc.) and possibly man pages.
Probably want to do this somewhere in /etc/profile.d or equivalent.
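A minimal sketch of such a profile script, assuming Slurm is installed under /opt/slurm (adjust the prefix to your install):

```shell
# /etc/profile.d/slurm.sh (sketch)
export PATH=/opt/slurm/bin:$PATH
export MANPATH=/opt/slurm/share/man:$MANPATH
```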
-Original Message--
1. I don’t see where you are specifying a default partition (Default=YES).
2. In “NodeName=*” you have Gres=gpu:2, so all nodes on that line have 2
GPUs. Create another “NodeName” line below it and list your non-GPU nodes there
without the Gres flag.
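Put together, the two suggestions might look like this in slurm.conf; node names, counts, and the partition name are illustrative only:

```ini
# slurm.conf (sketch)
NodeName=gpu[01-02] Gres=gpu:2 CPUs=32 RealMemory=128000
NodeName=cpu[01-04] CPUs=32 RealMemory=128000
PartitionName=main Nodes=ALL Default=YES MaxTime=INFINITE State=UP
```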
From: slurm-users On Behalf Of Purvesh
Try with an sbatch script and use the "mpirun" executable without "--mpi=pmi2".
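A sketch of that approach (module name and executable are placeholders); mpirun builds with Slurm support read the allocation from the SLURM_* environment:

```shell
#!/bin/bash
#SBATCH --ntasks=8
module load openmpi   # if your site uses environment modules
mpirun ./my_app       # no --mpi flag needed; mpirun detects the Slurm allocation
```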
From: slurm-users On Behalf Of masber
masber
Sent: Tuesday, March 1, 2022 12:54 PM
To: slurm-users@lists.schedmd.com
Subject: [EXT] [slurm-users] step creation temporarily disabled, retrying
(Requested nodes are busy)
I am not sure about the rest of the Slurm world, but since I will most likely
update OpenMPI more often than Slurm, I've configured and built OpenMPI with
UCX and Slurm support; I believe both are enabled by default unless you pass
the corresponding "--without-*" configure option. Works great so far!
-Original Message
Hello,
In your case the 15-minute partition "TimeLimit" is a default value and should
only apply if the user has not specified a time limit for their job in their
sbatch script or srun command, or has specified one incorrectly.
From: slurm-users On Behalf
max_script_size=#
Specify the maximum size of a batch script, in bytes. The default value is 4
megabytes. Larger values may adversely impact system performance.
I have users who have requested an increase to this setting. What system
performance issues might arise from changing that value?
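For reference, the option lives under SchedulerParameters in slurm.conf; the 16 MB value below is illustrative only:

```ini
# slurm.conf (sketch): raise the batch-script cap to 16 MB
SchedulerParameters=max_script_size=16777216
```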