Re: [slurm-users] ulimits

2023-11-16 Thread Ozeryan, Vladimir
LimitSTACK=infinity LimitMSGQUEUE=12345678 See https://www.baeldung.com/linux/ulimit-limits-systemd-units for the list of possibilities... -- Kind regards Franky From: slurm-users <slurm-users-boun...@lists.schedmd.com> on behalf of Ozeryan, Vladimir <vladimir.ozer...@jhuapl.edu>
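
A minimal sketch of the kind of systemd override being described (the drop-in path, directive values, and restart step are illustrative, not taken from the thread):

    # /etc/systemd/system/slurmd.service.d/limits.conf  (hypothetical drop-in)
    [Service]
    LimitSTACK=infinity
    LimitMSGQUEUE=infinity

    # reload unit files and restart slurmd so jobs it spawns inherit the new limits
    systemctl daemon-reload
    systemctl restart slurmd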

[slurm-users] ulimits

2023-11-16 Thread Ozeryan, Vladimir
Hello everyone, I am having the following issue: on the compute nodes "POSIX message queues" is set to unlimited for both soft and hard limits. However, when I do "srun -w node01 --pty bash -I" and then, once I am on the node, "cat /proc/SLURMPID/limits", it shows that "Max msgqueue size" is set to …
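
A quick way to compare what a Slurm-launched process sees versus the slurmd daemon itself (node name reused from the message above); one thing worth checking here is PropagateResourceLimits in slurm.conf (default ALL), which carries the submitting shell's ulimits into the job:

    # limits seen by a process launched through Slurm on node01
    srun -w node01 sh -c 'grep -i msgqueue /proc/self/limits'

    # limits of the slurmd daemon itself (run this on node01)
    grep -i msgqueue /proc/$(pgrep -o slurmd)/limits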

Re: [slurm-users] [EXT] Submitting hybrid OpenMPI and OpenMP Jobs

2023-09-22 Thread Ozeryan, Vladimir
Hello, I would set "--ntasks" = the number of CPUs you want to use for your job and remove "--cpus-per-task", which would be set to 1 by default. From: slurm-users On Behalf Of Selch, Brigitte (FIDD) Sent: Friday, September 22, 2023 7:58 AM To: slurm-us...@schedmd.com Subject: [EXT] [slurm-users] Submi…
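
A minimal sbatch sketch of that suggestion (task count, node count, and binary name are placeholders; this launches one task per requested CPU):

    #!/bin/bash
    #SBATCH --ntasks=32          # total tasks = number of CPUs wanted for the job
    #SBATCH --nodes=2            # optional; Slurm can also place the tasks itself

    srun ./my_hybrid_app         # placeholder binary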

[slurm-users] MCNP6.2 test

2023-07-19 Thread Ozeryan, Vladimir
Hello everyone, Has anyone here ever run an MCNP6.2 parallel job via the Slurm scheduler? I am looking for a simple test job to test my software compilation. Thank you, Vlad Ozeryan
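
Not MCNP-specific, but a generic smoke test of the MPI/Slurm stack along these lines (node and task counts are illustrative):

    #!/bin/bash
    #SBATCH --job-name=mpi-smoke-test
    #SBATCH --nodes=2
    #SBATCH --ntasks-per-node=4

    srun hostname                # confirms tasks launch on all allocated nodes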

[slurm-users] Slurm Rest API error

2023-06-28 Thread Ozeryan, Vladimir
Hello everyone, I am trying to get access to the Slurm REST API working. JWT is configured and a token generated. All daemons are configured and running: slurmdbd, slurmctld and slurmrestd. I can successfully get to the Slurm API with the "slurm" user, but that's it. bash-4.2$ echo -e "GET /slurm/v0.0.39/jobs H…
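
For reference, a sketch of a token-based request against slurmrestd (host and port are placeholders; assumes JWT authentication is already configured, as described above):

    # obtain a JWT for the calling user; scontrol prints SLURM_JWT=...
    export $(scontrol token)

    curl -s \
      -H "X-SLURM-USER-NAME: $USER" \
      -H "X-SLURM-USER-TOKEN: $SLURM_JWT" \
      http://slurmrestd-host:6820/slurm/v0.0.39/jobs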

Re: [slurm-users] [EXT] --mem is not limiting the job's memory

2023-06-22 Thread Ozeryan, Vladimir
Jun 22, 2023 at 5:31 PM Ozeryan, Vladimir <vladimir.ozer...@jhuapl.edu> wrote: Hello, We have the following configured and it seems to be working ok. CgroupAutomount=yes ConstrainCores=yes ConstrainDevices=yes ConstrainRAMSpace=yes Vlad. From: slurm-users <slurm-users-bou…
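
As a sketch, the quoted settings live in cgroup.conf; cgroup enforcement also assumes the cgroup task plugin is selected in slurm.conf:

    # cgroup.conf (settings quoted above)
    CgroupAutomount=yes
    ConstrainCores=yes
    ConstrainDevices=yes
    ConstrainRAMSpace=yes

    # slurm.conf must also select the cgroup task plugin for these to take effect
    TaskPlugin=task/cgroup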

Re: [slurm-users] [EXT] --mem is not limiting the job's memory

2023-06-22 Thread Ozeryan, Vladimir
DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian` GRUB_CMDLINE_LINUX_DEFAULT="" GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0 cgroup_enable=memory swapaccount=1" what other cgroup settings need to be set? && thank you! -b On Thu, Jun 22, 2023 at 4:02 
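
If those kernel parameters are changed, the GRUB config has to be regenerated and the node rebooted; a sketch assuming a Debian-style system (which the lsb_release line above suggests):

    # after editing /etc/default/grub
    sudo update-grub       # grub2-mkconfig -o /boot/grub2/grub.cfg on RHEL-family systems
    sudo reboot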

Re: [slurm-users] [EXT] --mem is not limiting the job's memory

2023-06-22 Thread Ozeryan, Vladimir
--mem=5G should allocate 5G of memory per node. Are your cgroups configured? From: slurm-users On Behalf Of Boris Yazlovitsky Sent: Thursday, June 22, 2023 3:28 PM To: slurm-users@lists.schedmd.com Subject: [EXT] [slurm-users] --mem is not limiting the job's memory APL external email warning: …
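
A quick way to check whether cgroup-based memory enforcement is actually configured on the cluster (relevant settings only; exact values depend on the site):

    scontrol show config | grep -Ei 'TaskPlugin|SelectTypeParameters|JobAcctGatherType'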

Re: [slurm-users] [EXT] Submit sbatch to multiple partitions

2023-04-17 Thread Ozeryan, Vladimir
You should be able to specify both partitions in your sbatch submission script, unless there is some other configuration preventing this. -Original Message- From: slurm-users On Behalf Of Xaver Stiensmeier Sent: Monday, April 17, 2023 5:37 AM To: slurm-users@lists.schedmd.com Subject: [
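
A minimal sketch of submitting to more than one partition (partition names and binary are placeholders; Slurm starts the job in whichever listed partition can run it first):

    #!/bin/bash
    #SBATCH --partition=short,long     # comma-separated list of partitions
    #SBATCH --ntasks=1

    srun ./my_job                      # placeholder binary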

Re: [slurm-users] [EXT] Software and Config for Job submission host only

2022-05-12 Thread Ozeryan, Vladimir
Hello, All you need to set up is the path to the Slurm binaries (srun, sbatch, sinfo, sacct, etc.), whether they are available via a shared file system or locally on the submit nodes, and possibly the man pages. You probably want to do this somewhere in /etc/profile.d or equivalent. -Original Message--
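
A sketch of such a profile.d snippet (the install prefix is a placeholder):

    # /etc/profile.d/slurm.sh
    export PATH=/opt/slurm/bin:$PATH
    export MANPATH=/opt/slurm/share/man:$MANPATH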

Re: [slurm-users] [EXT] Distribute the node resources in multiple partitions and regarding job submission script

2022-04-12 Thread Ozeryan, Vladimir
1. I don’t see where you are specifying a “Default” partition (Default=YES). 2. In “NodeName=* ” you have Gres=gpu:2 (all nodes on that line have 2 GPUs); create another “NodeName” line below it and list your non-GPU nodes there without the Gres flag. From: slurm-users On Behalf Of Purvesh
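
An illustrative slurm.conf fragment along those lines (node names, CPU counts, and memory values are made up):

    # GPU nodes and non-GPU nodes on separate NodeName lines
    NodeName=gpu[01-04] CPUs=32 RealMemory=192000 Gres=gpu:2 State=UNKNOWN
    NodeName=cpu[01-08] CPUs=32 RealMemory=192000 State=UNKNOWN

    # one partition marked as the default
    PartitionName=gpu   Nodes=gpu[01-04] MaxTime=INFINITE State=UP
    PartitionName=batch Nodes=cpu[01-08] Default=YES MaxTime=INFINITE State=UP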

Re: [slurm-users] step creation temporarily disabled, retrying (Requested nodes are busy)

2022-03-04 Thread Ozeryan, Vladimir
Try with an sbatch script and use the "mpirun" executable without "--mpi=pmi2". From: slurm-users On Behalf Of masber masber Sent: Tuesday, March 1, 2022 12:54 PM To: slurm-users@lists.schedmd.com Subject: [EXT] [slurm-users] step creation temporarily disabled, retrying (Requested nodes are busy) AP…
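
A minimal sketch of that suggestion (node/task counts and binary name are placeholders; an OpenMPI built with Slurm support picks up the allocation automatically):

    #!/bin/bash
    #SBATCH --nodes=2
    #SBATCH --ntasks-per-node=2

    # launch through OpenMPI's own launcher instead of "srun --mpi=pmi2"
    mpirun ./my_mpi_app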

Re: [slurm-users] [EXT] Building Slurm with UCX support

2022-01-12 Thread Ozeryan, Vladimir
I am not sure about the rest of the Slurm world, but since I will most likely update OpenMPI more often than Slurm, I've configured and built OpenMPI with UCX and Slurm support, and I think they are both on by default unless you specify the "--without" option. Works great so far! -Original Message…
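
A hedged sketch of such a build (install prefixes are placeholders; the exact PMI/PMIx options vary by OpenMPI version):

    ./configure --prefix=/opt/openmpi \
                --with-ucx=/opt/ucx \
                --with-slurm \
                --with-pmi=/opt/slurm
    make -j && make install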

Re: [slurm-users] TimeLimit parameter

2021-12-02 Thread Ozeryan, Vladimir
Hello, In your case the 15-minute partition "TimeLimit" is a default value and should only apply if the user has not specified a time limit for their job in their sbatch script or srun command, has specified a lower value than the partition default, or has done so incorrectly. From: slurm-users On Behalf
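
An illustrative fragment of how the partition default and a job-level limit interact (partition and node names are made up):

    # slurm.conf: jobs that omit --time get DefaultTime, capped by MaxTime
    PartitionName=short Nodes=node[01-10] DefaultTime=00:15:00 MaxTime=01:00:00 State=UP

    # a job-level request overrides the partition default (within MaxTime)
    sbatch --time=00:05:00 job.sh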

[slurm-users] max_script_size

2021-09-13 Thread Ozeryan, Vladimir
max_script_size=# Specify the maximum size of a batch script, in bytes. The default value is 4 megabytes. Larger values may adversely impact system performance. I have users who've requested to increase this setting; what are some of the system performance issues that might arise from changing that value?
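
For reference, the setting lives under SchedulerParameters in slurm.conf; an illustrative value doubling the default:

    SchedulerParameters=max_script_size=8388608    # bytes; 8 MB in this example

Batch scripts are kept by slurmctld (under StateSaveLocation), so larger limits mainly mean bigger submission RPCs and more controller I/O and memory rather than anything on the compute nodes.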