[slurm-users] Re: srun weirdness

2024-05-17 Thread Patryk Bełzak via slurm-users
Hi, I wonder where this problem comes from, perhaps I am missing something, but we never had such issues with limits since we set them on the worker nodes in /etc/security/limits.d/99-cluster.conf:
```
* soft memlock 4086160 #Allow more Memory Locks for MPI
* hard memlock
```
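The archive preview cuts the file off after the `hard memlock` entry. For reference, a minimal sketch of what such a limits.d drop-in typically looks like (the hard value shown is only an assumption, since the original line is truncated):
```
# /etc/security/limits.d/99-cluster.conf
# Allow more memory locks for MPI jobs on the worker nodes.
# The soft value matches the message above; the hard value is illustrative.
*    soft    memlock    4086160
*    hard    memlock    unlimited
```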

[slurm-users] Re: srun weirdness

2024-05-17 Thread greent10--- via slurm-users
Hi, the problem arises when the login nodes (or submission hosts) have different ulimits – maybe the submission hosts are VMs and not physical servers. By default Slurm passes the ulimits from the submission host through to the job on the compute node, which can result in different settings depending on where the job was submitted from.
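One common way to decouple job limits from the submission host is Slurm's PropagateResourceLimits setting in slurm.conf. A minimal sketch (the option names are real slurm.conf parameters; the chosen values are just one possible policy, not what the poster necessarily uses):
```
# slurm.conf
# Don't propagate any ulimits from the submission host to the job...
PropagateResourceLimits=NONE

# ...or, alternatively, propagate everything except MEMLOCK so the
# MPI memory-lock limit comes from the compute node's own settings:
#PropagateResourceLimitsExcept=MEMLOCK
```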

[slurm-users] Re: srun weirdness

2024-05-17 Thread Patryk Bełzak via slurm-users
We do have different limits on the submit host, and I believe that until we put the `limits.d/99-cluster.conf` file in place the limits were passed to jobs, but I can't tell for sure, it was a long time ago. Still, modifying `limits.d` on the cluster nodes may be a different approach and solution to the aforementioned issue.
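A quick way to see which limits a job actually inherits is to compare ulimit on the login node with what a job step reports on a compute node (a simple sketch; partition and node options omitted):
```
# On the login / submission host:
ulimit -l

# Inside a job step on a compute node:
srun -N1 bash -c 'ulimit -l'
```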