Hi Nicolas,
It looks like you have pam_access.so placed in your PAM stack *before*
pam_slurm_adopt.so, so it may be getting in your way. In fact, the logs
indicate that it's pam_access and not pam_slurm_adopt that denies access
in the first place:
Apr 8 19:11:32 magi46 sshd[20542]: pam_access(sshd:ac
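Just to illustrate the ordering (a sketch only, not your actual config; the
control flags and the rest of the stack are assumptions), an account section
in /etc/pam.d/sshd where pam_slurm_adopt is consulted before pam_access could
look like:

  # users adopted into a running job on this node are accepted here
  account    sufficient    pam_slurm_adopt.so
  # everyone else still has to pass the rules in /etc/security/access.conf
  account    required      pam_access.so

Alternatively, leaving the order as it is and adding the relevant users or
netgroups to /etc/security/access.conf would also stop pam_access from
denying them first.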
Yes, they are all stored in an LDAP directory:
root@magi3:~# id nicolas.greneche
uid=6001(nicolas.greneche) gid=6001(nicolas.greneche)
groupes=6001(nicolas.greneche)
root@magi46:~# id nicolas.greneche
uid=6001(nicolas.greneche) gid=6001(nicolas.greneche)
groupes=6001(nicolas.greneche)
UIDs are the same on both nodes.
OK. Next I would check that the uid of the user is the same on the
compute node as on the head node.
It looks like it is identifying the job, but doesn't see it as yours.
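A quick way to compare (assuming both machines resolve accounts from the same
directory) is to run the same lookups on the head node and on the compute
node and compare the output:

  # run this on both the head node and the compute node
  getent passwd nicolas.greneche
  id nicolas.greneche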
Brian Andrus
On 4/8/2022 1:40 PM, Nicolas Greneche wrote:
Hi Brian,
Thanks. SELinux is in neither strict nor targeted mode; I'm running Slurm
on Debian Bullseye with SELinux and AppArmor disabled.
Thank you for your suggestion,
On 4/8/2022 at 9:43 PM, Brian Andrus wrote:
Check SELinux.
Run "getenforce" on the node; if it reports Enforcing, try running
"setenforce 0".
Slurm doesn't play well if SELinux is enabled.
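Roughly (getenforce/setenforce are standard commands; the config file path
below is the RHEL-style default and is an assumption for other
distributions):

  getenforce      # prints Enforcing, Permissive, or Disabled
  setenforce 0    # switch to permissive until the next reboot
  # to make it persistent, set SELINUX=permissive (or disabled)
  # in /etc/selinux/config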
Brian Andrus
On 4/8/2022 10:53 AM, Nicolas Greneche wrote:
Hi,
I have an issue with pam_slurm_adopt since I moved from 21.08.5 to
21.08.6: it no longer works.
When I log in straight to the node with the root account:
Apr 8 19:06:49 magi46 pam_slurm_adopt[20400]: Ignoring root user
Apr 8 19:06:49 magi46 sshd[20400]: Accepted publickey for root from
172.16
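For the non-root case, one way to reproduce it (the node name and sleep
duration below are only examples) is to start a job on the node as the
affected user, then ssh to that node as the same user and watch
/var/log/auth.log for the pam_slurm_adopt messages:

  srun -w magi46 sleep 600 &
  ssh magi46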
Hello all,
We have enabled hyperthreading on our system.
We were billing our users according to a per-core policy,
but since we enabled HT, billing is now done per thread.
Example:
For no hyperthreading:
suppose we submit a 7-core job; it is billed as 14 cores.
For
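One way this is commonly handled (just a sketch; the partition name, node
list and weight below are placeholders, not taken from our config) is to
keep core-based scheduling and scale the CPU billing weight so that the two
hardware threads of a core are billed as one core, e.g. in slurm.conf:

  SelectType=select/cons_tres
  SelectTypeParameters=CR_Core
  # with HT on, Slurm counts 2 CPUs (threads) per physical core,
  # so weight CPUs at 0.5
  PartitionName=compute Nodes=node[01-16] TRESBillingWeights="CPU=0.5"

With that, a 7-core (14-thread) allocation should come out as 7 in the
billing rather than 14.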
Sorry, should have stated that before. I am running Slurm 20.11.3,
which I compiled myself back in June 2021, on CentOS 8 Stream.
I will try to arrange an upgrade in the next few weeks.
-- Paul Raines (http://help.nmr.mgh.harvard.edu)
On Fri, 8 Apr 2022 4:02am, Bjørn-Helge Mevik wrote:
Paul Raines writes:
> Basically, it appears using --mem-per-gpu instead of just --mem gives
> you unlimited memory for your job.
>
> $ srun --account=sysadm -p rtx8000 -N 1 --time=1-10:00:00
> --ntasks-per-node=1 --cpus-per-task=1 --gpus=1 --mem-per-gpu=8G
> --mail-type=FAIL --pty /bin/bash
> rtx
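If it helps to confirm what limit was actually applied, the job's TRES can
be checked from inside the job or afterwards (the job id below is just an
example):

  scontrol show job $SLURM_JOB_ID | grep -iE 'mem|tres'
  sacct -j 123456 -o JobID,ReqTRES%40,AllocTRES%40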