On 08-06-2020 18:07, Jeffrey T Frey wrote:
There's a Slurm PAM module you can use to gate SSH access -- basically it
checks whether the user has a job running on the node and, if so, moves the
incoming SSH session into the first cgroup associated with that user on that
node. If you don't use cgroup resource limiting, I think it just gates access
without making any cgroup assignments.
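If it helps, the usual wiring is a single line in sshd's PAM account stack (a
rough sketch only -- module options vary, and pam_slurm_adopt also expects
PrologFlags=contain in slurm.conf so that jobs have an "extern" step for
sessions to be adopted into):

    # /etc/pam.d/sshd (fragment)
    # deny the login if the user has no job on this node; otherwise
    # "adopt" the incoming SSH session into one of their running jobs
    account    required    pam_slurm_adopt.so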
The pam_slurm_adopt[1] module is used by many Slurm sites to restrict SSH
access. See the discussion in
https://wiki.fysik.dtu.dk/niflheim/Slurm_configuration#pam-module-restrictions
/Ole
[1] https://slurm.schedmd.com/pam_slurm_adopt.html
On Jun 8, 2020, at 12:01, Durai Arasan <arasan.du...@gmail.com> wrote:
Hi Jeffrey,
Thanks for the clarification.
But this is concerning, as the users will be able to ssh into any node. How do
you prevent that?
Best,
Durai
On Mon, Jun 8, 2020 at 5:55 PM Jeffrey T Frey <f...@udel.edu> wrote:
User home directories are on a shared (NFS) filesystem that's mounted on every
node. Thus, they have the same id_rsa key and authorized_keys file present on
all nodes.
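(Purely illustrative -- the server name and options below are made up; the
point is just that every node mounts the same home filesystem, e.g. via an
/etc/fstab entry like this on each node:)

    # hypothetical shared-home NFS mount, identical on every node
    nfshome.example.org:/export/home   /home   nfs   defaults,_netdev   0 0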
On Jun 8, 2020, at 11:42, Durai Arasan <arasan.du...@gmail.com> wrote:
Ok, that was useful information.
So when you provision user accounts, you add the public key to
.ssh/authorized_keys on *all* nodes in the cluster, not just the login nodes?
When we provision user accounts on our Slurm cluster we still create .ssh and
.ssh/id_rsa (needed for older X11 tunneling via libssh2), and add the public
key to .ssh/authorized_keys.
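For illustration, that step might look something like the following (a sketch
only; the $user variable, paths, and key type are placeholders):

    # create ~/.ssh on the shared home, generate a passphrase-less key,
    # and authorize it for intra-cluster SSH
    install -d -m 700 "/home/$user/.ssh"
    ssh-keygen -q -t rsa -N '' -f "/home/$user/.ssh/id_rsa"
    cat "/home/$user/.ssh/id_rsa.pub" >> "/home/$user/.ssh/authorized_keys"
    chmod 600 "/home/$user/.ssh/authorized_keys"
    chown -R "$user:" "/home/$user/.ssh"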
Thanks,
Durai