Hi
Did you also disable pam_systemd.so in the module files included by the
sshd PAM file?
I am asking because I had this problem when I configured pam_slurm_adopt.
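As a quick check, something along these lines should show where it is still
being pulled in (the file names below are just an example, the layout differs
between distributions):

  # show where pam_systemd.so is referenced, directly or via an include
  grep -rn pam_systemd.so /etc/pam.d/

  # then, in the file(s) included by /etc/pam.d/sshd, comment the line out, e.g.:
  # -session   optional   pam_systemd.so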
Cheers, Massimo
On Fri, Apr 18, 2025 at 5:28 PM Robert Kudyba via slurm-users <slurm-users@lists.schedmd.com> wrote:
Dear all
With the pam_slurm_adopt module, as far as I understand, you can ssh to a
worker node if there is at least one job running on the node by that user.
If there are multiple jobs, if I am not wrong, you will be "mapped" to the
last job started on the node. And, if you are using cgroups, you will be
confined to the resources allocated to that job.
srun --overlap --jobid JOBIDNUM bash
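(Assuming the usual srun options here: --overlap lets this step share the
resources already allocated to the job instead of waiting for free ones,
--jobid attaches it to the given job, and adding --pty gives a proper
interactive shell, e.g.

  srun --overlap --pty --jobid JOBIDNUM bash

where JOBIDNUM is a placeholder for the actual job id.)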
-- Paul Raines (http://help.nmr.mgh.harvard.edu)
On Mon, 14 Apr 2025 4:30am, Massimo Sgaravatto via slurm-users wrote:

> Dear all
> With the pam_slurm_adopt module as far as I understand you can ssh to a
> worker node if there is at least a job running on the node by that user.
Dear all
We have just installed a small SLURM cluster composed of 12 nodes:
- 6 CPU-only nodes: Sockets=2, CoresPerSocket=96, ThreadsPerCore=2, 1.5 TB of RAM
- 6 nodes also with GPUs: same configuration as the CPU-only nodes, plus 4 H100 GPUs per node
We started with a setup with 2 partitions:
- a 'onlycpu' partition
Make sure the Weight values are set such that the non-GPU nodes get used first.
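A minimal sketch of what that could look like in slurm.conf (node names,
memory figures and the Weight values are placeholders; the relevant behaviour
is that Slurm allocates the nodes with the lowest Weight first):

  # CPU-only nodes: low Weight, so the scheduler prefers them
  NodeName=cpu[01-06] Sockets=2 CoresPerSocket=96 ThreadsPerCore=2 RealMemory=1500000 Weight=10
  # GPU nodes: higher Weight, so plain CPU jobs land on them only when the CPU-only nodes are full
  NodeName=gpu[01-06] Sockets=2 CoresPerSocket=96 ThreadsPerCore=2 RealMemory=1500000 Gres=gpu:h100:4 Weight=100
  # one partition containing all the nodes
  PartitionName=all Nodes=cpu[01-06],gpu[01-06] Default=YES MaxTime=INFINITE State=UP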
Disclaimer: I'm thinking out loud, I have not tested this in practice,
there may be something I overlooked.