Thanks for the logical explanation, Paul. So when I rewrite my user
documentation, I'll mention using `salloc` instead of `srun`.
Yes, we do have `LaunchParameters=use_interactive_step` set on our cluster, so
salloc gives a shell on the allocated host.
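For anyone else reading along, that is just one line in slurm.conf; a minimal sketch (other settings omitted, values are comma-separated if you use more than one):

  # slurm.conf (cluster-wide)
  LaunchParameters=use_interactive_step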
Best,
Will
--
He's talking about recent versions of Slurm which now have this option:
https://slurm.schedmd.com/slurm.conf.html#OPT_use_interactive_step
-Paul Edmon-
On 2/28/2024 10:46 AM, Paul Raines wrote:
What do you mean "operate via the normal command line"? When
you salloc, you are still on the login node.
$ salloc -p rtx6000 -A sysadm -N 1 --ntasks-per-node=1 --mem=20G \
    --time=1-10:00:00 --gpus=2 --cpus-per-task=2 /bin/bash
salloc: Pending job allocation 3798364
salloc: job 3798364 queued
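A quick way to see this once the allocation is granted and the shell starts (the hostname below is made up for illustration):

$ hostname
mlsc-login01     # still the login node, not the allocated rtx6000 node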
salloc is the currently recommended way to start an interactive session;
srun is now intended for launching job steps or MPI applications. So the
proper pattern is to salloc first and then run srun inside that allocation.
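In other words, roughly this (partition, resource numbers, and ./my_mpi_app
are just placeholders):

$ salloc -p rtx6000 -N 1 --ntasks=4 --time=01:00:00
# once the allocation is granted, inside the salloc shell:
$ srun --ntasks=4 ./my_mpi_app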
As you've noticed, with srun you tend to lose control of your shell as it
takes over, so you have to background it.