I'm not sure of the reason behind “have to manually ssh to a node”, but salloc and srun can be used together to allocate resources and run commands on the allocated resources.
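For example, salloc also accepts a command to run once the allocation is granted, so a one-liner like the following works as well (the resource options here are only placeholders, adjust for your partitions and accounts):

=====
# salloc requests the resources and then runs the given command locally;
# the srun inside it executes hostname on the allocated node.
salloc -N 1 -n 1 srun hostname
=====

Here's a fuller demonstration from our cluster of how the pieces fit together.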
Before allocation, regular commands run locally, and no Slurm-related variables are present:

=====
[renfro@login ~]$ hostname
login
[renfro@login ~]$ echo $SLURM_TASKS_PER_NODE
=====

After allocation, regular commands still run locally, Slurm-related variables are present, and srun runs commands on the allocated node (my prompt change inside a job is a local thing, not done by default):

=====
[renfro@login ~]$ salloc
salloc: Granted job allocation 147867
[renfro@login(job 147867) ~]$ hostname
login
[renfro@login(job 147867) ~]$ echo $SLURM_TASKS_PER_NODE
1
[renfro@login(job 147867) ~]$ srun hostname
node004
[renfro@login(job 147867) ~]$ exit
exit
salloc: Relinquishing job allocation 147867
[renfro@login ~]$
=====

Lots of people get interactive shells on a reserved node with some variant of ‘srun --pty $SHELL -I’, which doesn’t require explicitly running salloc or ssh, so what are you trying to accomplish in the end?
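If the end goal is just an interactive prompt on a compute node, something along these lines should do it (the partition, account, and node names are copied from your earlier commands, and I haven't tested these exact lines on your cluster, so adjust as needed):

=====
# let Slurm pick the node:
srun -n 1 -c 1 --mem=4G -p RUBY -A y4 --pty bash -i

# or pin the shell to one particular node:
srun --nodelist=compute-0-2 -n 1 -c 1 --mem=4G -p RUBY -A y4 --pty bash -i
=====

Exiting that shell ends the job and releases the allocation, so there's no separate salloc/exit step to remember.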
--
Mike Renfro, PhD / HPC Systems Administrator, Information Technology Services
931 372-3601 / Tennessee Tech University

> On Jan 2, 2019, at 9:24 AM, Mahmood Naderan <mahmood...@gmail.com> wrote:
>
> I want to know if there is any way to push the node selection part onto Slurm, and not have it be a manual thing that is done by the user.
> Currently, I have to manually ssh to a node and try to "allocate resources" using salloc.
>
> Regards,
> Mahmood
>
> On Wed, Jan 2, 2019 at 5:54 PM Henkel, Andreas <hen...@uni-mainz.de> wrote:
> Hi,
> As far as I understand, salloc is used to make allocations but initiates a shell (whatever SallocDefaultCommand specifies) on the node where you called salloc. If you're looking for an interactive session, you'll probably have to use srun --pty xterm. This will allocate the resources AND initiate a shell on one of the allocated nodes.
> Best,
> Andreas
>
> On Jan 2, 2019, at 14:43, Mahmood Naderan <mahmood...@gmail.com> wrote:
>
>> Chris,
>> Can you explain why I cannot get a prompt on a specific node while I have passed the node name to salloc?
>>
>> [mahmood@rocks7 ~]$ salloc
>> salloc: Granted job allocation 268
>> [mahmood@rocks7 ~]$ exit
>> exit
>> salloc: Relinquishing job allocation 268
>> [mahmood@rocks7 ~]$ salloc --nodelist=compute-0-2
>> salloc: Granted job allocation 269
>> [mahmood@rocks7 ~]$ exit
>> exit
>> salloc: Relinquishing job allocation 269
>> [mahmood@rocks7 ~]$ grep SallocDefaultCommand /etc/slurm/slurm.conf
>> #SallocDefaultCommand = "xterm"
>> [mahmood@rocks7 ~]$
>>
>> As you can see, the default SallocDefaultCommand is commented out. So, I expected to override the default command.
>>
>> Regards,
>> Mahmood
>>
>> On Sun, Dec 30, 2018 at 9:11 PM Mahmood Naderan <mahmood...@gmail.com> wrote:
>> So, isn't it possible to override that "default"? I mean the target node. In the FAQ page it is possible to change the default command for salloc, but I didn't see your confirmation.
>>
>> I really have difficulties with interactive jobs that use X11, binary files, or bash scripts. For some of them, srun doesn't work while salloc works. On the other hand, with srun I can choose a target node, while I can't do that with salloc.
>>
>> Has anybody faced such issues?
>>
>> On Sun, Dec 30, 2018, 20:15 Chris Samuel <ch...@csamuel.org> wrote:
>> On 30/12/18 7:16 am, Mahmood Naderan wrote:
>>
>> > Right...
>> > I also tried
>> >
>> > [mahmood@rocks7 ~]$ salloc --nodelist=compute-0-2 -n 1 -c 1 --mem=4G -p RUBY -A y4
>> > salloc: Granted job allocation 199
>> > [mahmood@rocks7 ~]$ $
>> >
>> > I expected to see the compute-0-2 prompt. Is that normal?
>>
>> By default, salloc gives you a shell on the same node as you ran it on, with a job allocation that you can access via srun.
>>
>> You can read more about interactive shells here:
>>
>> https://slurm.schedmd.com/faq.html#prompt
>>
>> All the best,
>> Chris
>> --
>> Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC
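P.S. If what you ultimately want is for plain salloc to drop users straight into a shell on an allocated node, the slurm.conf man page documents SallocDefaultCommand for exactly that. A rough sketch only; double-check the documented recommendation for your Slurm version, since the suggested srun options have changed over releases:

=====
# in slurm.conf on the hosts where salloc is run:
SallocDefaultCommand="srun -n1 -N1 --mem-per-cpu=0 --pty --preserve-env --mpi=none $SHELL"
=====

With something like that set, salloc --nodelist=compute-0-2 would leave you at a prompt on compute-0-2 instead of on the submit host.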