On 3/1/19 12:23 am, Mahmood Naderan wrote:
[mahmood@rocks7 ~]$ srun --spankx11 ./run_qemu.sh
srun doesn't look inside whatever you pass to it, since it could just as well be a
binary; that's why the directives are called #SBATCH, as only sbatch will look at
those.
So you need to give srun those same arguments explicitly.
Those errors appear to pop up when qemu can't find enough RAM to run. If the
#SBATCH lines are only applicable for 'sbatch' and not 'srun' or 'salloc', the
'--mem=8G' setting there doesn't affect anything.
- Does the srun version of the command work if you specify 'qemu-system-x86_64
-m 2048' o
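To make that concrete: a hedged sketch of what the equivalent srun call might look
like if the script's #SBATCH lines asked for 8 GB and a single task (the real option
list depends on what is actually in the script):

[mahmood@rocks7 ~]$ srun -n1 --mem=8G --spankx11 ./run_qemu.sh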
Mark Hahn,
Using srun alone returns a memory allocation error, while salloc doesn't.
[mahmood@rocks7 ~]$ srun --spankx11 ./run_qemu.sh
qemu-system-x86_64: -usbdevice tablet: '-usbdevice' is deprecated, please
use '-device usb-...' instead
qemu-system-x86_64: warning: host doesn't support requested
On 30/12/18 9:41 am, Mahmood Naderan wrote:
So, isn't it possible to override that "default"? I mean the target node.
On the FAQ page it is possible to change the default command for salloc,
but I didn't see your confirmation.
The answer was on the FAQ page, but it's not something I've used befo
So, I get
[mahmood@rocks7 ~]$ salloc --spankx11 srun ./run_qemu.sh
salloc: Granted job allocation 281
srun: error: Bad value for --x11: (null)
srun: error: Invalid argument ((null)) for environment variable:
SLURM_SPANK__SLURM_SPANK_OPTION_spankx11_spankx11
salloc: Relinquishing job allocation 281
you earlier mentioned wanting to run an X-requiring script. why not just:
salloc --x11 srun ./whateveryourscriptwas
For that matter, however, what's the advantage of "salloc --x11 srun" vs. just
"srun --x11"?
afaict, the only difference is that srun's request is not flagged as
interactive (a
> On Jan 2, 2019, at 3:49 PM, Mark Hahn wrote:
>
>> [mahmood@rocks7 ~]$ salloc -n1 hostname
>> salloc: Granted job allocation 278
>> rocks7.jupiterclusterscu.com
>> salloc: Relinquishing job allocation 278
>> salloc: Job allocation 278 has been revoked.
>> [mahmood@rocks7 ~]$
>>
>> As you can se
[mahmood@rocks7 ~]$ salloc -n1 hostname
salloc: Granted job allocation 278
rocks7.jupiterclusterscu.com
salloc: Relinquishing job allocation 278
salloc: Job allocation 278 has been revoked.
[mahmood@rocks7 ~]$
As you can see, whenever I run salloc, I see the rocks7 prompt, which is the
login node.
I have included my login node in the list of nodes. Not all cores are
included though. Please see the output of "scontrol" below
[mahmood@rocks7 ~]$ scontrol show nodes
NodeName=compute-0-0 Arch=x86_64 CoresPerSocket=1
CPUAlloc=0 CPUTot=32 CPULoad=31.96
AvailableFeatures=rack-0,32CPUs
Ac
SallocDefaultCommand specified in slurm.conf will change the default
behavior when salloc is executed without appending a command, and it also
explains the conflicting behavior between installations.
SallocDefaultCommand
Normally, salloc(1) will run the user's default shell
when a
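For reference, the FAQ's suggested use of it is roughly along these lines; this is
only a sketch, so check the FAQ/man page for your Slurm version for the exact
recommended value:

SallocDefaultCommand="srun -n1 -N1 --mem-per-cpu=0 --pty --preserve-env --mpi=none $SHELL"

With something like that in slurm.conf, a bare "salloc" drops the user into a shell
on the first allocated node instead of a subshell on the submit host.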
I don’t think that’s true (and others have shared documentation regarding
interactive jobs and the S commands). There was documentation shared for how
this works, and it seems as if it has been ignored.
[novosirj@amarel2 ~]$ salloc -n1
salloc: Pending job allocation 83053985
salloc: job 83053985
I know very little about how SLURM works, but this sounds like it's a
configuration issue: the cluster hasn't been configured in a way that
indicates the login nodes cannot also be used as compute nodes. When I run
salloc on the cluster I use, I *always* get a shell on a compute node,
never on the log
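If the goal is that behavior, the usual approach is to keep the login node out of the
partition's node list in slurm.conf. A hedged sketch, reusing node and partition names
from this thread (the node range and the attribute list are assumptions, and the real
file will have more in it):

NodeName=compute-0-[0-2] CPUs=32 State=UNKNOWN
PartitionName=RUBY Nodes=compute-0-[0-2] Default=YES MaxTime=INFINITE State=UP

With rocks7 absent from Nodes=, jobs can never be scheduled onto the login node,
though salloc itself still prints its prompt wherever it was invoked unless
SallocDefaultCommand (mentioned above) is set.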
Currently, users run "salloc --spankx11 ./qemu.sh" where qemu.sh is a
script to run a qemu-system-x86_64 command.
When user (1) runs that command, the qemu is run on the login node since
the user is accessing the login node. When user (2) runs that command, his
qemu process is also running on the l
Not sure of the reasons behind "have to manually ssh to a node", but salloc
and srun can be used to allocate resources and run commands on the allocated
resources:
Before allocation, regular commands run locally, and no Slurm-related variables
are present:
=
[renfro@login ~]$ hostname
l
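The rest of that demonstration presumably continues along these lines (hostnames and
the job number here are placeholders, and the pattern assumes a setup like the one
quoted above):

[renfro@login ~]$ salloc -n1
salloc: Granted job allocation 1234
[renfro@login ~]$ hostname        # still runs locally, on the login node
login
[renfro@login ~]$ srun hostname   # runs inside the allocation, on the allocated node
compute-0-2
[renfro@login ~]$ exit
salloc: Relinquishing job allocation 1234

In other words, plain commands typed at the salloc shell stay on the submit host; only
commands launched with srun land on the allocated node.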
BTW, currently I cannot run salloc on a node
[mahmood@rocks7 ~]$ salloc
salloc: Granted job allocation 272
[mahmood@rocks7 ~]$ exit
exit
salloc: Relinquishing job allocation 272
[mahmood@rocks7 ~]$ ssh compute-0-2
Warning: untrusted X11 forwarding setup failed: xauth key data not generated
Last l
I want to know if there is any way to push the node selection onto Slurm,
rather than it being a manual step done by the user.
Currently, I have to manually ssh to a node and try to "allocate resources"
using salloc.
Regards,
Mahmood
On Wed, Jan 2, 2019 at 5:54 PM Henkel, Andreas wrote:
> Hi,
>
Hi,
As far as I understand, salloc makes the allocation but initiates a shell
(whatever SallocDefaultCommand specifies) on the node where you called salloc. If
you're looking for an interactive session you'll probably have to use srun
--pty xterm. This will allocate the resources AND initia
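A hedged sketch of that pattern; whether xterm is available on the compute nodes, and
whether X11 forwarding needs --x11 or the site's --spankx11 plugin, depends on the
installation:

[mahmood@rocks7 ~]$ srun -n1 -c1 --mem=4G -p RUBY --x11 --pty xterm

or, for a plain interactive shell on the allocated node:

[mahmood@rocks7 ~]$ srun -n1 --pty bash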
Chris,
Can you explain why I cannot get a prompt on a specific node even though I have
passed the node name to salloc?
[mahmood@rocks7 ~]$ salloc
salloc: Granted job allocation 268
[mahmood@rocks7 ~]$ exit
exit
salloc: Relinquishing job allocation 268
[mahmood@rocks7 ~]$ salloc --nodelist=compute-0-2
s
So, isn't it possible to override that "default"? I mean the target node. On
the FAQ page it is possible to change the default command for salloc, but I
didn't see your confirmation.
I really have difficulties with interactive jobs that use X11, binary files,
or bash scripts. For some of them, srun d
On 30/12/18 7:16 am, Mahmood Naderan wrote:
Right...
I also tried
[mahmood@rocks7 ~]$ salloc --nodelist=compute-0-2 -n 1 -c 1 --mem=4G -p
RUBY -A y4
salloc: Granted job allocation 199
[mahmood@rocks7 ~]$ $
I expected to see the compute-0-2 prompt. Is that normal?
By default salloc gives yo
Right...
I also tried
[mahmood@rocks7 ~]$ salloc --nodelist=compute-0-2 -n 1 -c 1 --mem=4G -p
RUBY -A y4
salloc: Granted job allocation 199
[mahmood@rocks7 ~]$ $
I expected to see the compute-0-2 prompt. Is that normal?
Regards,
Mahmood
On Sun, Dec 30, 2018 at 6:06 PM Ing. Gonzalo E. Arroyo
$ salloc (Don't call the script in the same line)
I had a typo... srun
On Sun., Dec. 30, 2018, 11:29, Ing. Gonzalo E. Arroyo <
garr...@ifimar-conicet.gob.ar> wrote:
> $ salloc ...
>
> After you have the node you run
>
> $ hostname
>
> $ stun hostname
>
> Check that difference then d
$ salloc ...
After you have the node you run
$ hostname
$ stun hostname
Check that difference then do the same with script
On Sun., Dec. 30, 2018, 07:17, Mahmood Naderan
wrote:
> Hi
> I have read that salloc has some problem running bash scripts while it is
> OK with binary files. Th
Hi
I have read that salloc has some problem running bash scripts while it is
OK with binary files. The following script works fine from a bash terminal,
but salloc is unable to do that.
$ cat slurm.sh
#!/bin/bash
./script.sh files_android.txt report/android.txt
$ salloc -n 1 -c 1 --mem=4G -p RUBY
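Following the suggestions elsewhere in this thread, the usual workaround is to let
salloc create the allocation and then launch the script with srun inside it. A sketch
with the same options (the job number is a placeholder, and this assumes slurm.sh is
executable and visible on the allocated node):

$ salloc -n 1 -c 1 --mem=4G -p RUBY
salloc: Granted job allocation 200
$ srun ./slurm.sh
$ exit
salloc: Relinquishing job allocation 200

Alternatively, submit it non-interactively with sbatch and put the resource requests
in #SBATCH lines inside the script.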