Hello everyone,
Sorry if this might be a trivial question for most of you.
I am trying to understand CPU allocation in Slurm.
The goal is to launch a batch job on one node, while the batch
itself runs several jobs in parallel, each allocated a subset of
the CPUs.
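Something like the following sketch is what I have in mind (the script names, core counts and number of parallel steps are only placeholders for this example):

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --cpus-per-task=4
# launch four job steps in the background, each on its own 4-CPU slice;
# --exclusive at the step level keeps the steps from sharing CPUs
srun --ntasks=1 --cpus-per-task=4 --exclusive ./work_a.sh &
srun --ntasks=1 --cpus-per-task=4 --exclusive ./work_b.sh &
srun --ntasks=1 --cpus-per-task=4 --exclusive ./work_c.sh &
srun --ntasks=1 --cpus-per-task=4 --exclusive ./work_d.sh &
wait   # keep the batch script alive until all steps finish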
Thanks, I will look into it.
On 27/05/2018 13:22, Lachlan Musicman
wrote:
On 27 May 2018 at 18:56, Nadav Toledo <nadavtol...@cs.technion.ac.il>
wrote:
Hey Lachlan,
Can you specify how/where you set the walltime, and which factor you
use in the accounting system to deprioritise?
Thanks, Nadav
On 27/05/2018 11:34, Lachlan Musicman
wrote:
On 27 May 2018 at 18:23, Nadav Toledo
Hello forum,
I have been trying to deal with idle sessions for some time, and haven't
found a solution I am happy with.
The scenario is as follows: users use srun for jupyter-lab (which is
fine and even encouraged by me) on an image processing cluster with
GPUs.
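For reference, this is the kind of walltime limit I understand Lachlan means, assuming it is set per partition in slurm.conf (the partition name, node list and times below are just placeholders):

# slurm.conf
PartitionName=jupyter Nodes=gpu[01-04] DefaultTime=04:00:00 MaxTime=1-00:00:00 State=UP
# DefaultTime applies when a user does not pass --time; MaxTime caps any request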
Hello everyone,
After fighting with X11 forwarding for a couple of weeks, I think I've got
a few tips that can help others.
I am using Slurm 17.11.6 with built-in X11 forwarding on the Ubuntu
server distro; all servers in the cluster share /home via BeeGFS.
Slurm was compiled
Maybe you've got a mistake?
replace:
echo -e "optional\tx11.so" >> ./plugstack.conf
with
echo -e "optional\x11.so" >> ./plugstack.conf
On 15/05/2018 21:35, Mahmood Naderan
wrote:
Hi,
I followed the steps described in [1]. However, srun
Hey all,
We use the numbers from the following two commands:
sinfo -o %G (as suggested above) gives the total GPUs in the cluster;
squeue -o %b gives the number of GPUs in use by each running job.
Summing all the numbers under %b gives you the GPUs in use in the cluster.
pestat was sug
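A rough one-liner along those lines, assuming %b prints entries such as gpu:2 or gres:gpu:2 (the exact format varies between Slurm versions, and %b is per node, so multi-node jobs would need multiplying by their node count):

squeue -h -t RUNNING -o "%b" | awk -F: '/gpu/ {used += $NF} END {print used+0}'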
Hey Rob,
Perhaps something along the lines of srun --ntasks=2 --gres=gpu:4
nvidia-smi would help you?
This will run two tasks, each with 4 GPUs, and execute nvidia-smi;
the output should be similar to running nvidia-smi on one 8-GPU server.
On 22/02/2018 01:26, Rob
For this need, squeue -o %b is enough.
But I am sure there is a need for pestat to print the gres info as
well; you are already helping at least Yair and myself.
Thanks, Nadav
On 13/02/2018 17:41, Ole Holm Nielsen
wrote:
On 02/13/2018 08:13 AM, Nadav Toledo wrote
displaying the allocated gres.
Yair.
On Tue, Feb 13 2018, Nadav Toledo wrote:
Hello everyone,
Does anyone know of a way to get the amount of idle GPUs per node or for the whole cluster?
sinfo -o %G gives the total amount of gres resources for each node. Is there a
way to get the idle amount, the same as you can get for CPUs (%C)?
Hello everyone,
Does anyone know of a way to get the amount of idle GPUs per node or for
the whole cluster?
sinfo -o %G gives the total amount of gres resources for each node.
Is there a way to get the idle amount, the same as you can get for CPUs
(%C)?
Perhaps if one uses
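A rough per-node sketch of what I am after, assuming scontrol show node reports CfgTRES= and AllocTRES= lines that include gres/gpu counts (true on recent Slurm versions; the parsing below is only illustrative):

for n in $(sinfo -h -N -o "%n" | sort -u); do
  cfg=$(scontrol show node "$n" | grep -oP 'CfgTRES=.*?gres/gpu=\K[0-9]+')
  alloc=$(scontrol show node "$n" | grep -oP 'AllocTRES=.*?gres/gpu=\K[0-9]+')
  echo "$n idle_gpus=$(( ${cfg:-0} - ${alloc:-0} ))"
done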
Thank you for sharing,
it's indeed of interest to others...
On 23/01/2018 01:20, Kilian Cavalotti
wrote:
Hi all,
We (Stanford Research Computing Center) developed a SPANK plugin which
allows users to choose the GPU compute mode [1] for their jobs.
[1] h
It worked out.
srun -N2 --ntasks=25 --pty devtest
actually ran on 25 cores, 16 on the first node and 9 on the second.
Thanks a lot.
On 21/01/2018 08:33, Nadav Toledo
wrote:
Sorry for the delayed answer.
First, thank you for pointing
Also, running dd if=/dev/zero of=/dev/null & five times only
takes 2 cores.
Am I missing something?
On 18/01/2018 10:16, Loris Bennett
wrote:
Nadav Toledo writes:
Hey everyone,
We've j
srun -c17 --pty bash
srun: error: CPU count per node can not be satisfied
srun: error: Unable to allocate resources: Requested node
configuration is not available
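The direction that ended up working in this thread (see the srun -N2 --ntasks=25 message above) was to request tasks rather than CPUs per task, since a single task's -c/--cpus-per-task has to fit on one node, while --ntasks can be spread across nodes, e.g.:

srun -N2 --ntasks=17 --pty devtest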
On 18/01/2018 08:37, Loris Bennett
wrote:
Nadav Toledo writes
Hey everyone,
We've just set up a Slurm cluster with a few nodes, each with 16 cores.
Is it possible to submit a job for 17 cores or more?
If not, is there a workaround?
Thanks in advance, Nadav
Hey everyone,
Perhaps I am asking a basic question, but I really don't understand
how preemption works.
The scenario (simplified for the example) is like this:
Nodes:
NodeName=A1 CPUS=2 RealMemory=128906 TmpDisk=117172
NodeName=A2 CPUS=30 RealMemory=128
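To make the question concrete, this is the kind of setup I am reading about in the preemption docs, assuming partition-priority preemption (the partition names, node lists and modes below are only placeholders):

# slurm.conf
PreemptType=preempt/partition_prio
PreemptMode=REQUEUE
PartitionName=low  Nodes=A[1-2] PriorityTier=1  PreemptMode=REQUEUE State=UP
PartitionName=high Nodes=A[1-2] PriorityTier=10 State=UP
# jobs submitted to 'high' may then preempt running jobs in 'low' when resources run short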