Dave,
With previous versions, I followed some steps with the help of guys here.
Don't know about newer versions.
Please send me a reminder in the next 24 hours and I will send you the
instructions. At the moment, I don't have access to the server.
Regards,
Mahmood
Greetings,
I'm using ubuntu-18.04 and slurm-18.08.1 compiled from source.
I followed the directions on:
https://slurm.schedmd.com/cgroups.html
And:
https://slurm.schedmd.com/cgroup.conf.html
That resulted in:
$ cat slurm.conf | egrep -i "cgroup|CR_"
ProctrackType=proctrack/cgroup
TaskPlugin=task/cgroup
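For reference, a minimal cgroup.conf along the lines of that second page might
look like this (a sketch only; which constraints to enable is a local choice):

CgroupAutomount=yes
ConstrainCores=yes
ConstrainRAMSpace=yes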
I believe you also need:
X11UseLocalhost no
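That one goes in /etc/ssh/sshd_config on the compute nodes. A sketch of the
relevant lines, with the reload step assuming a systemd-based system:

X11Forwarding yes
X11UseLocalhost no

and then something like "systemctl reload sshd" (or "ssh", depending on the
distro) so sshd picks up the change.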
> On Oct 15, 2018, at 7:07 PM, Dave Botsch wrote:
>
> Hi.
>
> X11 forwarding is enabled and works for normal ssh.
>
> Thanks.
>
> On Mon, Oct 15, 2018 at 09:55:59PM, Rhian Resnick wrote:
>>
>>
>> Double check /etc/ssh/sshd_config allows X11 forwarding on the node as it is
>> disabled by default. (I think)
Hi.
X11 forwarding is enabled and works for normal ssh.
Thanks.
On Mon, Oct 15, 2018 at 09:55:59PM, Rhian Resnick wrote:
>
>
> Double check /etc/ssh/sshd_config allows X11 forwarding on the node as it is
> disabled by default. (I think)
>
>
> X11Forwarding yes
>
> Rhian Resnick
On Mon, 15 Oct 2018 at 17:59, Bjørn-Helge Mevik wrote:
> Lachlan Musicman writes:
>
> > There's one thing that no one seems to have mentioned - I think you will
> > need to list it as an AllocNode in the Partition that you want it to be
> > able to allocate jobs to.
>
> It is a good idea if you want to limit which hosts you are allowed to
> submit jobs from, but it
Double check /etc/ssh/sshd_config allows X11 forwarding on the node as it is
disabled by default. (I think)
X11Forwarding yes
Rhian Resnick
Associate Director Research Computing
Enterprise Systems
Office of Information Technology
Florida Atlantic University
777 Glades Road, CM22, Rm 1
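A quick way to confirm what sshd on the node is actually using (sshd -T dumps
the effective configuration and needs root):

sudo sshd -T | grep -i x11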
Wanted to test X11 forwarding. X11 forwarding works as a normal user
just ssh'ing to a node and running xterm/etc.
With srun, however:
srun -n1 --pty --x11 xterm
srun: error: Unable to allocate resources: X11 forwarding not available
So, what am I missing?
Thanks.
PS
srun --version
slurm 1
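One guess, not confirmed in this thread: srun's built-in X11 forwarding (added
in Slurm 17.11) also has to be enabled on the Slurm side in slurm.conf:

PrologFlags=X11

with slurmctld and the slurmds restarted afterwards.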
Hey, folks. Been working on a job submit filter to let us use otherwise idle
cores in our GPU nodes.
We’ve got 40 non-GPU nodes and 4 GPU nodes deployed, each with 28 cores. We’ve
had a set of partitions for the non-GPU nodes (batch, interactive, and debug),
and another set of partitions for the
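In case it helps others, one way to express the idle-cores idea in slurm.conf
is an overlapping partition that caps how many cores non-GPU jobs may take on
the GPU nodes; the node names and the 24-core cap below are made up for
illustration:

PartitionName=gpu Nodes=gpu[01-04] Default=NO
PartitionName=gpubatch Nodes=gpu[01-04] MaxCPUsPerNode=24 Default=NO

MaxCPUsPerNode is the partition-level knob documented for exactly this kind of
CPU/GPU split.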
If anyone saw my first post below, just posting an update. I finally got around
this by setting the health check (NHC) to not execute on boot. I told the
slurmd service to start only after all of the GPFS mounts are fully present,
with a pre-start script check, and then and only then r
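For anyone after a concrete shape for that, a systemd drop-in along these
lines is one way to do the gate (the /gpfs path and the polling loop are
placeholders, not our actual setup):

# /etc/systemd/system/slurmd.service.d/wait-for-gpfs.conf
[Service]
# hold slurmd startup until the GPFS mount point is really there
ExecStartPre=/bin/sh -c 'until mountpoint -q /gpfs; do sleep 5; done'
TimeoutStartSec=600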
Hi,
What if I don’t have a version of SLURM that supports heterogeneous jobs, but I
want to launch a parallel code using a heterogeneous set of resources inside a
homogeneous job that was submitted with sbatch? How do I do that?
For example, the following does not work:
#!/bin/sh --login
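One workaround sketch (the task counts and program names are invented): keep
the sbatch allocation homogeneous and carve it into differently-shaped job
steps, using srun --exclusive so concurrent steps get disjoint CPUs:

#!/bin/sh --login
#SBATCH -N 2
#SBATCH -n 32
# two differently-shaped steps running side by side in one allocation
srun --exclusive -n 4 -c 4 ./big_task &
srun --exclusive -n 16 -c 1 ./small_task &
wait

For MPMD within a single step, srun --multi-prog with a rank-to-program layout
file is another option.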
Lachlan Musicman writes:
> There's one thing that no one seems to have mentioned - I think you will
> need to list it as an AllocNode in the Partition that you want it to be
> able to allocate jobs to.
It is a good idea if you want to limit which hosts you are allowed to
submit jobs from, but it
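For completeness, AllocNodes is set on the partition line in slurm.conf; a
sketch with made-up host names:

PartitionName=batch Nodes=node[001-040] AllocNodes=login01,login02 State=UP

Jobs for that partition can then only be submitted from the listed hosts.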