Hi Wu,
As the name itself suggests, the job_desc_msg_t structure is used to communicate
data about a job between the different components of Slurm. job_record_t is the
one Slurm uses to store the data about a job internally. I hope this
explanation helps.
It depends: if updating one affects
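To make the split concrete, here is a minimal sketch of the client side, assuming the public C API in slurm.h (slurm_init_job_desc_msg() and slurm_submit_batch_job(); the field values are only placeholders). The client fills a job_desc_msg_t and sends it to slurmctld, and the controller then builds its own internal job_record_t from that message, so later changes to the client's copy do not propagate on their own:

/* Minimal sketch: fill a job_desc_msg_t and submit it (link with -lslurm). */
#include <stdio.h>
#include <unistd.h>
#include <slurm/slurm.h>
#include <slurm/slurm_errno.h>

int main(void)
{
    job_desc_msg_t desc;
    submit_response_msg_t *resp = NULL;

    /* job_desc_msg_t is the request the client fills in and sends out. */
    slurm_init_job_desc_msg(&desc);
    desc.name      = "demo";
    desc.min_nodes = 1;
    desc.user_id   = getuid();
    desc.group_id  = getgid();
    desc.work_dir  = "/tmp";
    desc.script    = "#!/bin/bash\nsleep 60\n";

    /* The message goes over the wire to slurmctld, which builds its own
     * job_record_t from it; editing 'desc' after this call changes
     * nothing on the controller side. */
    if (slurm_submit_batch_job(&desc, &resp) != SLURM_SUCCESS) {
        fprintf(stderr, "submit failed: %s\n",
                slurm_strerror(slurm_get_errno()));
        return 1;
    }
    printf("submitted job %u\n", resp->job_id);
    slurm_free_submit_response_response_msg(resp);
    return 0;
}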
Hi Russell Jones,
Did you try stopping the firewall on the client cluster-cn02?
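For example, assuming the node runs firewalld (adjust if it is plain iptables/nftables):
systemctl status firewalld
firewall-cmd --list-all      # which ports/sources are currently allowed
systemctl stop firewalld     # temporarily, just to test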
Patrick
On 16/11/2020 at 19:20, Russell Jones wrote:
> Here's some debug logs from the compute node after launching an
> interactive shell with the x11 flag. I see it show X11 forwarding
> established, then it ends with a connection timeout.
Hello,
check sshd settings (here are ours):
X11Forwarding yes
X11DisplayOffset 10
*X11UseLocalhost no*
Add PrologFlags in slurm.conf:
PrologFlags=x11
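If I remember right, a PrologFlags change needs a daemon restart rather than just "scontrol reconfigure" (assuming systemd units here), and you can confirm what the running config reports afterwards:
systemctl restart slurmctld   # on the controller
systemctl restart slurmd      # on the compute nodes
scontrol show config | grep -i PrologFlags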
Cheers,
Barbara
On 11/16/20 7:20 PM, Russell Jones wrote:
Here's some debug logs from the compute node after launching an
interactive shell with the x11 flag.
Hello,
We see there are two job data structures, job_desc_msg_t and job_record_t, and
we are wondering about the relation between them. What is each of them used for,
or how are they used? If we update the value of one of the entries in
job_desc_msg_t, will the corresponding value change in job_record_t?
Here's some debug logs from the compute node after launching an interactive
shell with the x11 flag. I see it show X11 forwarding established, then it
ends with a connection timeout.
[2020-11-16T12:12:34.097] debug: Checking credential with 492 bytes of sig data
[2020-11-16T12:12:34.098] _run_pr
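One thing worth ruling out when forwarding is reported as established but then times out is whether the compute node can reach the X11 listener that sshd opened on the login node. A rough check (login01 is a placeholder; with X11DisplayOffset 10 the first forwarded display usually maps to TCP port 6010):
echo $DISPLAY            # inside the srun/salloc session on the compute node
nc -vz login01 6010      # or whatever port matches that DISPLAY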
Hi all. I can’t seem to change the account or QOS on a pending job:
$ scontrol update job where JobId=133628 set Account=def-eoliver QOS=normal
Update of this parameter is not supported: Account=def-eoliver
Request aborted
$ squeue -j 133628
JOBID USER ACCOUNT NAME
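For what it's worth, "where ... set ..." is sacctmgr's syntax; the scontrol form I have seen documented is simply the following (same job ID, only the wording differs, and it may well hit the same restriction):
scontrol update JobId=133628 Account=def-eoliver QOS=normal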
Hi List,
apologies if this has been asked before (or is obvious) - I did do some
reading & searching but can't quite figure out the best way to achieve this.
Background - we have two production clusters, both running SLURM. They
are not currently a multi-cluster setup; they are not running the s
Hello,
Thanks for the reply!
We are using Slurm 20.02.0.
On Mon, Nov 16, 2020 at 10:59 AM sathish wrote:
> Hi Russell Jones,
>
> I believe you are using a slurm version older than 19.05. X11 forwarding
> code has been revamped and it works as expected starting from the 19.05.0
> version.
>
>
Hi Russell Jones,
I believe you are using a slurm version older than 19.05. X11 forwarding
code has been revamped and it works as expected starting from the 19.05.0
version.
On Mon, Nov 16, 2020 at 10:02 PM Russell Jones wrote:
> Hi all,
>
> Hoping I can get pointed in the right direction here.
Hi all,
Hoping I can get pointed in the right direction here.
I have X11 forwarding enabled in Slurm; however, I cannot seem to get it
working properly. It works when I test with "ssh -Y" to the compute node
from the login node, but when I try through Slurm the DISPLAY variable
looks very different.
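For reference, a quick way to compare the two cases (assuming srun's --x11 option; the node name is a placeholder):
ssh -Y cn02 'echo $DISPLAY'     # what sshd sets up directly
srun --x11 --pty bash           # then run: echo $DISPLAY inside the job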
Hi,
just to give some feedback about this problem: it was a firewall problem.
When using "salloc/srun" from the login node, the login node must accept
connections from the compute nodes even if neither slurmd nor slurmctld is
running on this login node.
I had to open the firewall quite widely on the internal network.
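For the record, the kind of rule this amounts to (assuming firewalld and a made-up compute subnet of 10.1.0.0/16; adjust to your own addressing):
# on the login node: trust traffic coming from the compute-node subnet
firewall-cmd --permanent --zone=trusted --add-source=10.1.0.0/16
firewall-cmd --reload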
I read that also; however, the RTX cards are not really pre-Volta, and when I
run the mps-server, the nvidia-smi tool gives me e.g.
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
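In case it helps, the same process list can be pulled in a plain format (assuming a reasonably recent nvidia-smi):
nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv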