Is there a way to find the utilization per node?
Regards
Navin.
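A minimal sketch of one way to look at this, assuming the standard Slurm CLI tools are available (the format string below is only an example, not something taken from this thread):

$ sinfo -N -o "%N %C %O %m %e"   # per node: CPUs as allocated/idle/other/total, CPU load, memory, free memory
$ scontrol show node <nodename>  # CPUAlloc, CPULoad and AllocMem vs. RealMemory for a single node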
On Wed, Nov 18, 2020 at 10:37 AM navin srivastava
wrote:
> Dear All,
>
> Good Day!
>
> I am seeing one strange behaviour in my environment.
>
> we have 2 clusters in our environment, one acting as a database server, and
> have pointed the 2nd cluster to the same database.
Dear All,
Good Day!
I am seeing one strange behaviour in my environment.
We have 2 clusters in our environment, one acting as a database server, and
have pointed the 2nd cluster to the same database.
hpc1    155.250.126.30    6817    8192    1
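For reference, a sketch of how two clusters can point at the same accounting database through a single slurmdbd; the hostname below is a placeholder, not taken from this thread:

# slurm.conf on each cluster (ClusterName differs, the accounting settings match):
ClusterName=hpc1                                  # e.g. hpc2 on the second cluster
AccountingStorageType=accounting_storage/slurmdbd
AccountingStorageHost=dbd.example.com             # the host running slurmdbd

$ sacctmgr show cluster    # both clusters should then appear here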
After 9 months of development and testing we are pleased to announce the
availability of Slurm version 20.11.0!
Slurm 20.11 includes a number of new features including:
- Overhaul of the job step management and launch code, alongside
improved GPU task placement support.
- A new "Interactive
And if I try to run another job while all resources in that one node are
already used, the job is put to pending. I'm running srun to get
pseudo-terminal allocations to install some Spack packages. This node has 40
cores (2 sockets @ 20 cores each); the other nodes have the same specs and
memory size. An
Thank you!
I do have X11UseLocalhost set to no and X11Forwarding set to yes:
[root@cluster-cn02 ssh]# sshd -T | grep -i X11
x11displayoffset 10
x11maxdisplays 1000
x11forwarding yes
x11uselocalhost no
No firewalls on this network between the login node and compute node.
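For what it's worth, a quick way to sanity-check the forwarding path (hostnames are placeholders, and the --x11 step assumes Slurm's built-in X11 forwarding is enabled, e.g. PrologFlags=X11 in slurm.conf):

$ ssh -Y user@login-node       # with X11UseLocalhost no, DISPLAY is usually <hostname>:10.0 or higher
$ echo $DISPLAY
$ srun --x11 --pty xclock      # should pop up a clock from the compute node if forwarding works end to end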
On Tue, Nov 17, 2020 at 1:
Hi all,
We have around 50 accounts, each with its own GrpTRES limits. We want to add
another set of accounts (probably another 50) with a different priority, which
will have GrpTRESMins, such that users could "buy" TRES*minutes at a higher
priority.
For that we require that the GrpTRESMins won't get
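One possible way to express that kind of split, sketched with a QOS rather than directly on the accounts (account names and numbers below are invented):

$ sacctmgr add qos premium
$ sacctmgr modify qos where name=premium set Priority=1000 GrpTRESMins=cpu=100000
$ sacctmgr modify account where name=acct_premium set QOS=premium DefaultQOS=premium
$ sacctmgr modify account where name=acct_standard set GrpTRES=cpu=200   # the existing accounts keep their GrpTRES limits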
On 09/11/20 12:53, Diego Zuccato wrote:
> Seems my corrections actually work only for single-node jobs.
> In case of multi-node jobs, it only considers the memory used on one
> node, hence it underestimates the real efficiency.
> Can someone more knowledgeable than me spot the error?
Seems I manag
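For anyone following along, a hedged sketch of pulling usage for all steps of a job from sacct, so that memory is not counted on a single node only (the field list is an example, not Diego's actual script):

$ sacct -j <jobid> -P --format=JobID,NNodes,AllocCPUS,Elapsed,TotalCPU,MaxRSS,TRESUsageInTot%60
# CPU efficiency is roughly TotalCPU / (Elapsed * AllocCPUS);
# for memory, sum the per-task usage (e.g. the mem value in TRESUsageInTot) over all steps
# instead of taking MaxRSS of a single task on a single node.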