Hi Loris,
I know, it has been some time, but I have one additional remark.
If you just use ssh -X to log in to the nodes, you will have a plain ssh
session, which means none of Slurm's environment variables will be set.
So if your X11 jobs depend on those variables, you will have to use
Slurm's own X11 forwarding (e.g. srun --x11) instead.
Hello,
I've finally got the job throughput/turnaround to be reasonable on our cluster.
Most of the time the job activity on the cluster sets the default QOS to 32
nodes (there are 464 nodes in the default queue). Jobs requesting a number of
nodes close to the QOS limit (for example, 22 nodes) are scheduled
Hi Marcus,
This is a good point, thanks! Maybe the salloc variant isn't such a good
general solution after all.
Cheers,
Loris
Marcus Wagner writes:
> Hi Loris,
>
> I know, it has been some time, but I have one additional remark.
> If you just use ssh -X to login to the nodes, you will have a plain ss
Your slurm.conf line doesn't specify the node's physical memory:

NodeName=ozd2485u Gres=gpu:2 Sockets=2 CoresPerSocket=14 ThreadsPerCore=2 State=UNKNOWN

See "man slurm.conf":

RealMemory
    Size of real memory on the node in megabytes (e.g. "2048").
    The default value is 1.
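For reference, a sketch of the same node line with RealMemory added. The 191000 here is only a placeholder; the actual value to use is whatever "slurmd -C" reports on the node itself, which prints the hardware configuration Slurm detects:

```
# Hypothetical example -- replace 191000 with the RealMemory value
# printed by running "slurmd -C" on ozd2485u
NodeName=ozd2485u Gres=gpu:2 Sockets=2 CoresPerSocket=14 ThreadsPerCore=2 RealMemory=191000 State=UNKNOWN
```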
It might be useful to include the various priority factors you've got
configured. The fact that adjusting PriorityMaxAge had a dramatic effect
suggests that the age factor is pretty high; it might be worth comparing
that weight against the other factors.
Have you looked at PriorityWeightJobSize?
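For context, these knobs all live in slurm.conf (and the active weights can be checked with "sprio -w"). A sketch of the relevant multifactor settings -- the numbers below are purely illustrative, not a recommendation:

```
# Illustrative slurm.conf priority settings -- all values are placeholders
PriorityType=priority/multifactor
PriorityMaxAge=7-0            # age factor saturates after 7 days
PriorityWeightAge=1000
PriorityWeightFairshare=10000
PriorityWeightJobSize=1000    # favors (or, with small-job preference, penalizes) large jobs
PriorityWeightPartition=1000
PriorityWeightQOS=2000
```

The point is that each weight only matters relative to the others: if PriorityWeightAge dwarfs the rest, queue wait time dominates job ordering regardless of size or fairshare.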