On our cluster we've noticed that if we use the native x11 slurm plugin
(PrologFlags=x11) then X applications work, but are really slow and
unresponsive. Even opening menus on graphical applications is painfully slow.
On the same system if I do a direct ssh connection with ssh -YC from the head
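For anyone wanting to compare the two paths, this is roughly the setup being tested; the hostname and application below are placeholders rather than the actual configuration:

  # slurm.conf (built-in X11 forwarding plugin)
  PrologFlags=x11

  # interactive step with X11 forwarded through Slurm
  srun --x11 --pty xterm

  # direct ssh path for comparison (trusted X11 forwarding + compression)
  ssh -YC headnode
  xterm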
[...] if you have Prometheus going already (a little less so): https://github.com/rivosinc/prometheus-slurm-exporter
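If it helps, wiring that exporter into an existing Prometheus is just a scrape stanza along these lines; the target port and job name here are assumptions, so check the exporter's README for its actual defaults:

  # prometheus.yml (sketch only; port and hostname are placeholders)
  scrape_configs:
    - job_name: 'slurm'
      static_configs:
        - targets: ['slurm-head:9092']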
On Tue, Aug 20, 2024 at 12:40 AM Simon Andrews via slurm-users
<slurm-users@lists.schedmd.com> wrote:
Possibly a bit more elaborate than you want but I wrote a web based monitoring
system for our cluster. It mostly uses standard slurm commands for job
monitoring, but I've also added storage monitoring, which requires a separate
cron job to run every night. It was written for our cluster, but pr
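A nightly storage sweep like that can be driven by a cron entry roughly like the one below; the script name, schedule, and log path are placeholders, not what the system above actually uses:

  # /etc/cron.d/storage-monitor (illustrative sketch)
  # collect per-directory usage once a night for the web dashboard
  30 1 * * * root /usr/local/bin/collect_storage.sh >> /var/log/storage-monitor.log 2>&1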
Our cluster has developed a strange intermittent behaviour where jobs are being
put into a pending state with the reason AssocGrpCpuLimit, even though the
submitting user has enough CPUs available for the job to run.
For example:
$ squeue -o "%.6i %.9P %.8j %.8u %.2t %.10M %.7m %.7c %.20R
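For reference, the limits behind AssocGrpCpuLimit and the reason reported for a pending job can be checked with something like the following; the user name and job id are placeholders:

  # show the association's TRES/CPU limits for a user
  sacctmgr show assoc where user=someuser format=cluster,account,user,grptres

  # show the pending reason for a specific job
  squeue -j 12345 -o "%.18i %.9P %.8u %.2t %R"
  scontrol show job 12345 | grep -i reason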