On 10/5/22 22:43, Carl Ponder wrote:
In our scaling tests, it's normal to expect the job run-times to reduce
as we increase the node-counts.
Is there a way in SLURM to limit the NODES*TIME product for a partition,
or do we just have to define a different partition (with a different
duration-limit) for each job size?
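A minimal sketch of one way to do it, assuming accounting (slurmdbd) is in use and AccountingStorageEnforce includes limits; the QOS name "scaling", the node list, and the 5760 node-minute cap are placeholders, not your real values:

# QOS whose per-job TRES-minutes limit is expressed in node-minutes
# (5760 node-minutes = e.g. 4 nodes x 24 hours).
sacctmgr add qos scaling
sacctmgr modify qos scaling set MaxTRESMins=node=5760

# slurm.conf: attach the QOS to the partition so every job submitted there
# is capped on its nodes*time product rather than on wall time alone.
PartitionName=scaling Nodes=node[001-032] QOS=scaling MaxTime=1-00:00:00 State=UP

If that doesn't fit your policy, the fallback is indeed one partition per job size, each with its own MaxTime.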
Hi everyone,
I'm trying to get X11 forwarding working on my cluster. I've read some
of the threads and web posts on X11 forwarding and most of the common
issues I'm finding seem to pertain to older versions of Slurm.
I log in from my workstation to the login node with ssh -X. I have X11 apps installed.
The Slurm version is 20.11.8-1.
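Not sure how your site is built, but a minimal sketch of Slurm's built-in X11 forwarding for the 20.11 generation, assuming X11 support was compiled in; xterm is just a placeholder test client:

# slurm.conf (same copy on all nodes; restart slurmctld/slurmd after changing it):
PrologFlags=X11

# From the "ssh -X" session on the login node, with $DISPLAY set:
srun --x11 --pty xterm

If the xterm comes up, forwarding through srun works and the remaining issues are usually xauth or DISPLAY handling on the login node.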
On 10/5/22 13:34, z1...@arcor.de wrote:
Yes, the comment field should work.
But when I try to submit a test job with sbatch:
"sbatch --comment='testcomment' --cpus-per-task 28 testjob.sh
Submitted batch job 319737
"
This information is missing from the sacct output:
"
sacct --format=User,JobID,Comment,ncpus
     User        JobID    Comment      NCPUS
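Two things that might be worth checking, as guesses rather than a confirmed diagnosis: whether job comments are being passed to slurmdbd at all, and whether sacct is simply truncating the column. Only stock options are used below; %30 is just a display width, and 319737 is the job from your example:

# Depending on the Slurm release this shows AccountingStoreJobComment or the
# job_comment flag inside AccountingStoreFlags; if comments are not stored,
# the Comment column in sacct will always be empty.
scontrol show config | grep -i AccountingStore

# Query the test job again with a wider Comment column.
sacct -j 319737 --format=User,JobID,Comment%30,NCPUS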