On Sunday, 14 October 2018 3:30:39 PM AEDT Steven Dick wrote:
> I've found that when creating a new cluster, slurmdbd does not
> function correctly right away. It may be necessary to restart
> slurmdbd at several points during the slurm installation process to
> get everything working correctly.
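A quick way to tell whether slurmdbd has caught up after one of those restarts is to ask it directly. A minimal sketch (which sacctmgr subcommands are available depends on your version):

systemctl restart slurmdbd
sacctmgr show cluster     # the newly created cluster should be listed
sacctmgr show problem     # reports broken accounts/associations, if any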
Hello,
We are migrating away from a Torque/Moab setup, and for our users' convenience
we are trying to keep the differences minimal.
I am wondering if there is a way to set the job walltime in the job environment
(to set $PBS_WALLTIME). It's unclear to me how this information can be
retrieved on the worker nodes.
Hi,
I have removed a node, but the squeue command no longer works; it seems
that it is still looking for the removed node.
[root@rocks7 home]# > /var/log/slurm/slurmctld.log
[root@rocks7 home]# systemctl restart slurmctld
[root@rocks7 home]# systemctl restart slurmd
[root@rocks7 home]# rocks sync s
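For what it's worth, removing a node only takes full effect once every daemon re-reads the same slurm.conf: restarting slurmctld on the head node is not enough if the compute and login nodes still have the old file, and "scontrol reconfigure" alone does not pick up node additions or removals in Slurm releases of this vintage. A rough checklist, assuming the node has already been deleted from the NodeName=/PartitionName= lines:

# distribute the edited /etc/slurm/slurm.conf to every remaining node first
systemctl restart slurmctld   # on the head node
systemctl restart slurmd      # on each remaining compute node
sinfo -N                      # the removed node should no longer appear
scontrol show node            # double-check the controller's node table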
If you check the sbatch man page, there's no similar variable listed for the
job environment. You can:
(1) write a SPANK plugin (or extend an existing one) to set it in the job environment
(2) implement a patch yourself and submit it to SchedMD
(3) submit a request to SchedMD (if you have a support contract) to have this feature added
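Outside those three routes, a site can often fake it with a TaskProlog: slurmd turns any line a TaskProlog prints in the form "export NAME=value" into a variable in the task's environment. A minimal sketch, assuming the UNLIMITED / [D-]HH:MM:SS / MM:SS time formats squeue prints (worth checking against your Slurm version), with PBS_WALLTIME expressed in seconds as Torque does:

#!/bin/bash
# TaskProlog sketch: export the job's time limit as PBS_WALLTIME (seconds).
# Wire it up with TaskProlog=/path/to/this/script in slurm.conf.
limit=$(squeue -h -j "$SLURM_JOB_ID" -o %l)
[ "$limit" = "UNLIMITED" ] && exit 0   # nothing sensible to export
days=0
case $limit in
    *-*) days=${limit%%-*}; limit=${limit#*-} ;;   # strip a leading D- part
esac
IFS=: read -r f1 f2 f3 <<< "$limit"
if [ -z "$f3" ]; then                              # MM:SS (limit under an hour)
    secs=$((10#$f1 * 60 + 10#$f2))
else                                               # HH:MM:SS
    secs=$((10#$f1 * 3600 + 10#$f2 * 60 + 10#$f3))
fi
echo "export PBS_WALLTIME=$((days * 86400 + secs))"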
Hi.
I built a SLURM cluster and am able to successfully run jobs as root.
However, when I try to submit jobs as a regular user, I hit
permission problems.
username@console:[~] > srun -N1 /bin/hostname
slurmstepd: error: couldn't chdir to `/usr/home/username': Permission denied:
going to /tmp instead
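A few checks on the compute node usually narrow this down (a rough sketch; "username" and "/usr/home/username" are just the values from the error above):

getent passwd username                # is the user resolvable on the node?
ls -ld /usr/home /usr/home/username   # do the directories exist with sane modes?
su - username -c 'ls'                 # can the user itself reach its home?
mount | grep home                     # is the home filesystem actually mounted?

The usual suspects are a missing execute (x) bit on a parent directory, a home filesystem that is not mounted or not exported to the compute node, or a user that is not known to the node at all.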
On 17-10-2018 20:13, Aravindh Sampathkumar wrote:
> I built a SLURM cluster and am able to successfully run jobs as root.
> However, when I try to submit jobs as a regular user, I hit permission
> problems.
> username@console:[~] > srun -N1 /bin/hostname
> slurmstepd: error: couldn't chdir to `/usr/home/username': Permission denied: going to /tmp instead