Greetings,
The sbatch documentation under the --partition option says: "If the job can use
more than one partition, specify their names in a comma separate list". When I
use two partition names separated by a comma, I get "sbatch: error: Batch job
submission failed: Invalid qos specification".
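For reference, the form of submission being described looks like this (the
partition names and script here are placeholders, not the real ones):
  # "partA" and "partB" are hypothetical partition names
  sbatch --partition=partA,partB job.sh
  # sbatch: error: Batch job submission failed: Invalid qos specification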
On 2/11/19 10:26 AM, Colas Rivière wrote:
> Hello,
>
> I'm trying to update slurm to the latest stable version 18.08.5-2. Our
> cluster uses CentOS 6.8 and updating it is tricky because of Lustre support.
> According to https://slurm.schedmd.com/platforms.html, CentOS 6 is
> still supported.
> However, `yum-builddep slurm-18.08.5-2/slurm.spec` fails,
Also, make sure no third-party packages have installed files into the systemd
directories. The legacy spec file still checks whether systemd files are
present:
if [ -d /usr/lib/systemd/system ]; then
   install -D -m644 etc/slurmctld.service $RPM_BUILD_ROOT/usr/lib/systemd/system/slurmctld.service
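A quick way to check for that on a CentOS 6 host (these are the same paths the
spec tests for):
  # does anything provide the systemd unit directory, and if so,
  # which package owns it?
  ls -d /usr/lib/systemd/system 2>/dev/null && rpm -qf /usr/lib/systemd/system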
Colas,
You need to use the legacy spec file from the contribs directory:
ls -l slurm-18.08.5/contribs/slurm.spec-legacy
-rw-r--r-- 1 mrobbert mrobbert 38574 Jan 30 11:59
slurm-18.08.5/contribs/slurm.spec-legacy
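For example, one way to build from it (the tarball name and layout below are
assumed, adjust to yours):
  # swap the legacy spec into the source tree, re-create the tarball,
  # and build the RPMs from that
  cd slurm-18.08.5
  cp contribs/slurm.spec-legacy slurm.spec
  cd ..
  tar -cjf slurm-18.08.5.tar.bz2 slurm-18.08.5
  rpmbuild -ta slurm-18.08.5.tar.bz2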
Mike
On 2/11/19 9:26 AM, Colas Rivière wrote:
> Hello,
>
> I'm trying to update slurm to the latest stable version 18.08.5-2.
Hello,
I'm trying to update slurm to the latest stable version 18.08.5-2. Our
cluster uses CentOS 6.8 and updating it is tricky because of Lustre support.
According to https://slurm.schedmd.com/platforms.html, CentOS 6 is still
supported.
However, `yum-builddep slurm-18.08.5-2/slurm.spec` fails,
I’m assuming you have LDAP and Slurm already working on all your nodes, and
want to restrict access to two of the nodes based on Unix group membership,
while letting all users access the rest of the nodes.
If that’s the case, you should be able to put the two towers into a separate
partition.
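Something like this in slurm.conf, for example (the node names, partition names
and Unix group below are made up):
  # the two restricted towers, only members of group "labgrp" may use them
  PartitionName=towers Nodes=tower[1-2] AllowGroups=labgrp State=UP
  # everything else stays open to all users
  PartitionName=general Nodes=node[01-10] Default=YES State=UP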
... and to force users to supply --account, something like this in the
job_submit.lua should work:
-- If account is missing: fail
if job_desc.account == nil then
   slurm.log_info("job from uid %d with missing account: Denying.",
                  job_desc.user_id)
   slurm.user_msg("No account specified; please resubmit with --account.")
   -- the message text and return code are just examples; any ESLURM_* error
   -- code rejects the job
   return slurm.ESLURM_INVALID_ACCOUNT
end
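For completeness: the script has to be named job_submit.lua, live in the same
directory as slurm.conf, and be enabled there with:
  JobSubmitPlugins=lua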
Dear all,
Can someone help me configure several machines under Slurm with
different rights per user group?
Or can someone point me to a tutorial that explains these different
points?
Regards,
Jean-Sébastien
On Mon, Jan 28, 2019 at 17:52, Jean-Sébastien Lerat wrote:
> Hi,
>
> I have two to
Hi,
> On 05.02.19 16:46, Ansgar Esztermann-Kirchner wrote:
> > [...]-- we'd like to have two "half nodes", where
> > jobs will be able to use one of the two GPUs, plus (at most) half of
> > the CPUs. With SGE, we've put two queues on the nodes, but this
> > effectively prevents certain maintenance
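For reference, once the two GPUs are defined as GRES, a job of the shape
described (one GPU plus at most half the CPUs) is requested roughly like this;
the node name, CPU count and device paths are assumed:
  # slurm.conf
  GresTypes=gpu
  NodeName=node01 CPUs=32 Gres=gpu:2 State=UNKNOWN
  # gres.conf on node01
  Name=gpu File=/dev/nvidia[0-1]
  # one GPU and half of the 32 CPUs
  sbatch --gres=gpu:1 --ntasks=16 job.sh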