[slurm-users] multiple partitions with the sbatch command

2019-02-11 Thread Hossein Pourreza
Greetings. The sbatch documentation, under the --partition option, says: "If the job can use more than one partition, specify their names in a comma separate list". When I use two partition names separated by a comma, I get "sbatch: error: Batch job submission failed: Invalid qos specification".
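
For reference, the comma-separated form the documentation describes would look something like this (the partition names and script are placeholders, not from the original post):

    sbatch --partition=batch,long job.sh

With this syntax Slurm submits the job to whichever of the listed partitions can start it earliest.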

Re: [slurm-users] Does latest slurm version still work on CentOS 6?

2019-02-11 Thread Jason Bacon
On 2/11/19 10:26 AM, Colas Rivière wrote: Hello, I'm trying to update slurm to the latest stable version 18.08.5-2. Our cluster uses CentOS 6.8 and updating it is tricky because of Lustre support. According to https://slurm.schedmd.com/platforms.html, CentOS 6 is still supported. However, `yum-

Re: [slurm-users] Does latest slurm version still work on CentOS 6?

2019-02-11 Thread Prentice Bisbal
Also, make sure no third-party packages have installed files in the systemd directories. The legacy spec file still checks for the systemd directory to be present: if [ -d /usr/lib/systemd/system ]; then    install -D -m644 etc/slurmctld.service $RPM_BUILD_ROOT/usr/lib/systemd/system/s
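
A quick way to spot such packages on an RPM-based system is to check whether anything already owns files under the unit directory that the spec tests for (a rough sketch; adjust paths as needed):

    ls /usr/lib/systemd/system 2>/dev/null
    rpm -qf /usr/lib/systemd/system/* 2>/dev/null | sort -u

If that directory exists at all on a CentOS 6 host, the check above will fire and the build will try to install the systemd unit files.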

Re: [slurm-users] Does latest slurm version still work on CentOS 6?

2019-02-11 Thread Michael Robbert
Colas, You need to use the legacy spec file from the contribs directory: ls -l slurm-18.08.5/contribs/slurm.spec-legacy -rw-r--r-- 1 mrobbert mrobbert 38574 Jan 30 11:59 slurm-18.08.5/contribs/slurm.spec-legacy Mike On 2/11/19 9:26 AM, Colas Rivière wrote: > Hello, > > I'm trying to update slur
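
One possible way to build with the legacy spec (a sketch, assuming the release tarball and default rpmbuild setup; file names may differ for your download):

    tar xjf slurm-18.08.5.tar.bz2
    cp slurm-18.08.5/contribs/slurm.spec-legacy slurm-18.08.5/slurm.spec
    tar cjf slurm-18.08.5.tar.bz2 slurm-18.08.5
    rpmbuild -ta slurm-18.08.5.tar.bz2

The repack step is only needed because rpmbuild -ta reads the spec file from inside the tarball.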

[slurm-users] Does latest slurm version still work on CentOS 6?

2019-02-11 Thread Colas Rivière
Hello, I'm trying to update slurm to the latest stable version 18.08.5-2. Our cluster uses CentOS 6.8 and updating it is tricky because of Lustre support. According to https://slurm.schedmd.com/platforms.html, CentOS 6 is still supported. However, `yum-builddep slurm-18.08.5-2/slurm.spec` fails,

Re: [slurm-users] Slurm configuration on multi computers with ldap and dedicated resources

2019-02-11 Thread Renfro, Michael
I’m assuming you have LDAP and Slurm already working on all your nodes, and want to restrict access to two of the nodes based off of Unix group membership, while letting all users access the rest of the nodes. If that’s the case, you should be able to put the two towers into a separate partitio
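
In slurm.conf that could look roughly like this (node, partition, and group names are made up for illustration):

    PartitionName=towers Nodes=tower[1-2] AllowGroups=labgroup Default=NO
    PartitionName=general Nodes=node[01-10] Default=YES

Users outside labgroup can still submit to the general partition; only members of labgroup can submit to the towers partition.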

Re: [slurm-users] Recording variables

2019-02-11 Thread Bjørn-Helge Mevik
... and to force users to supply --account, something like this in the job_submit.lua should work: -- If account is missing: fail if job_desc.account == nil then slurm.log_info("job from uid %d with missing account: Denying.", job_desc.user_id) slurm.user_msg
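
A fuller sketch of that job_submit.lua logic (function and constant names follow the stock Lua job-submit plugin interface; the exact return code to use may vary between Slurm versions):

    function slurm_job_submit(job_desc, part_list, submit_uid)
       -- If account is missing: fail
       if job_desc.account == nil then
          slurm.log_info("job from uid %d with missing account: Denying.",
                         job_desc.user_id)
          slurm.user_msg("Please specify an account with --account.")
          return slurm.ERROR
       end
       return slurm.SUCCESS
    end

    function slurm_job_modify(job_desc, job_rec, part_list, modify_uid)
       return slurm.SUCCESS
    end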

Re: [slurm-users] Slurm configuration on multi computers with ldap and dedicated resources

2019-02-11 Thread Jean-Sébastien Lerat
Dear all, Can someone help me configure several machines under Slurm with different rights per user group? Or can someone point me to a tutorial that explains these different points? Regards, Jean-Sébastien On Mon., 28 Jan. 2019 at 17:52, Jean-Sébastien Lerat wrote: > Hi, > > I have two to

Re: [slurm-users] How to partition nodes into smaller units

2019-02-11 Thread Ansgar Esztermann-Kirchner
Hi, > On 05.02.19 16:46, Ansgar Esztermann-Kirchner wrote: > > [...]-- we'd like to have two "half nodes", where > > jobs will be able to use one of the two GPUs, plus (at most) half of > > the CPUs. With SGE, we've put two queues on the nodes, but this > > effectively prevents certain maintenance
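
For context, the per-job shape being aimed at would look something like this at submission time (the flag values are illustrative for a 2-GPU, 40-core node and don't enforce anything by themselves):

    sbatch --gres=gpu:1 --ntasks=1 --cpus-per-task=20 job.sh

The open question is how to make Slurm enforce that split per node without the drawbacks of the two-queue setup used under SGE.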