We are pleased to announce the availability of Slurm version 15.08.0-rc1
(release candidate 1). This version contains all of the features intended for
release 15.08 (with the exception of some minor burst buffer work) and we are
moving into a testing phase. You are invited to download this version and
assist in testing. Some highlights in this release include:
-- Add TRES (Trackable RESources) to track utilization of memory, GRES, burst
   buffers, licenses, and any other configurable resources in the accounting
   database (see the TRES sketch after this list).
-- Add configurable billing weights that take any TRES into consideration when
   calculating a job's resource utilization (for fair-share calculation).
-- Add a configurable prioritization factor that takes any TRES into
   consideration when calculating a job's resource utilization.
-- Add burst buffer support infrastructure. Currently available plugins
   include burst_buffer/generic (uses administrator-supplied programs to
   manage file staging) and burst_buffer/cray (uses Cray APIs to manage
   buffers); a configuration sketch follows this list.
-- Add support for job dependencies joined with the OR operator (e.g.
   "--depend=afterok:123?afternotok:124"); see the example after this list.
-- Add an advance reservation flag of "replace" that causes allocated
   resources to be replaced with idle resources, maintaining a pool of
   available resources of constant size to the extent possible (see the
   reservation sketch after this list).
-- Permit PreemptType=qos and PreemptMode=suspend,gang to be used together.
   A high-priority QOS job will now oversubscribe resources and gang schedule,
   but only if there are insufficient resources for the job to be started
   without preemption. NOTE: With PreemptType=qos, the partition's
   Shared=FORCE:# configuration option will permit one more job per resource
   to be run than specified, but only if started by preemption. A
   configuration sketch follows this list.
-- A partition can now have an associated QOS, which allows a partition to
   enforce all of the limits a QOS can have. If a limit is set in both the
   job's QOS and the partition's QOS, the partition QOS will override the
   job's QOS unless the job's QOS has the 'PartitionQOS' flag set (see the
   partition QOS sketch after this list).
-- Expanded --cpu-freq parameters to include min-max:governor specifications.
   The --cpu-freq option is now supported by salloc and sbatch (examples after
   this list).
-- Add support for optimized job allocations with respect to SGI Hypercube
and dragonfly network topologies.
-- Add the ability for a compute node to be allocated to multiple jobs, but
   restricted to a single user. Added the "--exclusive=user" option to the
   salloc, scontrol and sview commands. Added a new partition configuration
   parameter "ExclusiveUser=yes|no" (see the sketch after this list).
-- Added a plugin to record job completion information using Elasticsearch
   (configuration sketch after this list).
-- Modify slurmctld outgoing RPC logic to support more parallel tasks (up to
85 RPCs and 256 pthreads; the old logic supported up to 21 RPCs and 256
threads).
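
The sketches below are rough, untested illustrations of how several of the
features above might be configured or invoked; treat the names, values and
exact parameter forms as assumptions to verify against the 15.08 man pages
and web documentation.

TRES accounting and TRES-based billing/priority weights are set in slurm.conf.
A minimal sketch, assuming a GPU GRES and a license named "foo", with
placeholder node and partition names:

  # slurm.conf (sketch)
  # Track GPU and license usage in addition to the default TRES
  AccountingStorageTRES=gres/gpu,license/foo
  # Weight memory and GPU usage when computing a job's billable usage
  PartitionName=batch Nodes=node[01-32] TRESBillingWeights="CPU=1.0,Mem=0.25G,GRES/gpu=2.0"
  # Weight individual TRES in the multifactor priority calculation
  PriorityWeightTRES=CPU=1000,Mem=2000,GRES/gpu=3000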
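
Burst buffer plugin selection also happens in slurm.conf; the generic plugin
further relies on site-provided staging scripts defined elsewhere (not shown),
so this fragment is only an assumed starting point:

  # slurm.conf (sketch)
  BurstBufferType=burst_buffer/generic   # or burst_buffer/cray on Cray systems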
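
The OR dependency syntax can be used directly on the sbatch/salloc/srun
command line; the job IDs and script name here are placeholders:

  # Start the cleanup job if job 123 succeeds OR job 124 fails
  sbatch --depend=afterok:123?afternotok:124 cleanup.sh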
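
The "replace" flag is given when creating an advance reservation with
scontrol; the reservation name, size, users and duration below are arbitrary:

  scontrol create reservation ReservationName=pool StartTime=now \
      Duration=120 NodeCnt=4 Users=alice,bob Flags=REPLACE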
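
QOS-based preemption with gang scheduling combines the following slurm.conf
settings (note that the full plugin name is preempt/qos; partition and node
names are placeholders):

  # slurm.conf (sketch)
  PreemptType=preempt/qos
  PreemptMode=SUSPEND,GANG
  # FORCE:1 allows one extra job per resource, but only when started by
  # preemption (see the NOTE above)
  PartitionName=batch Nodes=node[01-32] Shared=FORCE:1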
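
A partition QOS is created with sacctmgr and attached to the partition in
slurm.conf; the QOS name and limit are assumptions for illustration:

  # Create a QOS whose limits will apply to all jobs in the partition
  sacctmgr add qos part_debug MaxWall=01:00:00
  # slurm.conf: associate the QOS with the partition
  PartitionName=debug Nodes=node[01-04] QOS=part_debug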
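
The extended --cpu-freq syntax takes a minimum and maximum frequency (in
kilohertz, or keywords such as low and high) plus a governor; the values
below are arbitrary examples:

  # Run between 1.2 GHz and 2.4 GHz under the OnDemand governor
  srun --cpu-freq=1200000-2400000:OnDemand ./a.out
  # The option is now accepted by sbatch and salloc as well
  sbatch --cpu-freq=low-high:Performance job.sh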
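
User-exclusive node sharing can be requested per job or enforced per
partition; the node and partition names are placeholders:

  # Per job: allocated nodes may be shared, but only with this user's jobs
  salloc --exclusive=user -N2
  # Per partition (slurm.conf): nodes are never shared between different users
  PartitionName=shared Nodes=node[01-16] ExclusiveUser=YES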
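
The Elasticsearch job completion plugin is likewise enabled in slurm.conf;
the server URL is an assumption, and the exact form expected by JobCompLoc
should be checked against the job completion plugin documentation:

  # slurm.conf (sketch)
  JobCompType=jobcomp/elasticsearch
  JobCompLoc=http://elasticsearch.example.com:9200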
Slurm downloads are available from:
http://www.schedmd.com/#repos
If you would like to find out more about these new features and others, please
join us at the Slurm User Group meeting:
http://slurm.schedmd.com/slurm_ug_agenda.html
--
Morris "Moe" Jette
CTO, SchedMD LLC
Commercial Slurm Development and Support
===============================================================
Slurm User Group Meeting, 15-16 September 2015, Washington D.C.
http://slurm.schedmd.com/slurm_ug_agenda.html