Slurm version 16.05.3 is now available and includes about 30 bug fixes
developed over the past few weeks. We have also released the first
pre-release of version 17.02, which is under development and scheduled for
release in February 2017. A description of the changes in each version is
appended.

Slurm downloads are available from:
http://www.schedmd.com/#repos

* Changes in Slurm 16.05.3
==========================
 -- Make it so the extern step uses a reverse tree when cleaning up.
-- If the extern step doesn't get added into the proctrack plugin, make sure
    the sleep is killed.
-- Fix areas where the slurmctld can segfault if an extern step is in the
    system cleaning up on a restart.
-- Prevent possible incorrect counting of GRES of a given type if a node has
    multiple "types" of a given GRES "name", which could over-subscribe GRES
    of a given type.
-- Add web links to Slurm Diamond Collectors (from Harvard University) and
    collectd (from EDF).
 -- Add job_submit plugin for the "reboot" field.
-- Make some more Slurm constants (INFINITE, NO_VAL64, etc.) available to
    job_submit/lua plugins.
-- Send in a -1 for a taskid into spank_task_post_fork for the extern_step.
-- MYSQL - Slightly better logic if a job completion comes in with an end
    time of 0.
-- If the task/cgroup plugin is configured with ConstrainRAMSpace=yes, then
    set the soft memory limit to the allocated memory limit (previously no
    soft limit was set).
-- Document limitations in burst buffer use by the salloc command (possible
    access problems from a login node).
 -- Fix proctrack plugin to only add the pid of a process once
    (regression in 16.05.2).
-- Fix for sstat to print correct info when requesting jobid.batch as part of
    a comma-separated list.
-- CRAY - Fix issue if pid has already been added to another job container.
 -- CRAY - Fix add of extern step to AELD.
-- burst_buffer/cray: avoid batch submit error condition if waiting for
    stage-in.
-- CRAY - Fix for reporting steps lingering after they are already finished.
-- Testsuite - fix test1.29 / 17.15 for limits with values above 32-bits.
 -- CRAY - Simplify when a NHC is called on a step that has unkillable
    processes.
-- CRAY - If trying to kill a step and you have NHC_NO_STEPS set, run NHC
    anyway to attempt to log the backtraces of the potentially unkillable
    processes.
-- Fix gang scheduling and license release logic if single node job killed on
    bad node.
 -- Make scontrol show steps show the extern step correctly.
 -- Do not schedule powered-down nodes in FAILED state.
-- Do not start the slurmctld power_save thread until partition information
    is read, in order to prevent a race condition that can result in an
    invalid pointer when trying to resolve configured SuspendExcParts.
-- Add SLURM_PENDING_STEP id so it won't be confused with SLURM_EXTERN_CONT.
 -- Fix for core selection with job --gres-flags=enforce-binding option.
    Previous logic would in some cases allocate a job zero cores, resulting
    in slurmctld abort.
-- Minimize preempted jobs for configurations with multiple jobs per node.
-- Improve partition AllowGroups caching. Update the table of UIDs permitted
    to use a partition based upon its AllowGroups configuration parameter as
    new valid UIDs are found, rather than looking up that user's group
    information for every job they submit. If the user is now allowed to use
    the partition, then do not check that user's group access again for 5
    seconds.
 -- Add routing queue information to Slurm FAQ web page.
-- Do not select_g_step_finish() a SLURM_PENDING_STEP step, as nothing has
    been allocated for the step yet.
 -- Fixed race condition in PMIx Fence logic.
-- Prevent slurmctld abort if job is killed or requeued while waiting for
    reboot of its allocated compute nodes.
-- Treat invalid user ID in AllowUserBoot option of knl.conf file as error
    rather than fatal (log and do not exit).
-- qsub - When generating the default output file names for a job array in
    qsub style, use the master job ID instead of the normal job ID.
-- Create the extern step while creating the job instead of waiting until the
    end of the job to do it.
-- Always report a 0 exit code for the extern step instead of marking it
    canceled or failed based on the signal that would always be killing it.
 -- Fix to allow users to update QOS of pending jobs.
 -- Print correct cluster name in "slurmd -C" output.
 -- CRAY - Fix minor memory leak in switch plugin.
 -- CRAY - Change slurmconfgen_smw.py to skip over disabled nodes.
 -- Fix eligible_time for elasticsearch, and add queue_wait
    (difference between the job's start time and when it became eligible).
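The AllowGroups caching change above can be sketched as a small model: keep a
table of positively verified UIDs with a 5-second time-to-live, so the
expensive group lookup is skipped for repeat submissions inside that window.
The names here (`AllowGroupsCache`, `lookup`) are illustrative only, not
Slurm's internal code:

```python
import time


class AllowGroupsCache:
    """Toy model of caching positive group-membership checks for a
    partition's AllowGroups list (not Slurm's actual implementation)."""

    TTL = 5.0  # seconds a positive result stays cached

    def __init__(self, lookup):
        self.lookup = lookup     # uid -> bool; the expensive group check
        self.allowed_until = {}  # uid -> expiry timestamp

    def is_allowed(self, uid, now=None):
        now = time.time() if now is None else now
        expiry = self.allowed_until.get(uid)
        if expiry is not None and now < expiry:
            return True          # cached positive hit; skip the lookup
        if self.lookup(uid):
            self.allowed_until[uid] = now + self.TTL
            return True
        return False             # negative results are not cached
```

Only positive results are cached, matching the description: a user who is
allowed keeps that status for 5 seconds before their group access is checked
again.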


* Changes in Slurm 17.02.0pre1
==============================
-- burst_buffer/cray - Add support for rounding up the size of a buffer
    request if the DataWarp configuration "equalize_fragments" is used.
 -- Remove AIX support.
-- Rename "in" to "input" in the slurm_step_io_fds data structure defined in
    slurm.h. This is needed to avoid breaking Python by using one of its
    keywords in a Slurm data structure.
 -- Remove eligible_time from jobcomp/elasticsearch.
 -- Fix issue where a QOS could not be deleted if no clusters had been added.
-- SlurmDBD - change all timestamps to bigint from int to solve the Y2038
    problem.
-- Add salloc/sbatch/srun --spread-job option to distribute tasks over as
    many nodes as possible. This also treats the --ntasks-per-node option as
    a maximum value.
 -- Add ConstrainKmemSpace to cgroup.conf, defaulting to yes, to allow
    cgroup Kmem enforcement to be disabled while still using
    ConstrainRAMSpace.
 -- Add support for sbatch --bbf option.
-- Add burst buffer support for job arrays. Add new SchedulerParameters
    option of bb_array_stage_cnt=# to indicate how many pending tasks of a
    job array should be made available for burst buffer resource allocation.
 -- Fix small memory leak when a job fails to load from state save.
-- Fix invalid read when attempting to delete clusters from db with running
    jobs.
 -- Fix small memory leak when deleting clusters from db.
-- Add SLURM_ARRAY_TASK_COUNT environment variable. Total number of tasks in a
    job array (e.g. "--array=2,4,8" will set SLURM_ARRAY_TASK_COUNT=3).
-- Add new sacctmgr commands: "shutdown" (shutdown the server), "list stats"
    (get server statistics), and "clear stats" (clear server statistics).
-- Restructure job accounting query to use 'id_job in (1, 2, .. )' format
    instead of logically equivalent 'id_job = 1 || id_job = 2 || ..' .
 -- Added start_delay field to jobcomp/elasticsearch.
-- In order to support federated jobs, the MaxJobID configuration parameter
    default value has been reduced from 2,147,418,112 to 67,043,328 and its
    maximum value is now 67,108,863. Upon upgrading, any pre-existing jobs
    that have a job ID above the new range will continue to run, and new
    jobs will get job IDs in the new range.
-- Added infrastructure for setting up federations in database and establishing
    connections between federation clusters.
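The new MaxJobID limits above line up with reserving the upper bits of the
32-bit job ID for a federation cluster ID, leaving a 26-bit local job-ID
field. The bit split below is our reading of the published numbers, not a
statement from the Slurm documentation:

```python
# Assumed layout: upper bits of the 32-bit job ID identify the origin
# cluster in a federation; the low 26 bits hold the local job ID.
LOCAL_ID_BITS = 26

new_max     = (1 << LOCAL_ID_BITS) - 1          # largest local job ID
new_default = (1 << LOCAL_ID_BITS) - (1 << 16)  # new default MaxJobID
old_default = (1 << 31) - (1 << 16)             # previous default MaxJobID

print(new_max, new_default, old_default)  # 67108863 67043328 2147418112
```

Both defaults follow the same pattern (the field maximum minus 2^16), which
is why the default dropped from 2,147,418,112 to 67,043,328 when the field
shrank from 31 to 26 bits.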
