Prentice Bisbal
Lead Software Engineer
Princeton Plasma Physics Laboratory
http://www.pppl.gov
On 3/21/19 12:21 PM, Loris Bennett wrote:
Hi Ryan,
Ryan Novosielski <novos...@rutgers.edu> writes:
On Mar 21, 2019, at 11:26 AM, Prentice Bisbal <pbis...@pppl.gov> wrote:
On 3/20/19 1:58 PM, Christopher Samuel wrote:
On 3/20/19 4:20 AM, Frava wrote:
Hi Chris, thank you for the reply.
The team that manages that cluster is not very fond of upgrading SLURM, which I
understand.
As a system admin who manages clusters myself, I don't understand this. Our
job is to provide and maintain resources for our users. Part of that
maintenance is to provide updates for security, performance, and functionality
(new features) reasons. HPC has always been a leading-edge kind of field, so I
feel this is even more important for HPC admins.
Yes, there can be issues caused by updates, but those can be managed with proper
planning: Have a plan to do the actual upgrade, have a plan to test for
issues, and have a plan to revert to an earlier version if issues are
discovered. This is work, but it's really not all that much work, and this is
exactly the work we are being paid to do as cluster admins.
From my own experience, I find *not* updating in a timely manner is actually
more problematic and more work than keeping on top of updates. For example, where
I work now, we still haven't upgraded to CentOS 7, and as a result, many basic
libraries are older than what many of the open-source apps my users need
require. As a result, I don't just have to install application X, I often have
to install up-to-date versions of basic libraries like libreadline, libcurses,
zlib, etc. And then there are the security concerns...
Chris, maybe you should look at EasyBuild
(https://easybuild.readthedocs.io/en/latest/). That way you can install
all the dependencies (such as zlib) as modules and be pretty much
independent of the ancient packages your distro may provide (other
software-building frameworks are available).
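For example (the easyconfig and module names below are just illustrative; the
exact versions and toolchains depend on what your site uses), getting zlib as a
module is pretty much a one-liner:

    # build zlib plus any missing dependencies as environment modules
    eb zlib-1.2.11.eb --robot

    # users then pick it up independently of the distro's packages
    module load zlib/1.2.11

With --robot EasyBuild resolves the dependency chain itself, so even fairly
deep software stacks don't need to rely on the OS-provided libraries at all.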
I think you meant to address that to me (Prentice), not Chris, as I was
the one who wrote that. I have been looking at EasyBuild and Spack, but
to be honest, in this situation, that doesn't address the root-cause of
the problem (lazy system administration/bad admin practices), or address
security issues resulting from using old OS versions. One problem we're
having right now is that we can't take advantage of precompiled binaries
because the version of libc is too old (most recent software is built
on RHEL or CentOS 7). Upgrading libc is much more complicated than upgrading a
library like libreadline, etc.
What I am working on is introducing automation/devops tools like
Cobbler, Puppet and other practices to make it easier to update the OS.
I've done this in the past, and upgrading from RHEL 5 to RHEL 6 (for
example), was very quick and painless.
Okay, rant over. I'm sorry. It just bothers me when I hear fellow system
admins aren't "very fond" of things that I think are a core responsibility of
our jobs. I take a lot of pride in my job.
All of those things take time, and depending on where you work (not necessarily
speaking about my current employer/employment situation), you may be ordered to
do something else with that time. If so, all bets are off. Planned updates where
sufficient testing time is not allotted move the associated work from planned
work to unplanned emergencies (something broken, etc.), in some cases from
business hours to off hours, and generate lots of support queries, etc.
I’ve never seen a paycheck signed by “Best Practices”.
It may be true that some employers prioritise the wrong things, but in
my experience, Slurm is pretty easy and quick to update. It may seem a
little scary (people often seem to worry erroneously about losing
everything in the queue), but we started with version 2.2.4 in 2012 and
have always updated regularly. We have both slurmctld and slurmdbd on
one machine, which is often advised against, but I have only ever had
one problem, which I was able to solve by using a backup of the spool
directory. Our last cluster only hit around 2.5 million jobs after
around 6 years, so database conversion was never an issue. For sites
with higher throughput, things may be different, but I would hope that
at those places, the managers would know the importance of planned
updates and testing.
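For what it's worth, the update itself is usually just a short, documented
sequence, roughly like this (the package and service names are only an example
and will vary with how you installed Slurm):

    # documented order: slurmdbd first, then slurmctld, then the slurmds
    systemctl stop slurmctld slurmdbd
    yum update 'slurm*'              # or install your newly built packages
    systemctl start slurmdbd         # performs any database conversion
    systemctl start slurmctld
    # finally roll the compute nodes, e.g.
    pdsh -w 'node[001-100]' systemctl restart slurmd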
I agree 100%. I find building RPMs from the .spec file included in the
tarball makes this stupid-easy, too. I inherited my current environment
about 3 years ago. Slurm was built from source and installed in
/usr/local on a shared NFS system. This made updating across the cluster
a bit difficult and tedious, but not prohibitively so. About a year ago
I switched us to using RPMs, and now every new update gets installed
within days/weeks of being released with very little effort.
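If anyone is curious, the whole build really is essentially one command run
against the release tarball (output paths depend on your rpmbuild setup):

    # build the complete set of Slurm RPMs from the spec file in the tarball
    rpmbuild -ta slurm-*.tar.bz2

    # the packages end up under ~/rpmbuild/RPMS/<arch>/, ready to push out
    # with yum/rpm or your configuration management tool of choice

From there a new release is just: download, rpmbuild, test on a node or two,
then roll it out.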
People are always afraid of losing state or corrupting the slurm DB, but
that is easy to address: Just do a dump of the database on the slurmdbd
server, and then stop slurmctld and tar/gzip the state directories. That
way, if there is a problem with the upgrade process (DB gets hosed,
etc.), you can revert your update. Both of these steps only take a few
minutes, and can easily be scripted.
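As a rough sketch (the database name, credentials, and state directory here are
only examples; use whatever your slurmdbd.conf and slurm.conf actually point at):

    # on the slurmdbd host: dump the accounting database
    mysqldump --single-transaction slurm_acct_db > slurm_acct_db-$(date +%F).sql

    # on the slurmctld host: stop the controller and save its state
    systemctl stop slurmctld
    tar czf slurmctld-state-$(date +%F).tar.gz /var/spool/slurmctld  # StateSaveLocation

Restoring either one is just the reverse: load the dump back into MariaDB/MySQL
or untar the state directory, then start the old version again.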
--
Prentice