Except sstat can give you the MaxRSS without having cgroups, and it will
give you a single MaxRSS, whereas sacct provides a MaxRSS for every
step... you have to play with that data to get the high-water mark, grrr.
I had tried to use sstat in an epilogue, but apparently that is too late...
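Rough sketch of what "playing with that data" could look like, assuming GNU
sort and with <jobid> as a placeholder; the format string and the awk/sort
plumbing are my own guesses, not anything official:

    # per-step MaxRSS for a finished job, keep the largest value
    sacct -j <jobid> --noheader --parsable2 --format=JobID,MaxRSS \
        | awk -F'|' '{print $2}' | sort -h | tail -1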
Brian
Lol. Sure. All I did was require python3 and install it. (I prefer
python3 to python2 just because...)
You could probably do it with python2 if you prefer.
--
65c65
< BuildRequires: python
---
> BuildRequires: python3
-
So, a few new things I had to do:
yum config-ma
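For context, the overall flow I'd expect on CentOS 8, given the spec change
above; the repo name, package list, and tarball version here are assumptions
on my part, not a transcript of what was actually run:

    # enable the repo carrying several -devel build deps (repo name is a guess)
    yum config-manager --set-enabled PowerTools
    yum install -y rpm-build munge-devel pam-devel readline-devel python3
    # after applying the BuildRequires change above to slurm.spec in the tarball:
    rpmbuild -ta slurm-19.05.3.tar.bz2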
Ah! I didn't realize sacct included that data, but I see the fields listed
now. Thanks!
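For anyone else looking, the available field names can be listed straight
from sacct itself, and then requested explicitly (<jobid> is a placeholder):

    # print every field sacct knows about
    sacct --helpformat
    # then ask for the memory fields per step
    sacct -j <jobid> --format=JobID,JobName,MaxRSS,Elapsed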
__
*Jacob D. Chappell, CSM*
*Research Computing Associate*
Research Computing | Research Computing Infrastructure
Information Technology Services | University of
All the aggregate historical data should be accessible via sacct. sstat is
for live jobs, while sacct is for completed jobs.
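A minimal illustration of that split, with <jobid> as a placeholder:

    # while the job is still running (sstat only reports on launched steps)
    sstat -j <jobid> --format=JobID,MaxRSS,MaxVMSize
    # once the job has finished, the same accounting data comes from sacct
    sacct -j <jobid> --format=JobID,MaxRSS,MaxVMSize,State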
-Paul Edmon-
On 10/30/2019 2:13 PM, Jacob Chappell wrote:
Is there a simple way to store sstat information permanently on job
completion? We already have job accounting on, b
Yes, I'd be interested too.
Best,
Chris
--
Christopher Coffey
High-Performance Computing
Northern Arizona University
928-523-1167
On 10/30/19, 3:54 AM, "slurm-users on behalf of Andy Georges"
wrote:
Hi Brian,
On Mon, Oct 28, 2019 at 10:42:59AM -0700, Brian Andrus wrote:
Is there a simple way to store sstat information permanently on job
completion? We already have job accounting on, but the information
collected from cgroups doesn't seem to be stored once a job finishes (sstat
-j $JOB_ID on a dead job returns an error).
Thanks,
___
Fairshare is calculated based on an "association". If you look in the
manpage for sacctmgr under ENTITIES, you will see:
association
The entity used to group information consisting of
four parameters: account, cluster, partition (optional), and user.
Users can have en
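To see those associations on a live system, something like this should work
(the format string is just one example, not the only option):

    # one line per association: cluster, account, optional partition, user
    sacctmgr show associations format=Cluster,Account,Partition,User,Fairshare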
Hi
Thanks for your reply. I enabled both of them, but the oldest job still gets
suspended. Here is my slurm.conf, in case you can spot something I can't:
MpiDefault=none
ProctrackType=proctrack/linuxproc
ReturnToService=1
SlurmctldPidFile=/var/run/slurmctld.pid
SlurmdPidFile=/var/run
Makes sense, but in case I can guarantee no job will ever request more than
one partition, isn't there any workaround to get fairshare calculated per
partition?
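If it helps, one hedged sketch of such a workaround is to give each user a
separate association per partition, since partition is one of the four
association parameters; the user, account, and partition names below are
made up:

    # the same user gets a distinct association (and share) in each partition
    sacctmgr add user alice account=proj1 partition=cpu fairshare=10
    sacctmgr add user alice account=proj1 partition=gpu fairshare=5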
On Tue, Oct 29, 2019 at 18:34, Christopher Samuel
wrote:
> On 10/29/19 12:42 PM, Igor Feghali wrote:
>
> > fairshare is being ca
Hi Brian,
On Mon, Oct 28, 2019 at 10:42:59AM -0700, Brian Andrus wrote:
> Ok, I had been planning on getting around to it, so this prompted me to do
> so.
>
> Yes, I can get slurm 19.05.3 to build (and package) under CentOS 8.
>
> There are some caveats, however, since many repositories and package