> • Total number of jobs submitted by user (daily/weekly/monthly)
> • Average queue time per user (daily/weekly/monthly)
> • Average job run time per user (daily/weekly/monthly)

Open XDMoD for these three: https://github.com/ubccr/xdmod , plus https://xdmod.ccr.buffalo.edu (unfortunately their SSL certificate expired yesterday, so you'll get a warning).

> • %time partitions were in-use and idle

Not sure how you'd want to define this, and our partitions have substantial overlap on resources (our partitions exist primarily to separate GPU or large-memory jobs from others, and to balance priorities and limits on different classes of jobs).

> • min/max/avg number of nodes/cpus/mem used per user/job

Open XDMoD for CPUs and nodes, and probably Open XDMoD plus SUPREMM for memory (I haven't used that one myself, but I plan to).

--
Mike Renfro, PhD / HPC Systems Administrator, Information Technology Services
931 372-3601 / Tennessee Tech University

> On Nov 26, 2019, at 10:02 AM, Ricardo Gregorio <ricardo.grego...@rothamsted.ac.uk> wrote:
>
> External Email Warning
> This email originated from outside the university. Please use caution when opening attachments, clicking links, or responding to requests.
>
> Hi all,
>
> I am new to both HPC and SLURM.
>
> I have been trying to run some usage reports (using sreport and sacct), but I cannot find a way to get the following info:
>
> • Total number of jobs submitted by user (daily/weekly/monthly)
> • Average queue time per user (daily/weekly/monthly)
> • Average job run time per user (daily/weekly/monthly)
> • %time partitions were in-use and idle
> • min/max/avg number of nodes/cpus/mem used per user/job
>
> Is this doable?
>
> Regards,
> Ricardo Gregorio
> Research and Systems Administrator
>
> Rothamsted Research is a company limited by guarantee, registered in England at Harpenden, Hertfordshire, AL5 2JQ under the registration number 2393175 and a not-for-profit charity number 802038.
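P.S. If you want a quick approximation without standing up Open XDMoD, the per-user queue-time metric can be sketched from `sacct` output. A rough example follows; the sample data is made up for illustration, the real command would be something like `sacct -a -X -n -P --format=User,Submit,Start -S <start> -E <end>`, and it assumes GNU `date` for timestamp parsing:

```shell
# Hypothetical sacct output (User|Submit|Start) standing in for a real query;
# in practice, pipe in: sacct -a -X -n -P --format=User,Submit,Start -S <start> -E <end>
sample='alice|2019-11-25T09:00:00|2019-11-25T09:05:00
alice|2019-11-25T10:00:00|2019-11-25T10:01:00
bob|2019-11-25T11:00:00|2019-11-25T11:10:00'

# Average queue (pending) time in seconds per user: Start minus Submit,
# summed per user and divided by that user's job count.
avgs=$(printf '%s\n' "$sample" | awk -F'|' '
  function epoch(ts,  cmd, e) {                 # ISO 8601 -> epoch via GNU date(1)
    cmd = "date -d \"" ts "\" +%s"; cmd | getline e; close(cmd); return e
  }
  { wait[$1] += epoch($3) - epoch($2); n[$1]++ }
  END { for (u in wait) printf "%s %.0f\n", u, wait[u]/n[u] }')
printf '%s\n' "$avgs"
```

Counting jobs per user over the same window is simpler still (`sacct ... --format=User -n -P | sort | uniq -c`), and the same Submit/Start/End fields support average run time. For daily/weekly/monthly breakdowns you would just vary the `-S`/`-E` window.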