[slurm-users] Re: Find out submit host of past job?

2024-08-07 Thread Juergen Salk via slurm-users
…ent between jobs, and number of jobs). We had it on and it nearly ran us out of space on our database host. That said, the data can be really useful depending on the situation. -Paul Edmon- On 8/7/2024 8:51 AM, Juergen Salk via slurm-users wrote: > Hi Steffen

[slurm-users] Re: Find out submit host of past job?

2024-08-07 Thread Juergen Salk via slurm-users
Hi Steffen, not sure if this is what you are looking for, but with `AccountingStoreFlags=job_env` set in slurm.conf, the batch job environment will be stored in the accounting database and can later be retrieved with the `sacct -j <jobid> --env-vars` command. We find this quite useful for debugging purposes.
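For illustration, a minimal sketch of the two pieces involved (the job ID 12345 is just a placeholder):

  # slurm.conf (requires slurmdbd accounting to be in use)
  AccountingStoreFlags=job_env

  # later, retrieve the stored environment of a past job
  sacct -j 12345 --env-vars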

[slurm-users] Re: maxrss reported by sacct is wrong

2024-06-07 Thread Juergen Salk via slurm-users
Hi, to my very best knowledge, MaxRSS does report the aggregated memory consumption of all tasks, including all the shared libraries that the individual processes use, even though a shared library is only loaded into memory once regardless of how many processes use it. So shared libraries do count for every process that uses them.
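As a rough illustration (the field list and the /proc path are only examples), one can compare what Slurm accounting recorded with the RSS/PSS view of a single process, where PSS divides shared pages among the processes that map them:

  # what Slurm accounting recorded for the job and its steps
  sacct -j <jobid> -o JobID,MaxRSS,MaxRSSTask,AveRSS

  # RSS vs. PSS of one running process (Linux >= 4.14)
  grep -E '^(Rss|Pss):' /proc/<pid>/smaps_rollup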

[slurm-users] Re: cpu distribution question

2024-06-07 Thread Juergen Salk via slurm-users
Hi Alan, unfortunately, process placement in Slurm is kind of black magic for sub-node jobs, i.e. jobs that allocate only a small number of CPUs of a node. I have recently raised a similar question here: https://support.schedmd.com/show_bug.cgi?id=19236 And the bottom line was that to "reall
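As a hedged sketch of how one might at least inspect and partly steer placement for such sub-node jobs (./my_app and the task count are placeholders):

  # show where each task actually ends up bound
  srun --ntasks=4 --cpu-bind=verbose,cores true

  # ask for an explicit task distribution and core binding
  srun --ntasks=4 --distribution=block:block --cpu-bind=cores ./my_app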

[slurm-users] Re: Trying to Track Down root Usage

2024-04-29 Thread Juergen Salk via slurm-users
Hi Jason, do you perhaps have (or did you have) a reservation for user root in place? sreport accounts for resources reserved for a user as well (even if not used by jobs), while sacct reports job accounting only. Best regards Jürgen * Jason Simms via slurm-users [240429 10:47]: > Hello all, > > Each week,
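A possible way to check this, sketched with an arbitrary one-week window:

  # look for reservations with Users=root
  scontrol show reservations

  # sreport includes reserved (even idle) time, sacct does not
  sreport cluster utilization start=2024-04-22 end=2024-04-29
  sacct -u root -S 2024-04-22 -E 2024-04-29 -X -o JobID,Elapsed,AllocCPUS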

[slurm-users] Re: Avoiding fragmentation

2024-04-09 Thread Juergen Salk via slurm-users
Hi Gerhard, I am not sure if this counts as an administrative measure, but we do highly encourage our users to always explicitly specify --nodes=n together with --ntasks-per-node=m (rather than just --ntasks=n*m and omitting the --nodes option, which may lead to cores being allocated here and there and even spread across more nodes than necessary).
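For example, a sketch of the kind of batch header we recommend (node and task counts are made up):

  #SBATCH --nodes=2
  #SBATCH --ntasks-per-node=48

  # instead of only:
  # #SBATCH --ntasks=96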