ent between jobs, and number of
> jobs). We had it on and it nearly ran us out of space on our database host.
> That said, the data can be really useful depending on the situation.
>
> -Paul Edmon-
>
> On 8/7/2024 8:51 AM, Juergen Salk via slurm-users wrote:
> > Hi Steffen
Hi Steffen,
not sure if this is what you are looking for, but with
`AccountingStoreFlags=job_env`
set in slurm.conf, the batch job environment will be stored in the
accounting database and can later be retrieved with the
`sacct -j <jobid> --env-vars` command.
We find this quite useful for debugging purposes.
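For reference, a minimal sketch of how that fits together (job id
12345 is just a placeholder):

    # slurm.conf
    AccountingStoreFlags=job_env

    # later, retrieve the stored environment of job 12345:
    sacct -j 12345 --env-vars

Keep in mind that storing the environment of every job can make the
accounting database grow considerably.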
Hi,
to my very best knowledge, MaxRSS does report the aggregated memory
consumption of all tasks, but this includes all the shared libraries
that the individual processes use, even though a shared library is
only loaded into memory once, regardless of how many processes use it.
So shared libraries do count once per process.
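To give a made-up example: if a job runs 4 tasks that each map the
same 200 MB shared library, the summed per-process RSS figures will
include those 200 MB four times, i.e. roughly 4 x 200 MB = 800 MB,
even though the library occupies only about 200 MB of physical memory
once.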
Hi Alan,
unfortunately, process placement in Slurm is kind of black magic for
sub-node jobs, i.e. jobs that allocate only a small number of a
node's CPUs.
I have recently raised a similar question here:
https://support.schedmd.com/show_bug.cgi?id=19236
And the bottom line was that to "reall
Hi Jason,
do or did you maybe have a reservation for user root in place?
sreport also accounts for resources reserved for a user (even if not
used by jobs), while sacct reports job accounting only.
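If you want to check, currently defined (active or future)
reservations can be listed with

    scontrol show reservation

Reservations that have already ended should still be visible in the
accounting database (sacctmgr show reservation, if I remember
correctly).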
Best regards
Jürgen
* Jason Simms via slurm-users [240429 10:47]:
> Hello all,
>
> Each week,
Hi Gerhard,
I am not sure if this counts as an administrative measure, but we do
highly encourage our users to always explicitly specify --nodes=n
together with --ntasks-per-node=m (rather than just --ntasks=n*m and
omitting the --nodes option, which may lead to cores being allocated
here and there and eve
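For illustration, a job script header along those lines (the numbers
are arbitrary):

    #SBATCH --nodes=2
    #SBATCH --ntasks-per-node=24

rather than only

    #SBATCH --ntasks=48

which leaves the distribution of tasks across nodes entirely up to the
scheduler.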