Marcus, maybe you can try playing with --mem instead? We recommend that our users
use --mem instead of --mem-per-cpu/task, as it makes it easier to request the
right amount of memory for a job. --mem is the amount of memory for the whole
job, so there is no multiplying of memory by the number of CPUs or tasks.
One thing I forgot.
On 8/20/19 4:58 PM, Christopher Benjamin Coffey wrote:
Hi Marcus,
What is the reason to add "--mem-per-cpu" when the job already has exclusive
access to the node?
The user (normally) does not set --exclusive directly. We have several
accounts whose jobs by default should be exclusive.
Hi Chris,
it is not my intention to run such a job myself; I'm just trying to reconstruct
a bad behaviour. My users are submitting such jobs.
The output of job 2 was a bad example, as I saw later that the job was not
running yet. That output changes for a running job. It looks more like:
NumNode
Hi Marcus,
What is the reason to add "--mem-per-cpu" when the job already has exclusive
access to the node? Your job already has access to all of the memory and all of
the cores on the system. Also note, for non-MPI code such as a single-core job
or a shared-memory threaded job, you want to ask for
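For illustration (the command and sizes below are made up, not your actual job):
on an exclusive node a shared-memory job can simply be submitted as

    sbatch --exclusive --nodes=1 --ntasks=1 --cpus-per-task=16 job.sh

without any --mem or --mem-per-cpu at all, since all of the memory and cores on
the node are already available to it.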
Just made another test.
Thank god, the exclusivity is not "destroyed" completely: only one job can run
on the node when the job is exclusive. Nonetheless, this is somewhat
unintuitive.
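The kind of submission I am testing with looks roughly like this (the sizes and
the sleep payload are stand-ins, not the real jobs):

    sbatch --exclusive --ntasks=1 --mem-per-cpu=2G --wrap="sleep 600"

i.e. an exclusive job that nevertheless specifies --mem-per-cpu.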
I wonder if that also has an influence on the cgroups and the process
affinity/binding. I will do some more tests.
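A sketch of what I plan to look at (assuming task/cgroup with cgroup v1; the
uid, job id and pid below are placeholders and the exact hierarchy may differ
per setup):

    # CPUs and memory limit the job's cgroup actually got on the compute node
    cat /sys/fs/cgroup/cpuset/slurm/uid_<uid>/job_<jobid>/cpuset.cpus
    cat /sys/fs/cgroup/memory/slurm/uid_<uid>/job_<jobid>/memory.limit_in_bytes
    # CPU affinity of a process belonging to the job
    taskset -cp <pid>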
Hi Folks,
I think I've stumbled over a BUG in Slurm regarding exclusiveness. It might
also be that I've misinterpreted something; in that case, I would be happy if
someone could explain it to me.
Some background: I have set PriorityFlags=MAX_TRES.
The TRESBillingWeights are "CPU=1.0,Mem=
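(For reference, both of these live in slurm.conf; the weight values below are
placeholders in the style of the Slurm documentation, not our real numbers:

    PriorityFlags=MAX_TRES
    # placeholder weights, not the site's actual values
    PartitionName=compute Nodes=node[01-10] TRESBillingWeights="CPU=1.0,Mem=0.25G"

With MAX_TRES, a job's billing is taken as the maximum of the weighted per-node
TRES rather than their sum.)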