In addition to checking under /sys/fs/cgroup as Tim suggested, if this is just to 
convince yourself that the CPU restriction is working, you could also open 
`top` on the host running the job and observe that %CPU is now being held to 
200.0 or lower (or, if the job runs multiple processes, that their %CPU values 
sum to roughly that) instead of 4800 or whatever all the cores would add up to.
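For a quick look at just that job's processes, something like this works (a rough 
sketch; 12345 is a placeholder job ID, and it assumes scontrol listpids is run on 
the node where the job's steps are actually running):

# One batch-mode pass of top over the PIDs belonging to job 12345 on this node.
# scontrol listpids prints a header line first, hence NR>1; paste joins the
# PIDs into the comma-separated list that top -p expects.
top -b -n 1 -p "$(scontrol listpids 12345 | awk 'NR>1 {print $1}' | paste -sd,)"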


________________________________________
From: Cutts, Tim via slurm-users <slurm-users@lists.schedmd.com>
Sent: Wednesday, 26 March 2025 07:32
To: Gestió Servidors; slurm-users@lists.schedmd.com
Subject: [slurm-users] Re: Using more cores/CPUs that requested with

Cgroups don’t take effect until the job has started.  It’s a bit clunky, but 
you can do things like this:

inspect_job_cgroup_memory ()
{
    # Take the JobId and UserName of the last job listed by squeue (pass
    # squeue filters such as -u <user> as arguments to this function)
    set -- $(squeue "$@" -O JobId,UserName | sed -n '$p');
    # Launch a step inside that job, as its owner, and read the job's
    # memory usage from the cgroup (v1) hierarchy
    sudo -u "$2" srun --pty --jobid "$1" bash -c 'cat /sys/fs/cgroup/memory/slurm/uid_$(id -u)/job_${SLURM_JOB_ID}/memory.usage_in_bytes'
}

There are lots of other files in that filesystem hierarchy that report on other 
things, such as cpusets, I/O, etc.
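For example, here is a variation on the function above that reports which CPUs the 
job is confined to (a sketch assuming the same cgroup v1 layout as the memory 
example; on cgroup v2 nodes the hierarchy and file names are different):

inspect_job_cgroup_cpuset ()
{
    # Same pattern as the memory function: find the job and its owner,
    # then read the cpuset the job has been pinned to
    set -- $(squeue "$@" -O JobId,UserName | sed -n '$p');
    sudo -u "$2" srun --pty --jobid "$1" bash -c 'cat /sys/fs/cgroup/cpuset/slurm/uid_$(id -u)/job_${SLURM_JOB_ID}/cpuset.cpus'
}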

Obviously if you’re not the admin of the system, you can only do this for your 
own jobs, and then you don’t need the sudo part of the shell function.

Tim

[...]


From: Gestió Servidors via slurm-users <slurm-users@lists.schedmd.com>
Date: Wednesday, 26 March 2025 at 7:50 am
To: slurm-users@lists.schedmd.com <slurm-users@lists.schedmd.com>
Subject: [slurm-users] Re: Using more cores/CPUs that requested with

Hello,

Thanks for your answers. I will try now! One more question: is there any way 
to check whether cgroup restrictions are working correctly during a “running” job 
or during the SLURM scheduling process?

Thanks again!

________________________________

[...]

