> On Feb 5, 2018, at 4:11 PM, Kilian Cavalotti 
> <kilian.cavalotti.w...@gmail.com> wrote:
> 
> Hi Ryan,
> 
> On Mon, Feb 5, 2018 at 8:06 AM, Ryan Novosielski <novos...@rutgers.edu> wrote:
>> We currently use SLURM 16.05.10 and one of our staff asked how they
>> can check for allocated GPUs, as you might check allocated CPUs by
>> doing scontrol show node. I could have sworn that you can see both,
>> but I see that only CPUs are visible. One of our staff recommended
>> using sacct to see it. Is there a better way?
> 
>    scontrol -d show node <nodename>
> 
> will display a GresUsed line that shows how many of those Gres are
> allocated, if that's what you're looking for. You'll get a count, but
> not individual IDs, AFAIK.

Exactly what we were looking for, thank you.

PS: Does anyone know what GresDrain is? It sounds like a feature we’ve been looking
for but believed didn’t exist (offlining some number of GRES on a node). I’m assuming
the field is there but the feature isn’t implemented yet?

[novosirj@perceval1 install-files]$ scontrol -d show node cuda001
NodeName=cuda001 Arch=x86_64 CoresPerSocket=12
   CPUAlloc=24 CPUErr=0 CPUTot=24 CPULoad=24.03
   AvailableFeatures=(null)
   ActiveFeatures=(null)
   Gres=gpu:4
   GresDrain=N/A
   GresUsed=gpu:4
   NodeAddr=cuda001 NodeHostName=cuda001 Version=16.05
   OS=Linux RealMemory=128241 AllocMem=124000 FreeMem=105469 Sockets=2 Boards=1
   State=ALLOCATED ThreadsPerCore=1 TmpDisk=0 Weight=4 Owner=N/A MCS_label=N/A
   BootTime=2018-01-27T23:01:42 SlurmdStartTime=2018-01-28T00:17:45
   CapWatts=n/a
   CurrentWatts=0 LowestJoules=0 ConsumedJoules=0
   ExtSensorsJoules=n/s ExtSensorsWatts=0 ExtSensorsTemp=n/s
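
In case a per-job view is useful too: the detailed job output and sacct should be
able to show something similar. I haven’t verified the exact field names on 16.05,
so treat these as a sketch (<jobid> is just a placeholder):

   scontrol -d show job <jobid>             # detailed view; may include a GRES_IDX line with the GPU indices the job holds
   sacct -j <jobid> -o JobID,AllocTRES%40   # AllocTRES should list gres/gpu=N if gres/gpu is in AccountingStorageTRES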

--
____
|| \\UTGERS,     |---------------------------*O*---------------------------
||_// the State  |         Ryan Novosielski - novos...@rutgers.edu
|| \\ University | Sr. Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus
||  \\    of NJ  | Office of Advanced Research Computing - MSB C630, Newark
     `'
