Hello Manuel,
One way I know of is to use scontrol with its '-d' (detail) option:
scontrol -d show job=<JOBID>
The index of the assigned GPU is then listed in a line like this:
Nodes=node01 CPU_IDs=14 Mem=10240 GRES=gpu(IDX:2)
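If you want this for all running jobs at once, a small loop over squeue output might look like the sketch below. It assumes GPUs are configured as a GRES named "gpu", so the detail output contains a "GRES=gpu(IDX:n)" token as in the example above; the function name is just for illustration:

```shell
# Sketch: print "<jobid> GRES=gpu(IDX:n)" for every running job
# that has a GPU allocated (assumes GRES name "gpu").
list_job_gpus() {
    # %A = job ID; -h suppresses the header, -t RUNNING filters by state
    for jobid in $(squeue -h -t RUNNING -o '%A'); do
        # Extract the GRES=gpu(...) token from the detailed job record
        scontrol -d show job="$jobid" \
            | grep -o 'GRES=gpu[^ ]*' \
            | sed "s/^/$jobid /"
    done
}
```

Running list_job_gpus on a login node should then give one line per GPU job, e.g. "12345 GRES=gpu(IDX:2)".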
If there are other ways to achieve the same thing, I'd be interested as well.
Best,
Stephan
-------------------------------------------------------------------
Stephan Roth | ISG.EE D-ITET ETH Zurich | http://www.isg.ee.ethz.ch
+4144 632 30 59 | ETF D 104 | Sternwartstrasse 7 | 8092 Zurich
-------------------------------------------------------------------
On 23.04.20 10:48, Holtgrewe, Manuel wrote:
Dear all,
is it possible to find out which GPU was assigned to which job through
squeue or sacct?
My motivation is as follows: some users write jobs with bad resource
usage (e.g., 1h CPU to precompute, followed by 1h GPU to process, and so
on). I don't care so much about CPUs at the moment, as they are not the
bottleneck, but GPUs are.
What is the best way to approach this?
Best wishes,
--
Dr. Manuel Holtgrewe, Dipl.-Inform.
Bioinformatician
Core Unit Bioinformatics – CUBI
Berlin Institute of Health / Max Delbrück Center for Molecular Medicine
in the Helmholtz Association / Charité – Universitätsmedizin Berlin
Visiting Address: Invalidenstr. 80, 3rd Floor, Room 03 028, 10117 Berlin
Postal Address: Chariteplatz 1, 10117 Berlin
E-Mail: manuel.holtgr...@bihealth.de
Phone: +49 30 450 543 607
Fax: +49 30 450 7 543 901
Web: cubi.bihealth.org www.bihealth.org www.mdc-berlin.de www.charite.de