I can't test this right now, but possibly something like:


squeue -j <jobid> -O 'name,nodes,tres-per-node,sct'


From squeue man page https://slurm.schedmd.com/squeue.html:

sct
    Number of requested sockets, cores, and threads (S:C:T) per node for the job. When (S:C:T) has not been set, "*" is displayed. (Valid for jobs only)

tres-per-node
    Print the trackable resources per node requested by the job or job step.
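Since I can't run squeue here, a quick sketch of how you might post-process that output. The default --Format ("-O") columns are fixed-width (20 characters each), so slicing by column width is safer than splitting on whitespace when job names contain spaces. The sample text below is made up, not real cluster output, and the header labels are assumptions:

```python
# Hypothetical sample of `squeue -O 'name,nodes,tres-per-node,sct'` output;
# column labels and values here are assumptions, not captured output.
SAMPLE = (
    "NAME                NODES               TRES_PER_NODE       S:C:T               \n"
    "v1.20               4                   gres/gpu:1          *                   \n"
)

def parse_fixed_width(text, width=20):
    """Split each line of squeue -O output into fixed-width fields."""
    rows = []
    for line in text.rstrip("\n").split("\n"):
        fields = [line[i:i + width].strip() for i in range(0, len(line), width)]
        rows.append(fields)
    header, *data = rows
    # Map each data row to {column label: value}
    return [dict(zip(header, row)) for row in data]

jobs = parse_fixed_width(SAMPLE)
print(jobs[0]["NAME"], jobs[0]["TRES_PER_NODE"])
```

Note you can also widen a column with a ":<width>" suffix (e.g. 'name:40') if names get truncated.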


Again, I can't test just now, so no idea whether this applies to your use case.


On 10/18/19 9:51 PM, Mark Hahn wrote:
$ scontrol --details show job 1653838
JobId=1653838 JobName=v1.20
...
    Nodes=r00g01 CPU_IDs=31-35 Mem=5120 GRES_IDX=
    Nodes=r00n16 CPU_IDs=34-35 Mem=2048 GRES_IDX=
    Nodes=r00n20 CPU_IDs=12-17,30-35 Mem=12288 GRES_IDX=
    Nodes=r01n16 CPU_IDs=15 Mem=1024 GRES_IDX=
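If anyone wants to collect this programmatically, a minimal sketch that expands the "CPU_IDs=" ranges from the output above into explicit per-node CPU lists. It assumes one plain node name per "Nodes=" line, as in this excerpt (scontrol can also emit bracketed host ranges, which this doesn't handle):

```python
import re

def expand_ids(spec):
    """Expand a Slurm ID list like '12-17,30-35' or '15' into ints."""
    ids = []
    for part in spec.split(","):
        if "-" in part:
            lo, hi = map(int, part.split("-"))
            ids.extend(range(lo, hi + 1))
        else:
            ids.append(int(part))
    return ids

def parse_detail_lines(text):
    """Map node name -> CPU list from 'Nodes=... CPU_IDs=...' lines."""
    alloc = {}
    for m in re.finditer(r"Nodes=(\S+) CPU_IDs=(\S+)", text):
        alloc[m.group(1)] = expand_ids(m.group(2))
    return alloc

sample = """\
Nodes=r00g01 CPU_IDs=31-35 Mem=5120 GRES_IDX=
Nodes=r00n20 CPU_IDs=12-17,30-35 Mem=12288 GRES_IDX=
"""
print(parse_detail_lines(sample))
# → {'r00g01': [31, 32, 33, 34, 35],
#    'r00n20': [12, 13, 14, 15, 16, 17, 30, 31, 32, 33, 34, 35]}
```

You'd feed it the text from `scontrol --details show job <jobid>` (e.g. via subprocess) rather than a hard-coded sample.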

thanks for sharing this!  we've had a lot of discussion
on how to collect this information as well, even whether it would be worth doing in a prolog script...

regards,
--
Mark Hahn | SHARCnet Sysadmin | h...@sharcnet.ca | http://www.sharcnet.ca
          | McMaster RHPCS    | h...@mcmaster.ca | 905 525 9140 x24687
          | Compute/Calcul Canada                | http://www.computecanada.ca
