Re: [slurm-users] Number of allocated cores/threads ..

2022-12-13 Thread Sefa Arslan
=0-31 Mem=163840 GRES=
> Nodes=sh03-01n48 CPU_IDs=3-30 Mem=143360 GRES=
> Nodes=sh03-01n53 CPU_IDs=7-14,24-25,27-30 Mem=71680 GRES=
> Nodes=sh03-01n59 CPU_IDs=0-1,6-7,10-23 Mem=92160 GRES=
>
> Cheers,
> --
> Kilian
>
> On Mon, Dec 12, 2022 at 4:01 AM Sefa Arsla
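
The per-node CPU_IDs lines quoted above come from scontrol's detailed job view. A minimal sketch of the command that produces them, with <jobid> standing in for the job in question:

    # Detailed allocation of a running job, including per-node CPU_IDs and memory
    scontrol -d show job <jobid>

    # Keep only the per-node allocation lines
    scontrol -d show job <jobid> | grep 'CPU_IDs='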

[slurm-users] Number of allocated cores/threads ..

2022-12-12 Thread Sefa Arslan
Hi All, Is there a way to find the number of allocated cores on a node for a particular multinode job? squeue and sacct give only the minimum number of cores per node or the total number of cores for a job. Regards, Sefa..

Re: [slurm-users] resetting SchedNodeList

2020-03-23 Thread Sefa Arslan
Thanks Paul. Holding and releasing or re-queueing the job didn't clear the SchedNodeList value, due to the backfill mechanism. I could clear it only by restarting slurmctld. Sefa Arslan On Mon, 23 Mar 2020 at 16:25, Paul Edmon wrote: > You could try holding the job and the releas
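
A minimal sketch of the operations described above; the job ID is a placeholder, and the restart line assumes slurmctld is managed by systemd, neither of which comes from the original message:

    # Hold and release the job, or requeue it (did not clear SchedNodeList here)
    scontrol hold <jobid>
    scontrol release <jobid>
    scontrol requeue <jobid>

    # What finally cleared the stale SchedNodeList in this report
    systemctl restart slurmctld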

[slurm-users] resetting SchedNodeList

2020-03-23 Thread Sefa Arslan
Hi, Due to a lack of resources in a partition, I moved the job to another partition and increased its priority to the top value. Although there are enough resources for the job to start, the updated jobs have not started yet. When I looked using "scontrol show job <jobid>", I saw the SchedNodeList value is no
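
A minimal sketch of the kind of update described above; the job ID, partition name, and priority value are placeholders, not taken from the original message:

    # Move the pending job to another partition
    scontrol update JobId=<jobid> Partition=<other_partition>

    # Raise its priority (requires operator or admin privileges)
    scontrol update JobId=<jobid> Priority=<top_value>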

[slurm-users] Re: Available gpus ?

2018-03-16 Thread sefa.arslan
You may use sinfo with the %G output format parameter: https://slurm.schedmd.com/sinfo.html Sefa Arslan -- Original message -- From: jayraj shah Date: Fri, 16 Mar 2018 21:18 To: slurm-us...@schedmd.com Subject: [slurm-users] Available gpus ? I am trying to find out how as a user, I can get information
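
A minimal sketch of the suggestion above, using sinfo's %G (generic resources) field; the exact column combinations shown are an assumption for illustration, not from the original message:

    # Show each node together with its configured GRES (e.g. GPUs)
    sinfo -N -o "%N %G"

    # Partition-level view with node counts and GRES
    sinfo -o "%P %D %G"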

[slurm-users] sshare raw usage

2018-03-13 Thread Sefa Arslan
Hi, We have the historical information of every job in slurmdbd and the job_completions logs. We want to rebuild the assoc_usage file (and the other related state files) from the beginning. Is it possible? Current Slurm version is 17.11.2. Regards.. Sefa ARSLAN

Re: [slurm-users] GPU allocation problems

2018-03-12 Thread Sefa Arslan
195:1 rwm' for '/sys/fs/cgroup/devices/slurm/uid_1487/job_114097' ... I have read some posts about ConstrainDevices and some bug fixes in slurm-17.11.0, and then I upgraded Slurm to 17.11.4; the above log belongs to Slurm 17.11.4. Regards Sefa ARSLAN > Hi, > > This is jus
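
For context, ConstrainDevices is set in cgroup.conf. A minimal sketch of such a configuration; these exact lines are an assumption for illustration, not taken from the poster's setup:

    # cgroup.conf
    CgroupAutomount=yes
    ConstrainCores=yes
    ConstrainDevices=yes

Device constraint also relies on the GPU device files being declared in gres.conf (e.g. File=/dev/nvidia0) so that allocated devices can be whitelisted in the job's cgroup.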

[slurm-users] GPU allocation problems

2018-03-12 Thread Sefa Arslan
the gpu configuration from Gres=gpu:2 to Gres=gpu:no_consume:2 so that it could be used simultaneously by many jobs, the system lets me use all the cards regardless of how many cards I request.. Regards, Sefa ARSLAN
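
A minimal sketch of the two node GRES definitions being compared above; the node name and count are placeholders, not from the original message:

    # slurm.conf -- consumable GPUs: job requests are counted against the total of 2
    NodeName=gpunode01 Gres=gpu:2

    # slurm.conf -- non-consumable GPUs: the count is never decremented,
    # so multiple jobs can land on the same devices
    NodeName=gpunode01 Gres=gpu:no_consume:2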