I have a Slurm configuration of 2 hosts with 6 + 4 CPUs (10 CPU slots in total).
I am submitting jobs with sbatch -n <CPU slots> <job script>.
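For concreteness, a typical submission looks like this (myjob.sh is a placeholder name, and the script body is just a sketch of the kind of job I run):

  $ cat myjob.sh
  #!/bin/bash
  #SBATCH --job-name=test
  srun sleep 600            # hold the allocated CPUs for a while

  $ sbatch -n 4 myjob.sh    # ask for 4 CPU slots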
However, I see that even after the running jobs have exhausted all 10 CPU slots, subsequent jobs are still allowed to run! CPU slot availability is also shown as full for the 2 hosts, and no job is found pending.
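I am checking the allocation state with commands along these lines (a sketch; the format strings are just what I find convenient):

  sinfo -N -o "%N %c %C"           # per-node CPU count plus allocated/idle/other/total
  squeue -t RUNNING -o "%i %C %N"  # running jobs, CPUs allocated, and their nodes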
What could be the problem?
My slurm.conf looks like this (host names are changed to generic):
ClusterName=MyCluster
ControlMachine=host1
ControlAddr=<some address>
SlurmUser=slurmsa
#AuthType=auth/munge
StateSaveLocation=/var/spool/slurmd
SlurmdSpoolDir=/var/spool/slurmd
SlurmctldLogFile=/var/log/slurm/slurmctld.log
SlurmdDebug=3
SlurmctldDebug=6
SlurmdLogFile=/var/log/slurm/slurmd.log
AccountingStorageType=accounting_storage/slurmdbd
AccountingStorageHost=host1
#AccountingStoragePass=medslurmpass
#AccountingStoragePass=/var/run/munge/munge.socket.2
AccountingStorageUser=slurmsa
#TaskPlugin=task/cgroup
NodeName=host1 CPUs=6 SocketsPerBoard=3 CoresPerSocket=2 ThreadsPerCore=1 State=UNKNOWN
NodeName=host2 CPUs=4 ThreadsPerCore=1 State=UNKNOWN
PartitionName=debug Nodes=host1,host2 Default=YES MaxTime=INFINITE State=UP
JobAcctGatherType=jobacct_gather/linux
JobAcctGatherFrequency=30
SelectType=select/cons_tres
SelectTypeParameters=CR_CPU
TaskPlugin=task/affinity
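In case it is relevant: I understand slurmd -C can be run on each host to print the hardware that node actually detects, for comparison against the NodeName lines above (a sketch, not actual output from my hosts):

  # run on host1 and on host2:
  slurmd -C
  # prints a line like:
  # NodeName=hostX CPUs=... Boards=... SocketsPerBoard=... CoresPerSocket=... ThreadsPerCore=... RealMemory=...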
Thanks in advance for any help!

Regards,
Bhaskar