What does scontrol show partition EMERALD give you? I suspect its 
AllowAccounts output won't match your /etc/slurm/parts settings: the 
rejection message lists the file's accounts minus z33, so the running 
slurmctld is probably still using an older copy of that configuration.
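A minimal way to check, as a sketch (partition and account names taken from your output; adjust paths if your setup differs):

```shell
# Show the partition definition the running slurmctld actually uses,
# and compare its AllowAccounts list against /etc/slurm/parts.
scontrol show partition EMERALD | grep -io 'AllowAccounts=[^ ]*'

# Ask slurmctld to re-read its configuration files in place.
scontrol reconfigure

# Verify z33 now appears in the list, then the held job should start.
scontrol show partition EMERALD | grep -io 'AllowAccounts=[^ ]*'
```

If the list still disagrees with the file after a reconfigure, check that /etc/slurm/parts is actually pulled in (e.g. via an Include line in slurm.conf) on the node where slurmctld runs, and that the file there matches the copy you edited.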

> On Dec 2, 2018, at 12:34 AM, Mahmood Naderan <mahmood...@gmail.com> wrote:
> 
> Hi
> Although I have created an account and associated it with a partition, the 
> submitted job remains in PD with an error saying the account is not allowed 
> in this partition.
> 
> Please see the output below:
> 
> 
> [root@rocks7 mahmood]# sacctmgr list association 
> format=account,user,partition,association,grptres,maxwall | grep z33
>        z33
>        z33      azimi    emerald            cpu=24,mem=1+ 30-00:00:00
> [root@rocks7 mahmood]#
> [root@rocks7 mahmood]# cat /etc/slurm/parts
> PartitionName=WHEEL RootOnly=yes Priority=1000 Nodes=ALL
> PartitionName=DIAMOND AllowAccounts=monthly Nodes=compute-0-[0-1]
> PartitionName=EMERALD AllowAccounts=em1,z1,z2,em4,z3,z33,z5,z9 
> Nodes=compute-0-[2-3],rocks7
> [root@rocks7 mahmood]#  systemctl restart slurmd
> [root@rocks7 mahmood]#  systemctl restart slurmctld
> [root@rocks7 mahmood]# su - azimi
> Last login: Sun Dec  2 09:55:21 +0330 2018 from 192.168.250.1 on pts/12
> [azimi@rocks7 ~] $ cd OpenFOAM/azimi-1.7.1/run/convdiver/
> [azimi@rocks7 convdiver] $ cat slurm_convdiver.sh | head -n 7
> #!/bin/bash
> #SBATCH --job-name=convdiver
> #SBATCH --output=convdiver
> #SBATCH --partition=EMERALD
> #SBATCH --account=z33
> #SBATCH --mem=18GB
> #SBATCH --ntasks=12
> [azimi@rocks7 convdiver]$ sbatch slurm_convdiver.sh
> Submitted batch job 1759
> [azimi@rocks7 convdiver]$ squeue
>              JOBID PARTITION     NAME     USER ST       TIME  NODES 
> NODELIST(REASON)
>               1759   EMERALD convdive    azimi PD       0:00      1 (Job's 
> account not permitted to use this partition (EMERALD allows 
> em1,z1,z2,em4,z3,z5,z9 not z33))
> 
> Any guesses?
> 
> 
> Regards,
> Mahmood
> 
> 