Another thought - are we getting mixed up between hyperthreaded and physical cores here? I don't see how 12 hyperthreaded cores translate to 8, though - with two threads per core it would be 6!
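If it helps, lscpu on the compute node shows the split directly. Illustrative output for a hypothetical single-socket, 6-core node with two threads per core (the numbers below are made up to match that case):

$ lscpu | grep -E '^CPU\(s\)|Thread|Core|Socket'
CPU(s):                12
Thread(s) per core:    2
Core(s) per socket:    6
Socket(s):             1

Two further checks are sketched below the quoted thread.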
On 11 July 2018 at 10:30, John Hearns <hear...@googlemail.com> wrote:
> Mahmood,
> I am sure you have checked this. Try running
>     ps -eaf --forest
> while a job is running.
> I often find the --forest option helps to understand how batch jobs
> are being run.
>
> On 11 July 2018 at 09:12, Mahmood Naderan <mahmood...@gmail.com> wrote:
>
>>> Check the Gaussian log file for mention of its using just 8 CPUs -- just
>>> because there are 12 CPUs available doesn't mean the program uses all
>>> of them. It will scale back if 12 isn't a good match to the problem,
>>> as I recall.
>>
>> Well, in the log file it says
>>
>> ******************************************
>> %nprocshared=12
>> Will use up to 12 processors via shared memory.
>> %mem=18GB
>> %chk=trimer.chk
>>
>> Maybe it scales down to a good match, but I haven't seen that before.
>> That was why I asked the question.
>>
>> One more question: does it matter whether or not the user specifies
>> --account in the sbatch script?
>>
>> [root@rocks7 ~]# sacctmgr list association format=partition,account,user,grptres,maxwall
>>  Partition    Account       User       GrpTRES     MaxWall
>> ---------- ---------- ---------- ------------- -----------
>>    emerald         z3       noor cpu=12,mem=1+ 30-00:00:00
>>
>> [noor@rocks7 ~]$ grep nprocshared trimer.gjf
>> %nprocshared=12
>> [noor@rocks7 ~]$ cat trimer.sh
>> #!/bin/bash
>> #SBATCH --output=trimer.out
>> #SBATCH --job-name=trimer
>> #SBATCH --ntasks=12
>> #SBATCH --mem=18GB
>> #SBATCH --partition=EMERALD
>> g09 trimer.gjf
>>
>> Regards,
>> Mahmood
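Two follow-up checks, purely as sketches. First, once you have spotted the g09 process in the ps --forest output, taskset will report which CPUs the job's affinity mask actually lets it use (the PID 12345 below is made up):

$ ps -eaf --forest          # locate the g09 process under slurmstepd
$ taskset -cp 12345         # 12345 = hypothetical g09 PID
pid 12345's current affinity list: 0-11

If that list contains fewer than 12 CPUs, Gaussian's threads are being confined there regardless of %nprocshared=12, and that would be the first thing to chase.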
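Second, since %nprocshared makes g09 one multithreaded process rather than 12 independent tasks, the request is more usually written as one task with 12 CPUs. A minimal sketch of trimer.sh along those lines, keeping everything else from the script above:

#!/bin/bash
#SBATCH --output=trimer.out
#SBATCH --job-name=trimer
# one task: g09 is a single process running 12 shared-memory threads
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=12
#SBATCH --mem=18GB
#SBATCH --partition=EMERALD
g09 trimer.gjf

As for --account: when it is omitted, Slurm falls back to the user's default account, so with the single association shown in the sacctmgr output it should make no difference here; it starts to matter once a user has several associations to choose between.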