Dear Mahmood,

Could you please show the output of

scontrol show -d job 119
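
(the -d flag adds the detailed allocation information, which should show
what the scheduler is waiting for)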

Best
Marcus

On 12/16/19 5:41 PM, Mahmood Naderan wrote:
Excuse me, I still have a problem. Although I freed memory on the nodes, as shown below,

   RealMemory=64259 AllocMem=1024 FreeMem=61882 Sockets=32 Boards=1
   RealMemory=120705 AllocMem=1024 FreeMem=115257 Sockets=32 Boards=1
   RealMemory=64259 AllocMem=26624 FreeMem=61795 Sockets=32 Boards=1
   RealMemory=64259 AllocMem=1024 FreeMem=51937 Sockets=10 Boards=1

the job is still pending with state PD (Resources).

$ squeue
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
               119       SEA    qe-fb  mahmood PD       0:00      4 (Resources)
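
The reason reported by the scheduler and its estimated start time can also
be queried directly (a sketch using standard squeue --Format field names;
output not shown here):

$ squeue -j 119 --Format=jobid,state,reasonlist,starttime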
$ cat slurm_qe.sh
#!/bin/bash
#SBATCH --job-name=qe-fb
#SBATCH --output=my_fb.log
#SBATCH --partition=SEA
#SBATCH --account=fish
#SBATCH --mem=10GB
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=5

mpirun -np $SLURM_NTASKS /share/apps/q-e-qe-6.5/bin/pw.x -in f_borophene_scf.in



Regards,
Mahmood




On Mon, Dec 16, 2019 at 10:35 AM Kraus, Sebastian <sebastian.kr...@tu-berlin.de> wrote:

    Sorry Mahmood,

    10 GB per node is requested, not 200 GB per node. Since you request
    4 nodes, this adds up to 40 GB in total. The number of tasks per
    node does not affect this limit.
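
    In sbatch terms (a minimal sketch; the --mem-per-cpu line is the
    standard Slurm alternative, not something taken from your script):

        #SBATCH --mem=10GB          # 10 GB per node: 4 nodes x 10 GB = 40 GB total
        #SBATCH --mem-per-cpu=2GB   # alternative: 2 GB x 5 tasks = 10 GB per node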


    Best ;-)
    Sebastian


--
Marcus Wagner, Dipl.-Inf.

IT Center
Abteilung: Systeme und Betrieb
RWTH Aachen University
Seffenter Weg 23
52074 Aachen
Tel: +49 241 80-24383
Fax: +49 241 80-624383
wag...@itc.rwth-aachen.de
www.itc.rwth-aachen.de
