Michael and Reuti made some good suggestions.

- Don't confuse OpenMP with Open MPI.

- The OMP_NUM_THREADS environment variable is an OpenMP thing, not an Open MPI thing, so it will have no effect here. You might as well remove it from your PBS script (it won't have any effect on LAM, either).

- Your PBS script is asking for 2 nodes, but only one processor per node (-l nodes=2). It's been a while since I've used PBS/Torque regularly, but I think the right syntax is "-l nodes=2:ppn=2", which gives you 2 nodes with 2 processors each (based on the context of your mail, this is what I assume you want -- but adjust the values as you see fit). A sketch of a corrected script follows the hostname test below.

- A good test to see if this stuff is working properly is to get an interactive PBS job and play with running "hostname". For example, something like this:

        $ qsub -I -qdebug -lnodes=2:ppn=2
        [wait for shell prompt on the mother superior node]
        $ mpirun -np 4 hostname

You should see:

host1
host1
host2
host2

(with your hostnames, of course, and possibly in a different order, since stdout ordering is not guaranteed)

That will confirm that you're running on all 4 processors that PBS/Torque allocated to you. If that works, then adjust your script accordingly and try it with your real application.
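
For reference, here's a rough sketch of what a corrected script could look like -- the queue, paths, and file names are just carried over from your original script, so adjust them as needed:

        #!/bin/sh
        #PBS -e job.err
        #PBS -o job.log
        #PBS -m ae
        #PBS -q debug
        #PBS -l nodes=2:ppn=2

        # run from the directory the job was submitted from
        cd $PBS_O_WORKDIR

        # 2 nodes x 2 processors each = 4 MPI processes; mpirun should pick
        # up the host list from the PBS/Torque allocation (no OMP_NUM_THREADS
        # needed -- that's an OpenMP setting, not an Open MPI one)
        /usr/local/openmp-1.1.1/bin/mpirun -np 4 ../source/test.exe < input.dat > Output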

Hope that helps.


On Oct 12, 2006, at 8:23 AM, amane001 wrote:

Thanks for your reply. I actually meant OpenMPI (from open-mpi.org), and I compiled it using

./configure FC=ifort F77=ifort F90=ifort --prefix=$OPENMP_DIR

I may be asking some dumb questions here, but I'm really a beginner, so please bear with me.

~Amane


On 10/12/06, Reuti <re...@staff.uni-marburg.de> wrote:

Hi,

On 12.10.2006, at 09:52, amane001 wrote:

> Hello all,
>
> I recently switched to OpenMP from LAM-MPI for my code. I'm trying
> to run my test code with a PBS scheduler on our local cluster. The
> PBS script is shown below. When the job is executed, however, only
> one CPU is used for running test.exe. Another, more confusing,
> aspect is the behavior of the three lines highlighted in the code
> below: even though I set OMP_NUM_THREADS = 2, the print statement
> on the next line says its value is 1.

Is it your intention to mix OpenMPI and OpenMP? Which compilation
flags did you use for OpenMPI?
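
If it helps to double-check, ompi_info from the same installation should report which compilers the build was configured with -- e.g. something like this (assuming $OPENMP_DIR is still set to the prefix you used at configure time):

        $ $OPENMP_DIR/bin/ompi_info | grep -i compiler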

> Any ideas where I could be going wrong?
>
> Thank you all for your help in advance!
> ~Amane
>
> #!/bin/sh
>
> #PBS -e job.err
> #PBS -o job.log
> #PBS -m ae
> #PBS -q debug
> #PBS -l nodes=2
>
>
> cd $PBS_O_WORKDIR
> ##################################################
> setenv OMP_NUM_THREADS 2

setenv is csh; you are using (ba)sh.

export OMP_NUM_THREADS=2

but most likely you won't need it at all.
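
For reference, the two syntaxes side by side; only the sh form works with the #!/bin/sh line at the top of your script:

        # sh / bash -- matches the script's #!/bin/sh shebang
        export OMP_NUM_THREADS=2

        # csh / tcsh equivalent -- will not work under /bin/sh
        setenv OMP_NUM_THREADS 2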

-- Reuti

> echo I hope you find the correct number of processors
> echo $OMP_NUM_THREADS
> ##################################################
> ######### above 3 lines produce the following output --
> # I hope you find the correct number of processors
> # 1
> ##################################################
> /usr/local/openmp-1.1.1/bin/mpirun -np 2 ../source/test.exe < input.dat > Output
>


--
Jeff Squyres
Server Virtualization Business Unit
Cisco Systems
