Hi Jeff,
I didn't have much time to test this morning, so I have checked it again
now. The trouble seems to depend on the number of nodes used.
This works (nodes < 4):

  #PBS -l nodes=2:ppn=8
  mpiexec -bynode -np 4 ./my_program
  (with OMP_NUM_THREADS=4)
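(For reference, those pieces fit together in a Torque script roughly like
this; the shebang and the cd line are just typical boilerplate I am
assuming, not copied from my actual script:)

  #!/bin/sh
  #PBS -l nodes=2:ppn=8
  cd $PBS_O_WORKDIR
  export OMP_NUM_THREADS=4
  # 4 ranks spread by node = 2 ranks per node x 4 threads = 8 cores/node
  mpiexec -bynode -np 4 ./my_program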
This causes an error (nodes >= 4):

  mpiexec ...
Hi,
On 19.03.2013 at 08:00, tmish...@jcity.maeda.co.jp wrote:
> I didn't have much time to test this morning, so I have checked it again
> now. The trouble seems to depend on the number of nodes used.
>
> This works (nodes < 4):
>   #PBS -l nodes=2:ppn=8
>   mpiexec -bynode -np 4 ./my_program
Hi Tetsuya Mishima,

mpiexec offers a number of options that you could try:
--bynode,
--pernode,
--npernode,
--bysocket,
--bycore,
--cpus-per-proc,
--cpus-per-rank,
--rankfile
and more.
Most likely one or more of them will fit your needs.
There are also associated flags to bind processes, e.g. -bind-to-core or
-bind-to-socket.
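For example (a sketch; the process and core counts are illustrative, not
taken from your job):

  # one rank per node, letting the OpenMP threads fill the cores:
  mpiexec -pernode ./my_program
  # two ranks per node, each rank bound to 4 cores:
  mpiexec -npernode 2 -cpus-per-proc 4 -bind-to-core ./my_program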
Hi Reuti and Gus,
Thank you for your comments.
Our cluster is somewhat heterogeneous: it has nodes with 4, 8, and 32
cores. I used 8-core nodes for "-l nodes=4:ppn=8" and 4-core nodes for
"-l nodes=2:ppn=4". (Strictly speaking, Torque picked the appropriate
nodes.)
As I mentioned before, I usually ...
Hi Tetsuya,
Your script that edits $PBS_NODEFILE into a separate hostfile
is very similar to some that I used here for
hybrid OpenMP+MPI programs on older versions of OMPI.
I haven't tried this in 1.6.X, but it looks like you did and it works
there too.
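The idea in mine was roughly the following (a sketch; the hostfile name
and the exact edit will differ per site and per script):

  # $PBS_NODEFILE lists each node once per slot (i.e. ppn times);
  # collapse it to one line per node, so that mpiexec starts one rank
  # per node and the OpenMP threads fill the remaining cores:
  sort -u $PBS_NODEFILE > hostfile
  mpiexec -hostfile hostfile -np $(wc -l < hostfile) ./my_program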
I haven't tried 1.7 either.
Since we run production ...
Hi Gus,
Thank you for your comments; I understand your advice.
Our script used to be of the --npernode type as well.
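(Roughly something like the following, with illustrative counts matched
to 8-core nodes, not the exact lines from our old script:

  export OMP_NUM_THREADS=4
  mpiexec -npernode 2 ./my_program

so that 2 ranks x 4 threads fill each node.)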
As I mentioned before, our cluster consists of nodes with 4, 8,
and 32 cores, although it was homogeneous when it started.
Furthermore, since the performance of each core is almost ...