Hi Sangamesh,
I'd look into making sure that the node you are using is not running
anything else in parallel.
Make sure you allocate a whole node and that it is clean of previous jobs.
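One quick way to check that the node is idle before the job starts (standard
Linux commands, nothing scheduler- or Open MPI-specific; the node name below is
just a placeholder):

    ssh node01                     # hypothetical node name
    uptime                         # load average should be close to zero on an idle node
    ps aux --sort=-%cpu | head     # no stray compute processes left over from earlier jobs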
Best,
INK
Hello INK,
I've run a couple of jobs with different mpirun options.
CRITERIA 1:
On one of the nodes connected to the InfiniBand network:
Job No 1:
mpirun command:
/opt/mpi/openmpi/1.3/intel/bin/mpirun --mca btl ^openib -np $NSLOTS -hostfile $TMPDIR/machines /opt/apps/cpmd/3.11/ompi-atlas/SOUR
Hi Sangamesh,
As far as I can tell there should be no difference if you run CPMD on a
single node, whether with or without IB. One easy thing you could do is
repeat your runs on the InfiniBand node(s) with and without InfiniBand,
using --mca btl ^tcp and --mca btl ^openib respectively. But si
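As a sketch only (paths and host file taken from your earlier command; the CPMD
binary and input are placeholders, and I am assuming the sm/self BTLs remain
available for on-node communication):

    # run with InfiniBand (TCP disabled)
    /opt/mpi/openmpi/1.3/intel/bin/mpirun --mca btl ^tcp -np $NSLOTS -hostfile $TMPDIR/machines <cpmd binary> <input file>

    # run without InfiniBand (openib disabled)
    /opt/mpi/openmpi/1.3/intel/bin/mpirun --mca btl ^openib -np $NSLOTS -hostfile $TMPDIR/machines <cpmd binary> <input file>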
Hello Ralph & Jeff,
This is the same issue, but this time the job is running on a single node.
The two systems on which the jobs are run have the same hardware/OS
configuration. The only differences are:
One node has 4 GB of RAM and is part of the InfiniBand-connected nodes.
The other node ha
It depends on the characteristics of the nodes in question. You
mention the CPU speeds and the RAM, but there are other factors as
well: cache size, memory architecture, how many MPI processes you're
running, etc. Memory access patterns, particularly across UMA
machines like Clovertown an
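In case it is useful, the factors I mention above can be checked directly on
each node with standard Linux tools (nothing Open MPI-specific; exact /proc
fields vary by kernel, and numactl may not be installed everywhere):

    grep "model name\|cache size" /proc/cpuinfo | sort | uniq -c   # CPU model and cache size per core
    free -m                                                        # total and free memory in MB
    numactl --hardware                                             # memory layout, if numactl is available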