On Jan 26, 2013, at 11:18 PM, #YEO JINGJIE# wrote:
> So I should run the job as:
>
> /usr/lib64/openmpi/bin/mpirun -mca mca_component_show_load_errors 1 -n 16
> /opt/lammps-21Jan13/lmp_linux < zigzag.in
>
> Is that correct?
Yes, thanks - though for our purposes, why don't you simplify it to:

/usr/lib64/openmpi/bin/mpirun -n 16 /opt/lammps-21Jan13/lmp_linux < zigzag.in

2 percent?
Have you logged into a compute node and run a simple top when the job is
running?
Are all the processes distributed across the CPU cores?
Are the processes being pinned properly to a core? Or are they hopping from
core to core?
Also make SURE all nodes have booted with all cores online.
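For example, a quick way to check (a sketch assuming a Linux compute node; lmp_linux is the process name from the command above, and the binding options assume they are present in your Open MPI 1.6 build):

ps -eo pid,psr,pcpu,comm | grep lmp_linux   # psr = the core each process is currently on
grep -c ^processor /proc/cpuinfo            # cores the kernel sees online

You can also have mpirun pin the ranks and print the placement itself:

/usr/lib64/openmpi/bin/mpirun --bind-to-core --report-bindings -n 16 /opt/lammps-21Jan13/lmp_linux < zigzag.in

Running ps a few times in a row shows whether the psr column stays constant (pinned) or changes (hopping from core to core).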
Dear developers, I have run into a rather weird problem. Our newly assembled
cluster is running openmpi-1.6.3. The HPCC test reports no errors, and the
network latency and bandwidth are quite normal, but the GFlops from HPL and
MPIFFT are far too low, just 2% of the theoretical value.
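(For reference, a back-of-the-envelope sketch of how the theoretical peak is usually computed, with hypothetical numbers - say a node with 16 cores at 2.6 GHz doing 8 double-precision FLOPs per cycle:

echo "16 * 2.6 * 8" | bc   # = 332.8 GFlops theoretical peak for one such node

A well-tuned cluster typically sustains a large fraction of that on HPL, which is why 2% points at a configuration problem rather than a hardware limit.)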
So I should run the job as:
/usr/lib64/openmpi/bin/mpirun -mca mca_component_show_load_errors 1 -n 16
/opt/lammps-21Jan13/lmp_linux < zigzag.in
Is that correct?
Regards,
Jingjie Yeo
Ph.D. Student
School of Mechanical and Aerospace Engineering
Nanyang Technological University, Singapore