On Mar 5, 2009, at 1:29 PM, Jeff Squyres wrote:
On Mar 5, 2009, at 1:54 AM, Sangamesh B wrote:
The Fortran application I'm using here is the CPMD-3.11.
I don't think the processor is Nehalem:
Intel(R) Xeon(R) CPU X5472 @ 3.00GHz
The installation procedure was the same on both the clusters. I've not set mpi_affinity.
This is a memory intensive application, but this job was not using th
It would also help to have some idea of how you installed and ran this -
e.g., did you set mpi_paffinity_alone so that the processes would bind to
their processors? That could explain the cpu vs. elapsed time difference, since
binding helps keep the processes from being swapped out as much.
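For example, with Open MPI 1.3 binding can be enabled on the mpirun command
line (just a sketch - cpmd.x and input.inp below are placeholder names for
your executable and input file):

  # cpmd.x / input.inp are placeholders; adjust to your actual run command
  mpirun -np 24 --mca mpi_paffinity_alone 1 ./cpmd.x input.inp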
Ralph
Your Intel processors are, I assume, not the new Nehalem/i7 ones? The older
quad-core ones are seriously memory bandwidth limited when running a memory
intensive application. That might explain why using all 8 cores per node
slows down your calculation.
Why do you get such a difference between cpu and elapsed time?
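As a quick test, it may be worth spreading fewer ranks per node to see whether
memory bandwidth is the bottleneck (a sketch, assuming Open MPI's mpirun;
cpmd.x and input.inp are placeholder names):

  # --bynode round-robins ranks across the 3 nodes, giving 4 per node instead of 8
  mpirun -np 12 --bynode ./cpmd.x input.inp

If the elapsed time improves markedly with 4 ranks per node, memory bandwidth
contention on the older quad-core Xeons is the likely cause.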
Hi all,
Now LAM-MPI is also installed, and the Fortran application has been tested by
running it with LAM-MPI.
But LAM-MPI is still performing worse than Open MPI.
No. of nodes: 3, cores per node: 8, total cores: 3*8 = 24
CPU TIME :     1 HOURS 51 MINUTES 23.49 SECONDS
ELAPSED TIME : 7 HOURS 28 MINUTES
Dear All,
A Fortran application is installed with Open MPI-1.3 + Intel
compilers on a Rocks-4.3 cluster with Intel Xeon dual-socket quad-core
processors @ 3 GHz (8 cores/node).
The time consumed for the different tests over Gigabit-connected
nodes is as follows (each node has 8 GB of memory):