Dear All,

    A Fortran application is installed with Open MPI 1.3 and the Intel
compilers on a Rocks 4.3 cluster with dual-socket, quad-core Intel Xeon
processors @ 3 GHz (8 cores/node).

    The times taken by different test runs over Gigabit-Ethernet-connected
nodes are as follows (each node has 8 GB of memory):

No. of nodes used: 6   No. of cores used/node: 4   Total MPI processes: 24
       CPU TIME :    1 HOURS 19 MINUTES 14.39 SECONDS
   ELAPSED TIME :    2 HOURS 41 MINUTES  8.55 SECONDS

No. of nodes used: 6   No. of cores used/node: 8   Total MPI processes: 48
       CPU TIME :    4 HOURS 19 MINUTES 19.29 SECONDS
   ELAPSED TIME :    9 HOURS 15 MINUTES 46.39 SECONDS

No. of nodes used: 3   No. of cores used/node: 8   Total MPI processes: 24
       CPU TIME :    2 HOURS 41 MINUTES 27.98 SECONDS
   ELAPSED TIME :    4 HOURS 21 MINUTES  0.24 SECONDS

But the same application performs well on another Linux cluster with
LAM/MPI 7.1.3:

No. of nodes used: 6   No. of cores used/node: 4   Total MPI processes: 24
       CPU TIME :    1 HOURS 30 MINUTES 37.25 SECONDS
   ELAPSED TIME :    1 HOURS 51 MINUTES 10.00 SECONDS

No. of nodes used: 12  No. of cores used/node: 4   Total MPI processes: 48
       CPU TIME :    0 HOURS 46 MINUTES 13.98 SECONDS
   ELAPSED TIME :    1 HOURS  2 MINUTES 26.11 SECONDS

No. of nodes used: 6   No. of cores used/node: 8   Total MPI processes: 48
       CPU TIME :    1 HOURS 13 MINUTES  9.17 SECONDS
   ELAPSED TIME :    1 HOURS 47 MINUTES 14.04 SECONDS

So there is a huge difference between CPU time and elapsed time for the Open MPI jobs.

Note: On the same cluster, Open MPI gives better performance on the
InfiniBand nodes.

What could be the problem with Open MPI over Gigabit Ethernet?
Do any flags need to be used?
Or is Open MPI simply not a good choice over Gigabit Ethernet?
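
For example, would explicitly selecting the TCP BTL and turning on
processor affinity make a difference? Something along these lines (just a
guess at the relevant MCA parameters; the interface name eth0, the
hostfile and the executable name are placeholders, not our actual setup):

    # use only the TCP, shared-memory and self BTLs, restrict TCP traffic
    # to eth0, and pin each MPI process to its own core (eth0 is assumed)
    mpirun -np 24 --hostfile hosts \
           --mca btl tcp,sm,self \
           --mca btl_tcp_if_include eth0 \
           --mca mpi_paffinity_alone 1 \
           ./my_app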

Thanks,
Sangamesh
