Hi Amjad, list

HPL has some quirks to install, as I just found out.
It can be done, though.
I had used a precompiled version of HPL on my Rocks cluster before,
but that version is no longer being distributed, unfortunately.

Go to the HPL "setup" directory,
and run the script "make_generic".
This will give you a Make.<arch> template file named Make.UNKNOWN.
You can rename this file to "Make.whatever_arch_you_want",
copy it to the HPL top directory,
and edit it,
adjusting the important variable definitions to your system.
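In shell form, the steps above look roughly like this (the tarball
version and the arch name "mycluster" are just examples, pick your own):

```shell
cd hpl-2.0/setup
sh make_generic                     # creates Make.UNKNOWN
cp Make.UNKNOWN ../Make.mycluster   # any arch name you like
cd ..
# edit Make.mycluster for your system, then build:
make arch=mycluster
```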

For instance, where it says:
CC           = mpicc
replace by:
CC           = /full/path/to/OpenMPI/bin/mpicc
and so on for ARCH, TOPdir, etc.
Only some 4-6 variables need to be changed.
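As a sketch, the handful of variable definitions might end up looking
like this (variable names are from the HPL Make template; all the paths
below are placeholders, adjust them to your system):

```make
ARCH         = mycluster
TOPdir       = $(HOME)/hpl-2.0
MPdir        = /full/path/to/OpenMPI
MPinc        = -I$(MPdir)/include
MPlib        = $(MPdir)/lib/libmpi.so
LAdir        = /full/path/to/GotoBLAS
LAlib        = $(LAdir)/libgoto.a
CC           = $(MPdir)/bin/mpicc
LINKER       = $(MPdir)/bin/mpicc
```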

These threads show two examples:

http://marc.info/?l=npaci-rocks-discussion&m=123264688212088&w=2
http://marc.info/?l=npaci-rocks-discussion&m=123163114922058&w=2

You will also need a BLAS (basic linear algebra subprograms) library.
You may have one already on your computer.
Do "locate libblas" and "locate libgoto" to search for it.

If you don't have BLAS, you can download the Goto BLAS library
and install it, which is what I did:

http://www.tacc.utexas.edu/resources/software/

The Goto BLAS is probably the fastest version of BLAS.
However, you can also try the more traditional BLAS from Netlib:

http://www.netlib.org/blas/

I found it easier to work with gcc and gfortran (i.e. both BLAS
and OpenMPI compiled with gcc and gfortran) than to use the PGI or
Intel compilers.  However, I didn't try very hard with PGI and Intel.

Read the HPL TUNING file to learn how to change/adjust
the HPL.dat parameters.
The P x Q product gives you the number of processes for mpiexec.
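For instance, with a 2 x 2 process grid the relevant HPL.dat lines
would look something like this (the N and NB values are only
illustrative, tune them for your machine):

```
1            # of problems sizes (N)
10000        Ns
1            # of NBs
128          NBs
1            # of process grids (P x Q)
2            Ps
2            Qs
```

Since P x Q = 4 here, you would then run something like
"mpiexec -np 4 ./xhpl".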

***

The goal of benchmarking is to measure performance under heavy use
(on a parallel computer using MPI, in the HPL case).
However, beyond the performance measurements themselves,
benchmark programs in general don't produce additional results.
For instance, HPL does LU factorization of matrices and solves
linear systems with an efficient parallel algorithm.
This by itself is great, and is one reason why it is the
Top500 benchmark:
http://en.wikipedia.org/wiki/TOP500 and http://www.top500.org/project/linpack .

However, within HPL the LU decomposition and the
linear system solution are not applied to any particular
concrete problem.
Only the time it takes to run each part of HPL really matters.
The matrices are filled with random numbers, if I remember right;
they are totally synthetic and don't mean anything physical.
Of course LU factorization has tons of applications, but the goal
of HPL is not to explore applications; it is just to measure performance
during the number-crunching linear algebra operations over MPI.
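Just to illustrate the kind of computation HPL times at scale, here is
a toy, pure-Python (serial, no MPI) version of the same idea: LU-style
Gaussian elimination with partial pivoting on a random matrix, solving
A x = b and checking the answer.  HPL does this in parallel on huge
distributed matrices; this sketch of mine only shows the math:

```python
import random

def lu_solve(a, b):
    """Solve A x = b via Gaussian elimination with partial pivoting."""
    n = len(a)
    a = [row[:] for row in a]   # work on copies
    b = b[:]
    for k in range(n):
        # partial pivoting: bring the largest |a[i][k]| to the diagonal
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        a[k], a[p] = a[p], a[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= m * a[k][j]
            b[i] -= m * b[k]
    # back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / a[i][i]
    return x

random.seed(0)
n = 50
a = [[random.random() for _ in range(n)] for _ in range(n)]
xtrue = [1.0] * n                      # known solution
b = [sum(row) for row in a]            # b = A * (1,1,...,1)
x = lu_solve(a, b)
print(max(abs(xi - 1.0) for xi in x) < 1e-8)   # → True
```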

HPL will make the case that your cluster is working,
and you can tell your professors that it works with
a performance that you can measure, some X Gflops (see the xhpl output).
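For a ballpark sanity check of that Gflops figure: the dominant
operation count for LU factorization of an N x N matrix is about
(2/3)*N^3 flops (ignoring lower-order terms).  A small helper of my
own (not part of HPL; xhpl reports the precise number itself) to
convert a problem size and run time into Gflops:

```python
# Rough back-of-the-envelope: relate HPL's problem size N and
# run time to a Gflops figure, using the ~(2/3)*N^3 flop count
# of LU factorization.
def approx_gflops(n, seconds):
    return (2.0 / 3.0) * n**3 / seconds / 1e9

# e.g. a run with N = 10000 that took 60 seconds:
print(round(approx_gflops(10000, 60.0), 2))   # → 11.11
```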

However, if you also want to show your professors
that your cluster can be used for applications,
you may want to run a real-world MPI program, say,
in a research area of your college, be it computational chemistry,
weather forecasting, electrical engineering, structural engineering,
fluid mechanics, genome research, seismology, etc.
Depending on which area it is,
you may find free MPI programs on the Internet.

My two cents,
Gus Correa
---------------------------------------------------------------------
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
---------------------------------------------------------------------

Ankush Kaul wrote:
let me describe what i want to do.

i had taken linux clustering as my final year engineering project, as i am really interested in networking.

to tell the truth, our college does not have any professor with knowledge of clustering.

the aim of our project was just to make a cluster, which we did. now we have to show and explain our project to the professors. so i want something to show them how the cluster works... some program or benchmarking s/w.

hope you got the problem.
and thanks again, we really appreciate your patience.


------------------------------------------------------------------------

_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users
