Lee Amy wrote:

> I built a Kerrighed cluster

Like Lenny, I'm not familiar with such clusters, but...

> with 4 nodes so that they look like one big SMP
> machine. Every node has 1 processor with a single core.

> 1) Can MPI programs run on this kind of machine? If
> yes, could anyone show me some examples?

From what I understand, the answer is "yes".

MPI applications can run on clusters. A special case of this is that an MPI application could run on a single node or SMP.

If I understand correctly, Kerrighed is a way of making a cluster look like a single SMP, for the sake of ease of use (the first objective listed on the Kerrighed home page) and other benefits.

So, running MPI on Kerrighed should "just work" -- it's just that most of Kerrighed's benefits don't apply to MPI programs.

Running an MPI program on Kerrighed, if I understand correctly, should just look like running an MPI program on any cluster or on any SMP. If I google "Kerrighed MPI" I see a number of hits. Perhaps those hits might confirm or dispel my naive impression.
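
Since you asked for an example, here is a minimal MPI "hello world" in C -- just a sketch, but any standard MPI example program would do:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);                /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();                        /* shut down MPI */
        return 0;
    }

Compile it with the Open MPI wrapper compiler, e.g. "mpicc hello.c -o hello", and run it with mpirun as described below.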

> 2) In this SMP machine there are 4 processors that I can see. So how do I
> use Open MPI to run some programs on these CPUs?

Nothing special should be needed. E.g., "mpirun -np 4 ./a.out" should run ./a.out using four processes -- presumably one per processor.
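
For example, assuming the hello-world program above was built as ./hello, a session might look like this (the output order may vary between runs):

    $ mpirun -np 4 ./hello
    Hello from rank 0 of 4
    Hello from rank 2 of 4
    Hello from rank 1 of 4
    Hello from rank 3 of 4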

> Though I have read how to
> make a rank file, I still feel confused. Could anyone show me a
> simple rank file example for my cluster?

A rankfile may be unneeded in your case. With any luck, the underlying Kerrighed software will map one process to each processor, without any subsequent migration, so things should "just work" and performance should be as expected. I suggest testing the first question (does an MPI application "just work"?) before worrying about the second (do I have to do anything special to get better performance?).

OMPI rankfiles are for highly specialized tasks -- that is, for explicitly controlling which process runs on which core(s) of which nodes. If I understand correctly, such control is not needed in your situation.
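
That said, since you asked for one: a minimal rankfile for your four nodes might look like the following sketch. The hostnames node0 through node3 are assumptions -- substitute whatever your nodes are actually called:

    rank 0=node0 slot=0
    rank 1=node1 slot=0
    rank 2=node2 slot=0
    rank 3=node3 slot=0

You would pass it to mpirun with the -rf option, e.g. "mpirun -np 4 -rf myrankfile ./hello". Again, though, I doubt you need this.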
