Hi everyone,

This is going to be a long email, so please bear with me. The example
programs are taken from the lam-mpi.org site ...

My ultimate goal is to get Open MPI working with the OpenIB stack. First, I
installed LAM/MPI; I know it has no OpenIB support, but it is still relevant
to some of the questions I will ask. Here is the setup I have:

I have two machines, pe830-01 and pe830-02. Both have an Ethernet interface
and an HCA interface. The IP addresses are:
              eth0            ib0
 pe830-01     10.12.4.32      192.168.1.32
 pe830-02     10.12.4.34      192.168.1.34

When I downloaded and installed LAM/MPI, things seemed to work just fine. For
example:
   $  cat /path/to/lamhostsfile
   192.168.1.34
   192.168.1.32

   $ lamboot -v -ssi boot rsh -ssi rsh_agent "ssh -x"  /path/to/lamhostsfile
   LAM 7.1.2/MPI 2 C++/ROMIO - Indiana University

   n-1<6456> ssi:boot:base:linear: booting n0 (192.168.1.34)
   n-1<6456> ssi:boot:base:linear: booting n1 (192.168.1.32)
   n-1<6456> ssi:boot:base:linear: finished

   $ lamnodes
   n0      pe830-02.domain.com:1:origin,this_node
   n1      192.168.1.32:1:

   $ /usr/local/lam/bin/mpirun C /path/to/hello_world
   Hello, world, I am 0 of 2 and this is on : pe830-02.
   Hello, world, I am 1 of 2 and this is on: pe830-01.

   $  /usr/local/lam/bin/mpirun C /path/to/broadcast
   Enter the vector length: 4
   i am: 0 , and i have 2 vector elements
   i am: 1 , and i have 2 vector elements
   [0] 4.000000
   [0] 4.000000
   [0] 4.000000
   [0] 4.000000

   So this worked even though the lamhosts file is configured to use the ib0
   interfaces. I further verified with tcpdump that none of this traffic
   went over eth0.
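
   (For reference, the check was roughly the following, run while the job
   was going; I am reconstructing the exact flags from memory:)

   $ tcpdump -n -i eth0 host 10.12.4.34     # stayed quiet during the run
   $ tcpdump -n -i ib0 host 192.168.1.34    # showed the traffic instead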

   Anyhow, if I change the lamhosts file to use the eth0 IPs, things work
   just the same with no issues, and in that case I do see traffic on eth0
   with tcpdump.

   Now, when I installed and used Open MPI, things didn't go as easily. Here
   is what happens. After recompiling the sources with the mpicc that comes
   with Open MPI:

   $ /usr/local/openmpi/bin/mpirun  --prefix /usr/local/openmpi --mca
   pls_rsh_agent ssh --mca btl tcp -np 2 --host 10.12.4.34,10.12.4.32
   /path/to/hello_world
   Hello, world, I am 0 of 2 and this is on : pe830-02.
   Hello, world, I am 1 of 2 and this is on: pe830-01.

   So far so good: using the eth0 interfaces, hello_world works just fine.
   Now, when I try the broadcast program:

   $ /usr/local/openmpi/bin/mpirun  --prefix /usr/local/openmpi --mca
   pls_rsh_agent ssh --mca btl tcp -np 2 --host 10.12.4.34,10.12.4.32
   /path/to/broadcast

   It just hangs there; it never shows me the "Enter the vector length: "
   prompt. Since I know the program's behavior, I enter a number anyway:

   10
   Enter the vector length: i am: 0 , and i have 5 vector elements
   i am: 1 , and i have 5 vector elements
   [0] 10.000000
   [0] 10.000000
   [0] 10.000000
   [0] 10.000000
   [0] 10.000000
   [0] 10.000000
   [0] 10.000000
   [0] 10.000000
   [0] 10.000000
   [0] 10.000000

   So that's the first bump with Open MPI. Now, if I try to use the ib0
   interfaces instead of the eth0 ones, I get:

   $  /usr/local/openmpi/bin/mpirun  --prefix /usr/local/openmpi --mca
   pls_rsh_agent ssh --mca btl openib -np 2 --host 192.168.1.34,192.168.1.32
   /path/to/hello_world
   --------------------------------------------------------------------------
   No available btl components were found!

   This means that there are no components of this type installed on your
   system or all the components reported that they could not be used.

   This is a fatal error; your MPI process is likely to abort.  Check the
   output of the "ompi_info" command and ensure that components of this
   type are available on your system.  You may also wish to check the
   value of the "component_path" MCA parameter and ensure that it has at
   least one directory that contains valid MCA components.

   --------------------------------------------------------------------------
   [pe830-01.domain.com:05942]

   I know: it thinks it doesn't have an openib component installed. However,
   ompi_info on both machines says otherwise:

   $ ompi_info | grep openib
   MCA mpool: openib (MCA v1.0, API v1.0, Component v1.0.2)
   MCA btl: openib (MCA v1.0, API v1.0, Component v1.0.2)

   Now the questions are:

   1 - In the case of using LAM/MPI over the ib0 interfaces: does LAM/MPI
   automatically just use IPoIB?
   2 - Is there a tcpdump-like utility to dump the traffic on InfiniBand HCAs?
   3 - In the case of Open MPI, does the "--mca btl" option have to be passed
   every time (see also the config-file note below)? For example,

   $ /usr/local/openmpi/bin/mpirun  --prefix /usr/local/openmpi --mca
   pls_rsh_agent ssh --mca btl tcp -np 2 --host 10.12.4.34,10.12.4.32
   /path/to/hello_world

   works just fine, but the same command without the "--mca btl tcp" bit
   gives this error:

   --------------------------------------------------------------------------
   It looks like MPI_INIT failed for some reason; your parallel process is
   likely to abort.  There are many reasons that a parallel process can
   fail during MPI_INIT; some of which are due to configuration or environment
   problems.  This failure appears to be an internal failure; here's some
   additional information (which may only be relevant to an Open MPI
   developer):

     PML add procs failed
     --> Returned value -2 instead of OMPI_SUCCESS
   --------------------------------------------------------------------------
   *** An error occurred in MPI_Init
   *** before MPI was initialized
   *** MPI_ERRORS_ARE_FATAL (goodbye)

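   (From the Open MPI FAQ I gather that MCA parameters can also be put in a
   per-user file so they don't have to go on every command line; I am
   assuming something like the following is the intended way, with "self"
   added for loopback:)

   $ cat $HOME/.openmpi/mca-params.conf
   # default BTL list for every mpirun, instead of passing --mca btl each time
   btl = tcp,self

   (If that is the recommended approach, question 3 mostly answers itself;
   please correct me if not.)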

   4 - How come the behavior of broadcast.c is different on Open MPI than it
   is on LAM/MPI? (My buffering guess follows below.)
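
   (My only guess so far is stdout buffering: the prompt comes from a printf
   with no trailing newline, and if Open MPI forwards stdout through a
   block-buffered pipe, the prompt would sit in the buffer until more output
   arrives. Flushing right after the prompt, as below, might make the two
   behave the same; this is a guess I have not verified:)

	printf("Enter the vector length: ");
	fflush(stdout);   /* force the prompt out even if stdout is block-buffered */
	scanf("%d", &N);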

   5 - Any ideas as to why I am getting the "no btl components" error when I
   try to use openib, even though ompi_info shows it? If it helps any
   further, I have the following openib modules loaded:

   $ /sbin/lsmod | grep ib_
   ib_mthca              125141  0
   ib_ipoib               39493  0
   ib_uverbs              39145  0
   ib_umad                17009  0
   ib_ucm                 18373  0
   ib_sa                  13429  1 ib_ipoib
   ib_cm                  44581  1 ib_ucm
   ib_mad                 42345  4 ib_mthca,ib_umad,ib_sa,ib_cm
   ib_core                43073  8 ib_mthca,ib_ipoib,ib_uverbs,ib_umad,ib_ucm,ib_sa,ib_cm,ib_mad
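
   (In case it helps, I assume "ompi_info --param btl openib" will list the
   openib BTL's parameters if the component is really loadable; I can post
   that output if it is useful:)

   $ ompi_info --param btl openib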

   Thanks in advance for all help.

   gurhan

   PS: In hello_world.c attachment hostnames are hardcoded for each box.
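
   (Both attachments below were rebuilt with the Open MPI wrapper compiler
   before the runs above, roughly like this:)

   $ /usr/local/openmpi/bin/mpicc -o broadcast broadcast.c
   $ /usr/local/openmpi/bin/mpicc -o hello_world hello_world.c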
/* broadcast.c -- this file is from the lam-mpi.org website */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
	int rank, size, myn, i, N;
	/* vector is only allocated on the root; NULL elsewhere */
	double *vector = NULL, *myvec = NULL, mysum, total;


	MPI_Init(&argc, &argv);

	MPI_Comm_rank(MPI_COMM_WORLD, &rank);
	MPI_Comm_size(MPI_COMM_WORLD, &size);

	/* the root process reads the vector size and fills the vector */
	if (rank == 0) {
		printf("Enter the vector length: ");
		/* note: no fflush(stdout) here, so the prompt can be held in a
		   block-buffered stdout (see question 4 above) */
		scanf("%d", &N);
		vector = (double *)malloc(sizeof(double) * N);
		for (i = 0; i < N; i++) {
			vector[i] = 1.0;
		}
		myn = N / size;	/* assumes size divides N evenly */
	}

	/* printf ("rank: %d, size: %d \n", rank, size); */
	/* broadcast the vector size that's local to each process */
	MPI_Bcast(&myn, 1, MPI_INT, 0, MPI_COMM_WORLD);
	/* allocate local vector size in each process */
	myvec = (double *)malloc(sizeof(double)*myn);
	/* Scatter the vector to all processes */
	MPI_Scatter(vector, myn, MPI_DOUBLE, myvec, myn, MPI_DOUBLE, 0, MPI_COMM_WORLD);

	printf("i am: %d , and i have %d vector elements\n", rank, myn);
	/* the sum of all elements of the local vector in each process */
	for ( i=0,mysum=0; i < myn; i++ ){
		mysum += myvec[i];
	}

	/* reduce all to one to get the global sum */
	MPI_Allreduce(&mysum, &total, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

	/* global sum * local vector */
	for (i=0; i<myn; i++) {
		myvec[i] *= total;
	}

	/* Gather the local vector in the root proc. */
	MPI_Gather(myvec, myn, MPI_DOUBLE, vector, myn, MPI_DOUBLE, 0, MPI_COMM_WORLD);

	if ( rank == 0 ) {
		for (i=0;i<N;i++) {
			printf("[%d] %f\n", rank, vector[i]);
		}
	}

	/* release the buffers (vector was only allocated on the root) */
	free(myvec);
	if (rank == 0) {
		free(vector);
	}

	MPI_Finalize();
	return 0;
}
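
/* hello_world.c -- hostname hardcoded per box (see PS) */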

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
	int rank, size, rc;
	char host[] = "pe830-02";	/* hardcoded; the copy on the other box says "pe830-01" */


	rc = MPI_Init(&argc, &argv);
	if ( rc != MPI_SUCCESS ) {
		printf("an error occurred in mpi_init\n");
	}
	MPI_Comm_size(MPI_COMM_WORLD, &size);
	MPI_Comm_rank(MPI_COMM_WORLD, &rank);
	printf("Hello, world, I am %d of %d and this is on : %s.\n", rank, size, host);
	MPI_Finalize();
	return 0;
}
