If I understand you correctly, it sounds like MPI -- overall -- is new to you.

If that's the case, here's the 2-minute overview: MPI is communications 
middleware, typically used for parallel applications.  MPI, as an API, is 
agnostic of the underlying network; hence, it can be used with TCP sockets, 
Ethernet, OpenFabrics-based networks, etc., without the upper-layer 
application being aware of the differences between these networks.  

There are lots of MPI-based applications out there, including a bunch of 
benchmarks and tests.  You might want to get the Intel MPI Benchmarks (aka 
"IMB") and compile and run them with Open MPI over your modified OFED stack. 
The IMB suite is sufficiently complex, and MPI implementations themselves are 
sufficiently complex and different from each other, that running IMB with 4 or 
8 processes will exercise your OFED stack in many different ways; that's 
probably why MPI was recommended to you.
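
For example (the IMB build details vary from release to release, so treat the 
build step as a sketch: you compile the suite with the MPI wrapper compiler, 
typically mpicc, using whichever makefile your IMB version ships), a run over 
8 processes could look like this; "my_hostfile" is a placeholder explained 
further below:

mpirun -np 8 --hostfile my_hostfile ./IMB-MPI1 PingPong Sendrecv Allreduce

Running IMB-MPI1 with no benchmark names runs its whole suite of 
point-to-point and collective tests, which gives a new network stack a good 
workout.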

The two big open source MPI implementations -- Open MPI and MPICH2 -- both come 
with "wrapper" compilers (mpicc, mpic++, mpif77, mpif90, ...etc.) that add all 
the relevant compiler/linker flags to the command line to compile/link your 
application.  Hence, in Makefiles, you can typically remove all MPI-related 
-I, -L, and -l flags and just use the wrapper compilers.  For example:

mpicc -c foo.c
mpicc -c bar.c
mpicc foo.o bar.o -o my_mpi_application
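
(As an aside, and purely as an illustration -- this is not your code, just a 
trivial stand-in for what a foo.c compiled this way might contain -- a minimal 
MPI program looks something like this:)

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank (0..size-1) */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();                        /* shut down the MPI runtime */
    return 0;
}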

You then use "mpirun" to launch your application in parallel.  For example:

mpirun -np 8 --hostfile my_hostfile my_mpi_application
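
The hostfile is just a text file listing the nodes to launch on.  For Open 
MPI, it looks something like this (the hostnames and slot counts here are 
made up -- substitute your own nodes):

node01 slots=4
node02 slots=4

"slots" tells Open MPI how many MPI processes it may start on each node 
(typically the number of cores).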

See the mpirun(1) man page for more details, and the FAQ.  Each MPI 
implementation's mpirun is typically different from the others (e.g., Open 
MPI's mpirun has different CLI options than MPICH2's mpirun).

Open MPI also supports run-time customization of the underlying MPI processing 
engine via "MCA" parameters.  You can pass MCA params via the command line, 
the environment, or files (see the FAQ).  Open MPI should probably pick the 
OpenFabrics-based transport by default on your machines, but just to be sure, 
you can force the use of the "openib" BTL (byte transfer layer) in Open MPI 
thusly:

mpirun -np 8 --hostfile my_hostfile --mca btl openib,sm,self my_mpi_application

openib = OFED-based transport (for MPI procs on remote servers)
sm = shared memory-based transport (for MPI procs on the same server)
self = process loopback
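
Equivalently -- just to illustrate the environment-variable form mentioned 
above -- any MCA parameter can be set by exporting OMPI_MCA_<param name> 
before invoking mpirun:

export OMPI_MCA_btl=openib,sm,self
mpirun -np 8 --hostfile my_hostfile my_mpi_application

Values given on the mpirun command line override the environment, which in 
turn overrides values set in files.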

That should be enough to get you going; good luck.



On Sep 2, 2011, at 7:17 AM, bhimesh akula wrote:

> Hi ,
> 
> We developed a new OFED stack to meet the requirements of our new product.
> Now we need to check the functionality of the new OFED stack using MPI, on a
> multi-node setup.  The problem is that we have no idea how to use the
> Open MPI tool to check our stack.  I went through the site
> "http://www.open-mpi.org/", but it only explains how to run MPI applications,
> whereas we need our new stack to be checked using MPI.
> 
> We checked our new stack using the qperf tool, but MPI is more recommended.
> We want to know how to run MPI the way we used qperf: we ran qperf as a
> server on one node and as a client on another node, then ran all the qperf
> test cases from the client to see the functionality and performance of OFED.
> How can we use the Open MPI tool in the same way to test the new stack?
> 
> I think the problem is conveyed well; please get back to me on this as soon
> as possible.
> 
> Thanks & regards,
> Punya Bhimesh.


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/

