[OMPI users] Running MPI apps (example apps in OpenMPI) on OFED stack without IPoIB

2011-10-13 Thread ramu
Hi,
I am trying to run various MPI apps (e.g., the example apps in Open MPI, IMB, etc.)
on my OFED setup (two hosts with CentOS 5.4 connected back to back using Mellanox
InfiniBand hardware).  I want to run these MPI apps without IPoIB, i.e., using
InfiniBand verbs directly.
Below is the command I have tried.
"mpirun --prefix /usr/local/ -np 2 --mca btl openib,self --mca
btl_openib_cpc_include rdmacm --mca btl_openib_if_include "mthca_0:1" -hostfile
tmp_host_file ring_c"
With this I get the following error:
"At least one pair of MPI processes are unable to reach each other for
MPI communications. This means that no Open MPI device has indicated
that it can be used to communicate between these processes. This is
an error; Open MPI requires that all MPI processes be able to reach
each other. This error can sometimes be the result of forgetting to
specify the "self" BTL."

I am using OpenMPI version 1.4.3

Is it possible to avoid IPoIB while running MPI applications on top of the OFED
stack? If yes, what am I missing in the above command? Please advise.
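For what it's worth, avoiding IPoIB entirely usually also means avoiding the rdmacm CPC, because rdmacm resolves peer addresses through IPoIB interfaces. A minimal sketch using Open MPI's default out-of-band (TCP) connection setup instead; the device name `mthca0` and the hostfile contents are placeholders, not details confirmed by the thread:

```shell
# Sketch: select the verbs BTL and Open MPI's "oob" connection setup, so the
# queue-pair data is exchanged over plain TCP and no IPoIB is needed on the
# IB ports. "mthca0" is a placeholder device name (check `ibv_devices`).
mpirun --prefix /usr/local -np 2 \
       --mca btl openib,self \
       --mca btl_openib_cpc_include oob \
       --mca btl_openib_if_include mthca0 \
       -hostfile tmp_host_file ./ring_c
```

Note that the oob exchange still needs some ordinary IP path between the nodes (e.g., the Ethernet management network), just not IPoIB on the InfiniBand ports; the MPI traffic itself goes over verbs.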



[OMPI users] running osu mpi benchmark tests on Infiniband setup

2011-10-19 Thread ramu
Hi, 
I am trying to run the OSU MPI benchmark tests on an InfiniBand setup (two hosts
connected back-to-back via Mellanox hardware).  I am using the command below:
"mpirun --prefix /usr/local/ -np 2 --mca btl openib,self -H 192.168.4.91 -H
192.168.4.92 --mca orte_base_help_aggregate 0 --mca btl_openib_cpc_include oob
/root/osu_benchmarks-3.1.1/osu_latency"
But I get the following error:
"[Isengard:05030] *** An error occurred in MPI_Barrier
[Isengard:05030] *** on communicator MPI_COMM_WORLD
[Isengard:05030] *** MPI_ERR_IN_STATUS: error code in status
[Isengard:05030] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
[Rohan:05010] *** An error occurred in MPI_Barrier
[Rohan:05010] *** on communicator MPI_COMM_WORLD
[Rohan:05010] *** MPI_ERR_IN_STATUS: error code in status
[Rohan:05010] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
"

Am I missing anything in the above command? Please advise.
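One thing that stands out in the command above is the repeated `-H` flag; the form I have usually seen is a single comma-separated host list. A hedged sketch (the IPs and benchmark path are kept from the thread; whether this resolves the MPI_Barrier failure is an assumption):

```shell
# Sketch: pass both hosts in one comma-separated -H list, one rank per host.
# The "oob" CPC is the default in Open MPI 1.4, so that setting can be omitted.
mpirun --prefix /usr/local -np 2 \
       --mca btl openib,self \
       -H 192.168.4.91,192.168.4.92 \
       /root/osu_benchmarks-3.1.1/osu_latency
```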

Regards,
Ramu



[OMPI users] Technical details of various MPI APIs

2011-10-21 Thread ramu
Hi,
I am trying to learn more about the technical details of the MPI APIs implemented
in Open MPI (e.g., MPI_Init(), MPI_Barrier(), MPI_Send(), MPI_Recv(),
MPI_Waitall(), MPI_Finalize()) when the MPI processes run on an InfiniBand (OFED)
cluster.  I mean: what messages are exchanged between MPI processes over IB, how
do the processes identify each other and what do they exchange to do so, and what
is needed to trigger data traffic?  Is there any document or link available that
describes these details?  Please advise.

Thanks & Regards,
Ramu 



[OMPI users] Regarding OpenMPI

2011-06-01 Thread Bhargava Ramu Kavati
Hi,
I am trying to run MPI applications using Open MPI on an OFED cluster (over
IB).  I am trying to find out whether Open MPI supports a transport interface
based on the libibverbs layer in OFED (I mean, one that does not use a
connection manager in OFED)?

Thank you in advance.

Thanks & Regards,
Ramu


Re: [OMPI users] Regarding OpenMPI

2011-06-01 Thread Bhargava Ramu Kavati
Hi Jeff,
Thank you for your quick response.
I have another query: does Open MPI depend on the Subnet Manager/Subnet
Administration components of OFED?  (I mean, does Open MPI require any
services from the Subnet Manager/Subnet Administration components in OFED
without which it cannot run?)

Thanks & Regards,
Ramu

On Wed, Jun 1, 2011 at 6:28 PM, Jeff Squyres  wrote:

> On Jun 1, 2011, at 8:49 AM, Bhargava Ramu Kavati wrote:
>
> > I am trying to run MPI applications using OpenMPI in OFED Cluster (over
> IB).  I am trying to find whether OpenMPI supports a transport interface
> which is based on libibverbs layer in OFED (I mean, which does not use
> connection manager in OFED) ?
>
> You're asking two different questions.
>
> 1. Does Open MPI use the native verbs interface in OFED?
>
> Yes.
>
> 2. Does Open MPI use one of the OFED connection managers?
>
> For IB, Open MPI can use the RDMA connection manager, but it does not have
> to.  It defaults to not using it (instead, it exchanges IB/verbs connection
> data via TCP sockets).
>
> --
> Jeff Squyres
> jsquy...@cisco.com
> For corporate legal information go to:
> http://www.cisco.com/web/about/doing_business/legal/cri/
>
>
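To make the distinction in Jeff's answer concrete, here is a hedged sketch of how the two connection setups are typically selected on the mpirun command line (the hostfile name and binary are placeholders). Note the trade-off: the default oob exchange only needs an ordinary TCP path between nodes, while rdmacm requires IPoIB addresses to be configured:

```shell
# Default: verbs connection (queue-pair) data exchanged out-of-band over TCP.
mpirun -np 2 --mca btl openib,self \
       --mca btl_openib_cpc_include oob \
       -hostfile hosts ./a.out

# Alternative: use the RDMA connection manager (requires IPoIB on the IB ports).
mpirun -np 2 --mca btl openib,self \
       --mca btl_openib_cpc_include rdmacm \
       -hostfile hosts ./a.out
```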