Brian,

      I'm using Open MPI 1.2.6 (r17946).  Could you please check which
version works?  Thanks a lot,
Mi


                                                                       
From: "Brian W. Barrett" <brbarret@open-mpi.org>
Sent by: users-bounces@open-mpi.org
To: Open MPI Users <us...@open-mpi.org>
Cc: Greg Rodgers/Poughkeepsie/IBM@IBMUS, Brad Benton/Austin/IBM@IBMUS
Date: 08/25/2008 01:44 PM
Subject: Re: [OMPI users] RDMA over IB between heterogeneous processors with different endianness
Reply-To: Open MPI Users <users@open-mpi.org>




On Mon, 25 Aug 2008, Mi Yan wrote:

> Does Open MPI always use the SEND/RECV protocol between heterogeneous
> processors with different endianness?
>
> I tried setting btl_openib_flags to 2, 4, and 6 respectively to allow
> RDMA, but the bandwidth between the two heterogeneous nodes is slow, the
> same as when btl_openib_flags is 1. It seems to me SEND/RECV is always
> used no matter what btl_openib_flags is. Can I force Open MPI to use RDMA
> between x86 and PPC? I only transfer MPI_BYTE, so we do not need
> endianness support.
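
[For reference, the flag values tried above are a bitmask: in Open MPI's
BTL flags, 1 enables send/recv, 2 enables RDMA put, and 4 enables RDMA get,
so 6 means "put and get, no send/recv". A minimal launch sketch, assuming a
two-node run and a bandwidth benchmark binary (here the hypothetical
./osu_bw; substitute your own test program and hostnames):]

```shell
# Sketch: force the openib BTL to RDMA-only (put | get = 2 | 4 = 6).
# Exact behavior depends on the Open MPI release; verify with ompi_info.
mpirun --mca btl openib,self \
       --mca btl_openib_flags 6 \
       -np 2 --host x86node,ppcnode ./osu_bw
```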

Which version of Open MPI are you using?  In recent versions (I don't
remember exactly when the change occurred, unfortunately), the decision
between send/recv and RDMA was moved from being based solely on the
architecture of the remote process to being based on both the architecture
and the datatype.  It's possible this has been broken again, but there
definitely was some window (possibly only on the development trunk) when
that worked correctly.
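
[A quick way to answer the version question and inspect the BTL's flag
settings on each node; a sketch only, since ompi_info's parameter names
and output format vary across releases:]

```shell
# Report the installed Open MPI version (should print 1.2.6 for the
# build described above).
ompi_info --version

# List the openib BTL's MCA parameters and show the effective flags value.
ompi_info --param btl openib | grep -i flags
```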

Brian
_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users
