Dear users,
I am totally stuck using Open MPI. I have two versions on my machine,
1.8.1 and 2.0.0, and neither of them works. When I use mpirun from the *1.8.1
version*, I get the following error:
librdmacm: Fatal: unable to open RDMA device
librdmacm: Fatal: unable to open RDMA device
librdmacm: Fatal:
The problem has been solved with the latest snapshot. Thanks a lot for your
help.
Thanking You
Debendra
On Wed, Aug 17, 2016 at 11:34 PM, Jeff Squyres (jsquyres) <
jsquy...@cisco.com> wrote:
> Debendra -
>
> A fix has been submitted for the v2.0.1 release. Could you give it a try
> with the latest snapshot?
The rdma error sounds like something isn’t right with your machine’s Infiniband
installation.
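One quick, generic thing to check (assuming the libibverbs utilities are
installed on the node; this is not specific to your cluster):

   ibv_devinfo

should list the RDMA devices and their port state. If it reports no devices,
or the ports are not ACTIVE, the librdmacm failure is coming from the
fabric/driver setup rather than from Open MPI itself.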
The cross-version problem sounds like you installed both OMPI versions into the
same location - did you do that?? If so, then that might be the root cause of
both problems. You need to install them into separate locations.
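For example (the prefixes below are only illustrative; use whatever locations
make sense on your system), when building each version from source:

   ./configure --prefix=/opt/openmpi-1.8.1
   make all install

   ./configure --prefix=/opt/openmpi-2.0.0
   make all install

and then point PATH and LD_LIBRARY_PATH at only one of the two installations
at a time.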
Oh great Open MPI Gurus,
I'm slowly trying to learn and transition to 'use mpi_f08'. So, I'm writing
various things and I noticed that this triggers an error:
program hello_world
   use mpi_f08
   implicit none
   type(MPI_Comm) :: comm = MPI_COMM_NULL
end program hello_world
when compiled (Open
On Aug 19, 2016, at 2:30 PM, Matt Thompson wrote:
>
> I'm slowly trying to learn and transition to 'use mpi_f08'. So, I'm writing
> various things and I noticed that this triggers an error:
>
> program hello_world
>    use mpi_f08
>    implicit none
>    type(MPI_Comm) :: comm = MPI_COMM_NULL
>
Hi Devendar,
Thank you for your answer.
Setting MXM_TLS=rc,shm,self does improve the speed of MXM (both latency and
bandwidth):
without MXM_TLS
comm            lat_min        bw_max         bw_max
                pingpong       pingpong       sendrecv
                (us)           (MB/s)         (MB/s)
-----------------------------------------------------
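For reference, the variable is simply exported to the MPI ranks on the mpirun
command line; a rough sketch (the benchmark name is just a placeholder for
whatever is being run):

   mpirun -np 2 -x MXM_TLS=rc,shm,self ./your_benchmark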
On Fri, Aug 19, 2016 at 2:55 PM, Jeff Squyres (jsquyres) wrote:
> On Aug 19, 2016, at 2:30 PM, Matt Thompson wrote:
> >
> > I'm slowly trying to learn and transition to 'use mpi_f08'. So, I'm
> writing various things and I noticed that this triggers an error:
> >
> > program hello_world
> >u
Hi Martin,
The MXM default transport is UD (MXM_TLS=*ud*,shm,self), which is scalable when
running large applications. RC (MXM_TLS=*rc*,shm,self) is recommended
for microbenchmarks and very small-scale applications.
Yes, the max seg size setting is too small.
Did you check any message rate benchmark?
On Aug 19, 2016, at 6:32 PM, Matt Thompson wrote:
>
> 2. The second one is a run-time assignment. You can do that between any
> compatible entities, and so that works.
>
> Okay. This makes sense. I guess I was surprised that MPI_COMM_NULL wasn't a
> constant (or parameter, I guess). But maybe
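To make the distinction concrete, here is a minimal sketch of both forms
(item 2 above is the working one; the comment on why the first form fails
reflects the point that MPI_COMM_NULL is not a named constant in mpi_f08):

  program hello_world
     use mpi_f08
     implicit none
     ! Initialization in the declaration is rejected: MPI_COMM_NULL is not
     ! a Fortran named constant (parameter) in mpi_f08, so it cannot be
     ! used in a constant expression.
     ! type(MPI_Comm) :: comm = MPI_COMM_NULL
     type(MPI_Comm) :: comm
     ! Run-time assignment between compatible entities works.
     comm = MPI_COMM_NULL
  end program hello_world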