> -Original Message-
> From: users-boun...@open-mpi.org
> [mailto:users-boun...@open-mpi.org] On Behalf Of Keith Refson
> Sent: Tuesday, July 18, 2006 6:21 AM
> To: Open MPI Users
> Subject: Re: [OMPI users] Openmpi, LSF and GM
>
Dear Brian,
Thanks for the help.
Brian Barrett wrote:
> > The arguments you want would look like:
> >
> > mpirun -np X -mca btl gm,sm,self -mca btl_base_verbose 1 \
> >     -mca btl_gm_debug 1
Aha. I think I had misunderstood the syntax slightly, which explains why
I previously saw no debugging output.
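For the archives, a minimal sketch of the working invocation, with each MCA
parameter written as a `-mca <key> <value>` pair (the process count X and the
program name are placeholders, not taken from this thread):

```sh
# Select the GM, shared-memory, and self BTLs explicitly,
# and turn on BTL and GM debugging output.
# X and ./my_mpi_program are placeholders.
mpirun -np X \
    -mca btl gm,sm,self \
    -mca btl_base_verbose 1 \
    -mca btl_gm_debug 1 \
    ./my_mpi_program
```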
On Jul 16, 2006, at 6:12 AM, Keith Refson wrote:
The compile of openmpi 1.1 was without problems and
appears to have correctly built the GM btl.
$ ompi_info -a | egrep "\bgm\b|_gm_"
MCA mpool: gm (MCA v1.0, API v1.0, Component v1.1)
MCA btl: gm (MCA v1.0, API v1.0, Component v1.1)
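The egrep filter above matches `gm` as a whole word (via `\b` word boundaries)
as well as any `_gm_` parameter names, so unrelated components such as tcp are
dropped. A stand-alone check of the same pattern, using made-up sample lines
rather than real ompi_info output:

```shell
# '\bgm\b' matches "gm" as a whole word; '_gm_' matches parameter names.
# Underscores are word characters, so '\bgm\b' does NOT match btl_gm_debug,
# but the '_gm_' alternative does. Sample lines are hypothetical.
printf '%s\n' \
  'MCA mpool: gm (MCA v1.0)' \
  'MCA btl: tcp (MCA v1.0)' \
  'param: btl_gm_debug' \
| grep -E '\bgm\b|_gm_'
```

Only the first and third lines survive the filter.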
I'm trying out openmpi for the first time on
a cluster of dual AMD Opterons with Myrinet
interconnect using GM. There are two outstanding
but possibly connected problems: (a) how to interact
correctly with the LSF job manager and (b) how to
use the GM interconnect.