Re: [OMPI users] Openmpi, LSF and GM

2006-07-18 Thread Jeff Squyres (jsquyres)
> -----Original Message-----
> From: users-boun...@open-mpi.org
> [mailto:users-boun...@open-mpi.org] On Behalf Of Keith Refson
> Sent: Tuesday, July 18, 2006 6:21 AM
> To: Open MPI Users
> Subject: Re: [OMPI users] Openmpi, LSF and GM
>
> The argu

Re: [OMPI users] Openmpi, LSF and GM

2006-07-18 Thread Keith Refson
Dear Brian, Thanks for the help. Brian Barrett wrote:
> The arguments you want would look like:
>
>   mpirun -np X -mca btl gm,sm,self -mca btl_base_verbose 1 -mca btl_gm_debug 1
Aha. I think I had misunderstood the syntax slightly, which explains why I previously saw no debugging
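The quoted command selects the GM, shared-memory, and loopback transports explicitly and turns on BTL debugging output. A minimal runnable sketch of the same invocation, assuming a 4-process job; the process count and the executable name ./my_mpi_program are placeholders rather than details from the thread:

  # pick the Myrinet/GM, shared-memory and self BTLs explicitly, and ask
  # the BTL framework and the gm component to report what they are doing
  mpirun -np 4 \
         -mca btl gm,sm,self \
         -mca btl_base_verbose 1 \
         -mca btl_gm_debug 1 \
         ./my_mpi_program

If the gm component cannot be loaded or selected, the verbose output should make that visible rather than silently falling back to TCP.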

Re: [OMPI users] Openmpi, LSF and GM

2006-07-16 Thread Brian Barrett
On Jul 16, 2006, at 6:12 AM, Keith Refson wrote:
The compile of openmpi 1.1 was without problems and appears to have correctly built the GM btl.
$ ompi_info -a | egrep "\bgm\b|_gm_"
  MCA mpool: gm (MCA v1.0, API v1.0, Component v1.1)
  MCA btl: gm (MCA v1.0, API v1.0
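The egrep above only confirms that the gm mpool and btl components were built. A sketch of how one might also inspect the gm btl's tuning knobs with ompi_info's per-component query (parameter names and defaults vary between releases):

  # list the MCA parameters exposed by the gm BTL component
  ompi_info --param btl gm

  # as in the message above, confirm the gm components are present at all
  ompi_info -a | egrep "\bgm\b|_gm_"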

[OMPI users] Openmpi, LSF and GM

2006-07-16 Thread Keith Refson
I'm trying out openmpi for the first time on a cluster of dual AMD Opterons with Myrinet interconnect using GM. There are two outstanding but possibly connected problems: (a) how to interact correctly with the LSF job manager, and (b) how to use the gm interconnect. The compile of openmpi 1.1 was ...
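As background to the LSF half of the question: Open MPI 1.1 does not detect an LSF allocation automatically, so one common workaround was to build a hostfile from the host list LSF exports to the job and hand it to mpirun. A minimal sketch under that assumption; the executable name is a placeholder and slot-counting details may differ on a given cluster:

  #!/bin/sh
  # inside an LSF job script: LSB_HOSTS holds one hostname per allocated slot
  hostfile=hosts.$LSB_JOBID
  for h in $LSB_HOSTS; do echo "$h"; done > "$hostfile"
  np=$(wc -l < "$hostfile")

  # launch over GM on the hosts LSF granted to this job
  mpirun -np "$np" --hostfile "$hostfile" -mca btl gm,sm,self ./my_mpi_program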