Re: [OMPI users] new hwloc error

2015-04-28 Thread Brice Goglin
Hello, Can you build hwloc and run lstopo on these nodes to check that everything looks similar? You have hyperthreading enabled on all nodes, and you're trying to bind processes to entire cores, right? Does 0,16 correspond to two hyperthreads within a single core on these nodes? (lstopo -p should
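(A sketch of what that check might look like, assuming nodes where PU 0 and PU 16 are hyperthread siblings; the indexes below are illustrative, not taken from the cluster in question. With physical indexes, lstopo should nest both PUs under the same core:)

    $ lstopo -p
      ...
      Core P#0
        PU P#0
        PU P#16
      ...

If some nodes instead report PU P#0 and PU P#16 under different cores or sockets, that difference could explain why binding to "core" 0,16 is accepted on some nodes and rejected on others.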

Re: [OMPI users] OpenMPI 1.8.2 problems on CentOS 6.3

2015-04-28 Thread Ralph Castain
Here is what I see in my 1.8.5 build lib directory: lrwxrwxrwx. 1 rhc 15 Apr 28 07:51 libmpi.so -> libmpi.so.1.6.0* lrwxrwxrwx. 1 rhc 15 Apr 28 07:51 libmpi.so.1 -> libmpi.so.1.6.0* -rwxr-xr-x. 1 rhc 1015923 Apr 28 07:51 libmpi.so.1.6.0* So it should just be a link > On Apr 28, 2015
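(For comparison, a sketch of the same check on the system where the failure occurs; /opt/openmpi-1.8.5 is a placeholder for the actual install prefix:)

    $ ls -l /opt/openmpi-1.8.5/lib/libmpi.so*

The libmpi.so.1 -> libmpi.so.1.6.0 link should be present there as well; if only libmpi.so.1.6.0 exists, the runtime loader has nothing matching the libmpi.so.1 name the application asks for.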

Re: [OMPI users] OpenMPI 1.8.2 problems on CentOS 6.3

2015-04-28 Thread Lane, William
Ralph, I copied the LAPACK benchmark binaries (xhpl being the binary) over to a development system (which is running the same version of CentOS) but I'm getting some errors trying to run the OpenMPI LAPACK benchmark on OpenMPI 1.8.5: xhpl: error while loading shared libraries: libmpi.so.1: cann
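(A sketch of how this is usually narrowed down, with placeholder paths:)

    $ ldd ./xhpl | grep libmpi
    $ export LD_LIBRARY_PATH=/opt/openmpi-1.8.5/lib:$LD_LIBRARY_PATH
    $ ldd ./xhpl | grep libmpi

The first ldd shows whether libmpi.so.1 resolves at all ("not found" reproduces the error); pointing LD_LIBRARY_PATH at the lib directory of the Open MPI installation the binary was built against is the usual remedy, assuming that directory contains the libmpi.so.1 link.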

Re: [OMPI users] performance issue mpi_init

2015-04-28 Thread Ralph Castain
Here is what I get on my CentOS 7 system using the 1.8.5 about to be released: When built as a debug build: 07:41:34 (v1.8) /home/common/openmpi/ompi-release/orte/test/mpi$ time mpirun -host bend001 -n 2 ./mpi_no_op real 0m0.120s user 0m0.064s sys 0m0.090s 07:42:05 (v1.8) /home/commo
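(If it helps to reproduce this comparison elsewhere: ompi_info from the install under test reports whether it was configured with --enable-debug, along the lines of the sketch below; the exact wording may differ between versions.)

    $ ompi_info | grep -i 'debug support'
      Internal debug support: yes

A debug build carries extra checking and is expected to start up noticeably more slowly than an optimized build, which appears to be the contrast being drawn here.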

[OMPI users] new hwloc error

2015-04-28 Thread Noam Bernstein
Hi all - we’re having a new error, despite the fact that as far as I can tell we haven’t changed anything recently, and I was wondering if anyone had any ideas as to what might be going on. The symptom is that we sometimes get an error when starting a new mpi job: Open MPI tried to bind a new p
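(One low-cost diagnostic, sketched with placeholder host and binary names: have mpirun print the bindings it computes, on a node that works and on one that fails:)

    $ mpirun --report-bindings --bind-to core -host nodeA -n 2 ./a.out

--report-bindings prints, per rank, which cores/hyperthreads were chosen, which makes it easier to spot a node whose topology (as seen by Open MPI's hwloc) differs from the others.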

Re: [OMPI users] Configure failure

2015-04-28 Thread Jeff Squyres (jsquyres)
On Apr 27, 2015, at 5:02 PM, Walt Brainerd wrote: > > CC constants.lo > In file included from ../../../../opal/include/opal_config_bottom.h:256:0, > from ../../../../opal/include/opal_config.h:2797, > from ../../../../ompi/include/ompi_config.h:27, >

[OMPI users] performance issue mpi_init

2015-04-28 Thread Steven Vancoillie
Dear OpenMPI developers, I've run into a recurring problem that was addressed before on this list, in a thread whose subject was "Performance issue of mpirun/mpi_init". I found the original thread here: http://comments.gmane.org/gmane.comp.clustering.open-mpi.user/21346 My former colleague noted that with
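(A rough way to separate launcher overhead from MPI_Init itself, assuming a trivial program ./mpi_noop that only calls MPI_Init and MPI_Finalize:)

    $ time mpirun -n 2 /bin/true     # launch and wire-up only, no MPI library
    $ time mpirun -n 2 ./mpi_noop    # launch plus MPI_Init/MPI_Finalize

If the first command is already slow, the time is going into the runtime/launcher rather than into MPI_Init.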

Re: [OMPI users] MPI_THREAD_MULTIPLE and openib btl

2015-04-28 Thread Mike Dubman
-mca pml_base_verbose 10 you should see: "select: component yalla selected". For mxm debug info, please add: -x LD_PRELOAD=$MXM_DIR/lib/libmxm-debug.so -x MXM_LOG_LEVEL=debug On Tue, Apr 28, 2015 at 7:54 AM, Subhra Mazumdar wrote: > Is there any way (probe or trace or other) to sanity check th
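(Put together, the full command line might look like the sketch below; ./a.out and -np 2 are placeholders:)

    $ mpirun -np 2 -mca pml_base_verbose 10 \
        -x LD_PRELOAD=$MXM_DIR/lib/libmxm-debug.so \
        -x MXM_LOG_LEVEL=debug ./a.out

If the mxm path is in use, the verbose output should contain the "select: component yalla selected" line and the MXM debug log should appear.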

Re: [OMPI users] MPI_THREAD_MULTIPLE and openib btl

2015-04-28 Thread Subhra Mazumdar
Is there any way (probe or trace or other) to sanity check that I am indeed using #2? Subhra On Fri, Apr 24, 2015 at 12:55 AM, Mike Dubman wrote: > yes > > #1 - ob1 as pml, openib as btl (default: rc) > #2 - yalla as pml, mxm as IB library (default: ud, use "-x > MXM_TLS=rc,self,shm" fo
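(Besides the verbose output, the two configurations can be forced explicitly, which makes it unambiguous which one a given run uses; a sketch with a placeholder binary:)

    # force #1: ob1 pml over the openib btl (self added for loopback)
    $ mpirun -np 2 -mca pml ob1 -mca btl openib,self ./a.out

    # force #2: yalla pml over MXM, rc transport as suggested above
    $ mpirun -np 2 -mca pml yalla -x MXM_TLS=rc,self,shm ./a.out

Comparing the behavior and performance of these forced runs against a default run is a simple way to confirm which path the default selection actually takes.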