Re: [OMPI users] Setting LD_LIBRARY_PATH for orted

2017-08-21 Thread Gilles Gouaillardet
Gary, one option (as mentioned in the error message) is to configure Open MPI with --enable-orterun-prefix-by-default. This will force the build process to use rpath, so you do not have to set LD_LIBRARY_PATH. This is the easiest option, but it cannot be used if you plan to relocate the Open
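
A minimal sketch of the configure step described above (the install prefix is illustrative, not taken from the thread):

  $ ./configure --prefix=/opt/openmpi-1.10.2 --enable-orterun-prefix-by-default
  $ make -j 8 && make install

With this option, mpirun behaves as if --prefix had been given, so the remote orted daemons are pointed at the install tree without LD_LIBRARY_PATH having to be set on the compute nodes.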

Re: [OMPI users] Setting LD_LIBRARY_PATH for orted

2017-08-21 Thread Reuti
Hi, > On 21.08.2017 at 18:06, Jackson, Gary L. wrote: > I’m using a binary distribution of OpenMPI 1.10.2. As linked, it requires > certain shared libraries outside of OpenMPI for orted itself to start. So, > passing in LD_LIBRARY_PATH with the “-x” flag to mpirun doesn’t do anything:

[OMPI users] Setting LD_LIBRARY_PATH for orted

2017-08-21 Thread Jackson, Gary L.
I’m using a binary distribution of OpenMPI 1.10.2. As linked, it requires certain shared libraries outside of OpenMPI for orted itself to start. So, passing in LD_LIBRARY_PATH with the “-x” flag to mpirun doesn’t do anything: $ mpirun --hostfile ${HOSTFILE} -N 1 -n 2 -x LD_LIBRARY_PATH hostname
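
A common workaround for this situation (a sketch under assumptions, not necessarily the fix adopted in the thread): -x only exports variables to the MPI processes after orted is already running, so the library path has to be present in the non-interactive environment that ssh gives orted, e.g. in the shell startup file on each compute node (the directory below is illustrative):

  # ~/.bashrc on each compute node (illustrative library path)
  export LD_LIBRARY_PATH=/opt/extlibs/lib:$LD_LIBRARY_PATH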

[OMPI users] MIMD execution with global "--map-by node"

2017-08-21 Thread Christoph Niethammer
Hello, I try to place executables on different sockets on different nodes with Open MPI 2.1.1. For this I use something like the following command: mpirun --map-by ppr:1:node -np 1 numactl -N 0 /bin/hostname : -np 1 numactl -N 1 /bin/hostname But from the output I see that all processes are p
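
A quick way to check the actual placement (a sketch, not taken from the thread) is to add Open MPI's --report-bindings and --display-map options to the same command line:

  $ mpirun --report-bindings --display-map --map-by ppr:1:node \
        -np 1 numactl -N 0 /bin/hostname : \
        -np 1 numactl -N 1 /bin/hostname

This prints the node and socket each process was mapped and bound to, which makes it easy to see whether the two app contexts really ended up on different nodes.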

[OMPI users] openmpi-2.1.2rc2: warnings from "make" and "make check"

2017-08-21 Thread Siegmar Gross
Hi, I've installed openmpi-2.1.2rc2 on my "SUSE Linux Enterprise Server 12.2 (x86_64)" with Sun C 5.15 (Oracle Developer Studio 12.6) and gcc-7.1.0. Perhaps somebody wants to eliminate the following warnings. openmpi-2.1.2rc2-Linux.x86_64.64_gcc/log.make.Linux.x86_64.64_gcc:openmpi-2.1.2rc2/omp

[OMPI users] Bottleneck of OpenMPI over 100Gbps ROCE

2017-08-21 Thread Lizhaogeng
Hi all, sorry for resubmitting this problem; I found I didn't add a subject to the last email. I encountered a problem when I tested the performance of OpenMPI over 100Gbps RoCE. I have two servers connected with Mellanox 100Gbps ConnectX-4 RoCE NICs. I used the Intel MPI Benchmark t
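
An illustrative invocation for this kind of two-node test (hostnames and MCA settings are assumptions, not taken from the thread; IMB-MPI1 is the Intel MPI Benchmarks binary, and RoCE typically requires the RDMA-CM connection manager for the openib BTL):

  $ mpirun -np 2 --host server1,server2 \
        --mca btl openib,self,vader \
        --mca btl_openib_cpc_include rdmacm \
        ./IMB-MPI1 PingPong

For reference, 100 Gb/s of line rate corresponds to roughly 12.5 GB/s of theoretical peak bandwidth.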