Gary,

one option (as mentioned in the error message) is to configure Open MPI
with --enable-orterun-prefix-by-default.

This will force the build process to use rpath, so you do not have to
set LD_LIBRARY_PATH.

This is the easiest option, but it cannot be used if you plan to relocate
the Open MPI installation.
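For reference, a configure invocation using that option might look like the
following sketch (the install prefix is illustrative):

```shell
# Build Open MPI so that mpirun/orted find their own libraries via rpath,
# without requiring LD_LIBRARY_PATH at run time; the prefix is illustrative.
./configure --prefix=/opt/openmpi-1.10.2 \
            --enable-orterun-prefix-by-default
make -j4 && make install
```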
Hi,
> On 21.08.2017 at 18:06, Jackson, Gary L. wrote:
>
>
> I’m using a binary distribution of OpenMPI 1.10.2. As linked, it requires
> certain shared libraries outside of OpenMPI for orted itself to start. So,
> passing in LD_LIBRARY_PATH with the “-x” flag to mpirun doesn’t do anything:
>
$ mpirun --hostfile ${HOSTFILE} -N 1 -n 2 -x LD_LIBRARY_PATH hostname
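One possible workaround, sketched below under illustrative paths: -x only
exports variables to the MPI processes, not to orted itself, because orted is
started over ssh before -x takes effect. A wrapper script deployed on every
node can set LD_LIBRARY_PATH before exec'ing orted, and mpirun's
--launch-agent option substitutes the wrapper for the plain orted command:

```shell
# Wrapper that sets the library path before starting the daemon
# (all paths illustrative; deploy it at the same location on every node):
cat > /opt/wrappers/orted-wrapper.sh <<'EOF'
#!/bin/sh
export LD_LIBRARY_PATH=/opt/deps/lib:$LD_LIBRARY_PATH
exec orted "$@"
EOF
chmod +x /opt/wrappers/orted-wrapper.sh

# Tell mpirun to launch the wrapper instead of orted directly:
mpirun --hostfile ${HOSTFILE} -N 1 -n 2 \
       --launch-agent /opt/wrappers/orted-wrapper.sh hostname
```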
Hello
I am trying to place executables on different sockets on different nodes
with Open MPI 2.1.1.
For that I use a command like the following:

mpirun --map-by ppr:1:node -np 1 numactl -N 0 /bin/hostname : \
       -np 1 numactl -N 1 /bin/hostname

But from the output I see that all processes are p
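If the goal is one rank per socket, Open MPI's own mapping and binding
options may achieve this without wrapping each rank in numactl; a sketch:

```shell
# Map one process per socket and bind each rank to its socket.
# --report-bindings prints the actual binding of every rank, so the
# resulting placement can be verified in the output.
mpirun --map-by ppr:1:socket --bind-to socket --report-bindings \
       -np 2 hostname
```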
Hi,
I've installed openmpi-2.1.2rc2 on my "SUSE Linux Enterprise Server 12.2
(x86_64)" with Sun C 5.15 (Oracle Developer Studio 12.6) and gcc-7.1.0.
Perhaps somebody wants to eliminate the following warnings.
openmpi-2.1.2rc2-Linux.x86_64.64_gcc/log.make.Linux.x86_64.64_gcc:openmpi-2.1.2rc2/omp
Hi all,
Sorry for resubmitting this problem because I found I didn't add the
subject in the last email.
I encountered a problem when I tested the performance of Open MPI over
100 Gbps RoCE.
I have two servers connected with Mellanox ConnectX-4 100 Gbps RoCE NICs.
I used the Intel MPI benchmark t
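For reference, a typical way to run the Intel MPI Benchmarks point-to-point
test over RoCE with Open MPI's openib BTL is sketched below; the host names
are illustrative, and rdmacm is the connection manager usually required for
RoCE:

```shell
# Run IMB PingPong between the two servers over the RoCE ports
# (host names illustrative):
mpirun -np 2 --host server1,server2 \
       --mca btl openib,self,vader \
       --mca btl_openib_cpc_include rdmacm \
       ./IMB-MPI1 PingPong
```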