There is already a nice solution for the useful special case of ABI
portability in which one wants to use more than one MPI library with a
single application binary, but only one MPI library for a given
application invocation:
https://github.com/cea-hpc/wi4mpi
They document support for the Intel MPI and Open MPI ABIs.
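If I read their README correctly (the launcher name and the -f/-t
options below are my recollection of the wi4mpi docs, so treat this as
a sketch and check their repository for the exact syntax), running a
binary built against the Open MPI ABI on top of Intel MPI looks
something like:

    wi4mpi -f openmpi -t intelmpi ./my_app

where ./my_app is a hypothetical application binary.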
Yes, I can confirm that openmpi 3.0.0 builds without issue when
libnl-route-3-dev is installed.
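For anyone else hitting the same build failure: libnl-route-3-dev is
the Debian/Ubuntu package name (I am assuming a system of that family),
so the fix amounts to

    sudo apt-get install libnl-route-3-dev

followed by a rebuild with your original configure options.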
Thanks,
Stephen
Stephen Guzik, Ph.D.
Assistant Professor, Department of Mechanical Engineering
Colorado State University
Hi,
I am currently trying to learn about fault tolerance in MPI, so I
experimented a bit with what happens when I kill various components of
my MPI setup, but I see unexpected hangs in some situations.
I use the following MPI script:
#!/usr/bin/env python
from mpi4py import MPI
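# (The script was truncated here in the digest; what follows is only a
#  minimal sketch of the kind of experiment described, assuming a
#  blocking point-to-point exchange whose peer can be killed mid-wait.)
comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    # Kill rank 1 while rank 0 blocks in this recv to see whether the
    # call returns an error or simply hangs.
    data = comm.recv(source=1, tag=0)
    print("rank 0 received:", data)
else:
    comm.send("ping", dest=0, tag=0)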
Great, it is finally working!
nvml and opencl are only used by hwloc, and I do not think Open MPI
uses these features, so I suggest you go ahead, reconfigure and rebuild
Open MPI, and see how things go.
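The usual rebuild sequence would be something like the following, with
the prefix only as an example value and your original configure options
kept as they were:

    ./configure --prefix=/opt/openmpi
    make -j 4
    make install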
Cheers,
Gilles
Hi Gilles,
Yes, you're right. I wanted to double-check the compile but didn't
notice I was pointing to the executable I had compiled from a previous
make.
mpicc now seems to work; running mpirun hello_c gives:
Hello, world, I am 0 of 4, (Open MPI v3.0.0, package: Open MPI tjim@DESKTOP-TA3P0PS Distribution
Was there an error in the copy/paste?
The mpicc command should be:
mpicc /opt/openmpi/openmpi-3.0.0_src/examples/hello_c.c
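and then, to run the resulting binary (a.out is the default output name
when no -o is given), something like:

    mpirun -np 4 ./a.out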
Cheers,
Gilles
On Fri, Sep 22, 2017 at 3:33 PM, Tim Jim wrote:
> Thanks for the thoughts and comments. Here is the setup information:
> Open MPI v3.0.0. Please see at