Re: [OMPI users] Invalid results with OpenMPI on Ubuntu Artful because of --enable-heterogeneous

2017-11-16 Thread Xavier Besseron
Thanks for looking at it! Apparently, someone requested support for heterogeneous machines a long time ago: https://bugs.launchpad.net/ubuntu/+source/openmpi/+bug/419074 Xavier On Mon, Nov 13, 2017 at 7:56 PM, Gilles Gouaillardet < gilles.gouaillar...@gmail.com> wrote: > Xavier, > > i confirm

[OMPI users] --map-by

2017-11-16 Thread Noam Bernstein
Hi all - I’m trying to run mixed MPI/OpenMP, so I ideally want binding of each MPI process to a small set of cores (to allow for the OpenMP threads). From the mpirun docs at https://www.open-mpi.org//doc/current/man1/mpirun.1.php I got

Re: [OMPI users] --map-by

2017-11-16 Thread r...@open-mpi.org
Do not include the “bind-to core” option. The mapping directive already forces that. Sent from my iPad > On Nov 16, 2017, at 7:44 AM, Noam Bernstein > wrote: > > Hi all - I’m trying to run mixed MPI/OpenMP, so I ideally want binding of > each MPI process to a small set of cores (to allow for
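The usual pattern for this kind of hybrid MPI/OpenMP launch can be sketched as follows (the executable name and the rank/core counts here are placeholders, not taken from the thread): map ranks by a resource pattern and give each rank several processing elements, which itself implies core binding, so no separate bind-to option is needed.

```shell
# Hypothetical layout: 2 MPI ranks per socket, 4 cores (PEs) per rank.
# The pe=4 modifier already binds each rank to its 4 cores, so adding
# --bind-to core on top of it would conflict with the mapping directive.
export OMP_NUM_THREADS=4
mpirun --map-by ppr:2:socket:pe=4 ./my_hybrid_app
```

Adding `--report-bindings` to the mpirun line prints the resulting placement, which is a quick way to verify that each rank really got its own set of cores for the OpenMP threads.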

Re: [OMPI users] --map-by

2017-11-16 Thread Noam Bernstein
> On Nov 16, 2017, at 9:49 AM, r...@open-mpi.org wrote: > > Do not include the “bind-to core” option.the mapping directive already forces > that Same error message, unfortunately. And no, I’m not setting a global binding policy, as far as I can tell: env | grep OMPI_MCA OMPI_MCA_hwloc_base_r

[OMPI users] OMPI 2.1.2 and SLURM compatibility

2017-11-16 Thread Bennet Fauber
I think that OpenMPI is supposed to support SLURM integration such that srun ./hello-mpi should work? I built OMPI 2.1.2 with export CONFIGURE_FLAGS='--disable-dlopen --enable-shared' export COMPILERS='CC=gcc CXX=g++ FC=gfortran F77=gfortran' CMD="./configure \ --prefix=${PREFIX} \

Re: [OMPI users] OMPI 2.1.2 and SLURM compatibility

2017-11-16 Thread Charles A Taylor
Hi Bennet, Three things... 1. OpenMPI 2.x requires PMIx in lieu of pmi1/pmi2. 2. You will need slurm 16.05 or greater built with --with-pmix. 2a. You will need pmix 1.1.5, which you can get from GitHub (https://github.com/pmix/tarballs).
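The build order Charles describes can be sketched as shell steps roughly like these (all prefixes and paths are placeholders; adjust to your site's layout, and note each configure runs in its own source tree):

```shell
# 1. Build PMIx (from the pmix tarballs on GitHub)
./configure --prefix=/opt/pmix && make && make install

# 2. Build Slurm 16.05 or newer against that PMIx
./configure --prefix=/opt/slurm --with-pmix=/opt/pmix && make && make install

# 3. Build Open MPI 2.x against the same external PMIx
./configure --prefix=/opt/openmpi --with-pmix=/opt/pmix && make && make install
```

With all three pieces built against the same PMIx, a direct launch such as `srun --mpi=pmix ./hello-mpi` should then work without going through mpirun.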

Re: [OMPI users] OMPI 2.1.2 and SLURM compatibility

2017-11-16 Thread Bennet Fauber
Charlie, Thanks a ton! Yes, we are missing two of the three steps. Will report back after we get pmix installed and after we rebuild Slurm. We do have a new enough version of it, at least, so we might have missed the target, but we did at least hit the barn. ;-) On Thu, Nov 16, 2017 at 10:5

Re: [OMPI users] OMPI 2.1.2 and SLURM compatibility

2017-11-16 Thread r...@open-mpi.org
What Charles said was true but not quite complete. We still support the older PMI libraries, but you likely have to point us to wherever slurm put them. However, we definitely recommend using PMIx, as you will get a faster launch. Sent from my iPad > On Nov 16, 2017, at 9:11 AM, Bennet Fauber wro

[OMPI users] Forcing MPI processes to end

2017-11-16 Thread Adam Sylvester
I'm using Open MPI 2.1.0 for this but I'm not sure if this is more of an Open MPI-specific implementation question or what the MPI standard guarantees. I have an application which runs across multiple ranks, eventually reaching an MPI_Gather() call. Along the way, if one of the ranks encounters a

Re: [OMPI users] Forcing MPI processes to end

2017-11-16 Thread Aurelien Bouteiller
Adam, your MPI program is incorrect. You need to replace the MPI_Finalize on the process that found the error with MPI_Abort. On Nov 16, 2017 10:38, "Adam Sylvester" wrote: > I'm using Open MPI 2.1.0 for this but I'm not sure if this is more of an > Open MPI-specific implementation question or what th
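A minimal sketch of the fix Aurelien describes, with a generic error flag standing in for Adam's actual failure condition: the rank that detects the error must call MPI_Abort, because MPI_Finalize is collective-like and would hang while the other ranks sit in MPI_Gather waiting for it.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int local_error = 0;  /* illustrative: set nonzero when this rank fails */
    /* ... application work that may set local_error ... */

    if (local_error) {
        /* Calling MPI_Finalize here would deadlock: the other ranks are
         * blocked in MPI_Gather below. MPI_Abort tears down the whole
         * job instead, returning the given error code to the launcher. */
        fprintf(stderr, "rank %d: fatal error, aborting job\n", rank);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    int value = rank;
    int recv[64];  /* sized loosely for this sketch; size properly in real code */
    MPI_Gather(&value, 1, MPI_INT, recv, 1, MPI_INT, 0, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}
```

Note that MPI_Abort makes a best effort to kill all processes in the communicator's job, so the surviving ranks do not need any extra coordination to shut down.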