Re: [OMPI users] know which CPU has the maximum value

2018-08-11 Thread Jeff Hammond
The MPI Forum email lists and GitHub are not secret. Please feel free to follow the GitHub project linked below and/or sign up for the MPI Forum email lists if you are interested in the evolution of the MPI standard. What MPI Forum members should avoid is creating FUD about MPI by speculating about…

Re: [OMPI users] cannot run openmpi 2.1

2018-08-11 Thread Kapetanakis Giannis
On 11/08/18 16:39, Ralph H Castain wrote: Put "oob=^usock" in your default MCA param file, or add OMPI_MCA_oob=^usock to your environment. Thank you very much, that did the trick. Could you please explain this, because I cannot find documentation. G
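
For context on that setting: in Open MPI, a leading "^" in an MCA value means "exclude these components", so oob=^usock tells the out-of-band (oob) framework not to use the usock component. A minimal sketch of the ways to set it, assuming a default installation (the file paths are the usual defaults, not taken from this thread, and ./my_app is a placeholder):

    # Per-user default MCA parameter file (the system-wide one lives in
    # $prefix/etc/openmpi-mca-params.conf for a default install):
    mkdir -p "$HOME/.openmpi"
    echo "oob = ^usock" >> "$HOME/.openmpi/mca-params.conf"

    # Equivalent: set it in the environment before launching
    export OMPI_MCA_oob='^usock'
    mpirun -np 4 ./my_app

    # Equivalent: pass it on the mpirun command line for a single run
    mpirun --mca oob '^usock' -np 4 ./my_app

Any one of the three has the same effect; the param file makes it the default for every run, while the environment variable and the --mca flag affect only the current session or command.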

Re: [OMPI users] cannot run openmpi 2.1

2018-08-11 Thread Ralph H Castain
Put "oob=^usock” in your default mca param file, or add OMPI_MCA_oob=^usock to your environment > On Aug 11, 2018, at 5:54 AM, Kapetanakis Giannis > wrote: > > Hi, > > I'm struggling to get 2.1.x to work with our HPC. > > Version 1.8.8 and 3.x works fine. > > In 2.1.3 and 2.1.4 I get errors

[OMPI users] cannot run openmpi 2.1

2018-08-11 Thread Kapetanakis Giannis
Hi, I'm struggling to get 2.1.x to work with our HPC. Versions 1.8.8 and 3.x work fine. In 2.1.3 and 2.1.4 I get errors and segmentation faults. The builds are with InfiniBand and Slurm support. mpirun locally works fine. Any help to debug this? [node39:20090] [[50526,1],2] usock_peer_recv_c…

Re: [OMPI users] MPI group and stuck in communication

2018-08-11 Thread Jeff Squyres (jsquyres) via users
On Aug 10, 2018, at 6:27 PM, Diego Avesani wrote: > The question is: is it possible to have a barrier for all CPUs even though they belong to different groups? > If the answer is yes I will go into more detail. By "CPUs", I assume you mean "MPI processes", right? (i.e., not threads inside an…
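
Not from the thread itself, but a generic illustration of the usual answer: MPI_Barrier synchronizes exactly the processes in the communicator you pass it, so a barrier across all processes is simply MPI_Barrier(MPI_COMM_WORLD) (or any communicator that spans every process), regardless of what sub-groups also exist. A minimal sketch, with a made-up even/odd split standing in for the poster's groups:

    /* Sketch: processes are split into two sub-communicators, yet can
     * still synchronize globally, because a barrier only covers the
     * communicator it is called on. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int world_rank, world_size;
        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
        MPI_Comm_size(MPI_COMM_WORLD, &world_size);

        /* Split the world into two groups (even vs. odd ranks). */
        MPI_Comm group_comm;
        MPI_Comm_split(MPI_COMM_WORLD, world_rank % 2, world_rank, &group_comm);

        /* Barrier over the sub-group only: synchronizes half the processes. */
        MPI_Barrier(group_comm);

        /* Barrier over every process, regardless of group membership. */
        MPI_Barrier(MPI_COMM_WORLD);

        if (world_rank == 0)
            printf("all %d processes passed the global barrier\n", world_size);

        MPI_Comm_free(&group_comm);
        MPI_Finalize();
        return 0;
    }

Compile with mpicc and launch with, e.g., mpirun -np 4 ./a.out: only the processes sharing group_comm wait on each other at the first barrier, while every process waits at the second.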