The MPI Forum email lists and GitHub are not secret. Please feel free to
follow the GitHub project linked below and/or sign up for the MPI Forum
email lists if you are interested in the evolution of the MPI standard.
What MPI Forum members should avoid is creating FUD about MPI by
speculating about ...
On 11/08/18 16:39, Ralph H Castain wrote:
Put "oob=^usock” in your default mca param file, or add OMPI_MCA_oob=^usock to
your environment
Thank you very much, that did the trick.
Could you please explain this, because I cannot find any documentation about it.
G
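For reference, a minimal sketch of what that suggested setting looks like, assuming the usual per-user MCA parameter file location; the "^" prefix tells Open MPI to exclude the named component rather than select it:

# $HOME/.openmpi/mca-params.conf
oob = ^usock

# or, equivalently, in the shell environment before launching mpirun
export OMPI_MCA_oob=^usock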
On Aug 11, 2018, at 5:54 AM, Kapetanakis Giannis wrote:
Hi,
I'm struggling to get 2.1.x to work with our HPC cluster.
Versions 1.8.8 and 3.x work fine.
With 2.1.3 and 2.1.4 I get errors and segmentation faults. The builds have InfiniBand and Slurm support.
Running mpirun locally works fine. Any help debugging this?
[node39:20090] [[50526,1],2] usock_peer_recv_c
On Aug 10, 2018, at 6:27 PM, Diego Avesani wrote:
>
> The question is:
> Is it possible to have a barrier for all CPUs even though they belong to
> different groups?
> If the answer is yes, I will go into more detail.
By "CPUs", I assume you mean "MPI processes", right? (i.e., not threads inside
an
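Assuming you do mean MPI processes: a barrier on MPI_COMM_WORLD synchronizes every process in the job, regardless of which sub-communicator or group each one belongs to. A minimal sketch in C (the even/odd split is only illustrative, not your actual grouping):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int world_rank;
    MPI_Comm sub_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Split MPI_COMM_WORLD into two sub-communicators (illustrative grouping). */
    MPI_Comm_split(MPI_COMM_WORLD, world_rank % 2, world_rank, &sub_comm);

    /* Synchronizes only the processes within this sub-communicator. */
    MPI_Barrier(sub_comm);

    /* Synchronizes all processes in the job, whatever group they are in. */
    MPI_Barrier(MPI_COMM_WORLD);

    printf("rank %d passed the global barrier\n", world_rank);

    MPI_Comm_free(&sub_comm);
    MPI_Finalize();
    return 0;
}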