Dear All,
I would appreciate some general advice on how to efficiently implement
the following scenario.
I am looking into how to send a large amount of data over IB _once_, to
multiple receivers. The trick is, of course, that while the ping-pong
benchmark delivers great bandwidth, it does s
If you have multiple receivers, then use MPI_Bcast; it does all the
necessary optimizations so that MPI users do not have to struggle to
adapt/optimize their application for a specific architecture/network.
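As a minimal sketch, a single-root broadcast looks like the following
(the buffer size, datatype, and root rank here are placeholders for
illustration only, not taken from your code):

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Illustrative size only: ~128 MB of doubles. */
    const size_t n = 16 * 1024 * 1024;
    double *data = malloc(n * sizeof(double));

    if (rank == 0) {
        /* The root fills the buffer once; every other rank receives it. */
        for (size_t i = 0; i < n; i++)
            data[i] = (double)i;
    }

    /* One call delivers the data to all ranks; the library selects the
       broadcast algorithm (tree, pipeline, ...) for the given network. */
    MPI_Bcast(data, (int)n, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    free(data);
    MPI_Finalize();
    return 0;
}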
George.
On Fri, May 26, 2017 at 6:43 AM, marcin.krotkiewski <marcin.krotkiew...@gmai
Hi,
I have built Open MPI 2.1.1 with hpcx-1.8 and tried to run some MPI code under
Ubuntu 14.04 and LXC (1.x), but I get the following:
[ib7-bc2oo42-be10p16.science.gc.ca:16035] PMIX ERROR: OUT-OF-RESOURCE in file
src/dstore/pmix_esh.c at line 1651
[ib7-bc2oo42-be10p16.science.gc.ca:16035] PMIX E
Hi John,
In the 2.1.x release stream, a shared memory capability was introduced into
the PMIx component.
I know nothing about LXC containers, but it looks to me like there's some
issue when PMIx tries to create these shared memory segments. I'd check to
see if there's something about your container setup that prevents those
segments from being created.
You can also get around it by configuring OMPI with “--disable-pmix-dstore”
> On May 26, 2017, at 3:02 PM, Howard Pritchard wrote:
>
> Hi John,
>
> In the 2.1.x release stream a shared memory capability was introduced into
> the PMIx component.
>
> I know nothing about LXC containers, but it
I have been having some issues using Open MPI with TCP over IPoIB
and with openib. The problems arise when I run a program that uses basic
collective communication. The two programs that I have been using are
attached.
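For readers without the attachments, a minimal test of the kind of basic
collective call involved might look roughly like this (an illustrative
sketch only, not one of the attached programs):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank contributes its rank number; the sum is returned to
       every rank, exercising a basic collective operation. */
    int local = rank, sum = 0;
    MPI_Allreduce(&local, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of ranks 0..%d = %d (expected %d)\n",
               size - 1, sum, size * (size - 1) / 2);

    MPI_Finalize();
    return 0;
}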
*** IPoIB ***
The mpirun command I am using to run MPI over IPoIB is:
mpiru