Gilles:
Regarding http://www.open-mpi.org/community/lists/users/2015/11/27999.php,
I am looking for a package with RMA bug fixes from last summer. I will
start with the git HEAD and work backwards.
Dave:
Regarding http://www.open-mpi.org/community/lists/users/2015/11/27981.php...
The ARMCI-MPI
Abe-san,
I am glad you were able to move forward.
btw, George has a Ph.D., but as Sheldon Cooper would say, I am only an
engineer
Cheers,
Gilles
On Saturday, November 7, 2015, ABE Hiroshi wrote:
> Dear Dr. Bosilca and All,
>
> Regarding my problem, the MPI_Wait stall after MPI_Isend with large (over 4 kbytes) messages has been resolved by Dr. Gouaillardet's suggestion.
Dear Dr. Bosilca and All,
Regarding my problem, the MPI_Wait stall after MPI_Isend with large (over 4 kbytes)
messages has been resolved by Dr. Gouaillardet's suggestion (a rough sketch of the
pattern follows the list):
1. MPI_Isend in the master thread
2. Launch worker threads to receive the messages by MPI_Recv
3. MPI_Waitall in the master thread
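For anyone hitting the same stall, here is a minimal sketch of that three-step
pattern. This is not Abe-san's actual code: it assumes a two-rank job with
MPI_THREAD_MULTIPLE, and the message size, count, and tag are invented for
illustration.

#include <mpi.h>
#include <pthread.h>
#include <stdlib.h>

#define MSG_BYTES 8192   /* deliberately above the ~4 kbyte threshold mentioned above */
#define NMSG      4

static int peer;         /* the other rank in a two-rank job */

static void *worker_recv(void *arg)
{
    char *buf = malloc(MSG_BYTES);
    /* step 2: each worker thread blocks in MPI_Recv for one message */
    MPI_Recv(buf, MSG_BYTES, MPI_CHAR, peer, 42, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    free(buf);
    return NULL;
}

int main(int argc, char **argv)
{
    int provided, rank, i;
    char *sbuf;
    MPI_Request reqs[NMSG];
    pthread_t th[NMSG];

    /* worker threads call MPI concurrently, so MPI_THREAD_MULTIPLE is required */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE)
        MPI_Abort(MPI_COMM_WORLD, 1);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    peer = (rank + 1) % 2;

    sbuf = calloc(NMSG, MSG_BYTES);

    /* step 1: the master thread posts the non-blocking sends */
    for (i = 0; i < NMSG; i++)
        MPI_Isend(sbuf + i * MSG_BYTES, MSG_BYTES, MPI_CHAR, peer, 42,
                  MPI_COMM_WORLD, &reqs[i]);

    /* step 2: launch the worker threads that receive the peer's messages */
    for (i = 0; i < NMSG; i++)
        pthread_create(&th[i], NULL, worker_recv, NULL);

    /* step 3: the master thread waits for its sends to complete */
    MPI_Waitall(NMSG, reqs, MPI_STATUSES_IGNORE);

    for (i = 0; i < NMSG; i++)
        pthread_join(th[i], NULL);

    free(sbuf);
    MPI_Finalize();
    return 0;
}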
> From: Jeff Squyres (jsquyres) (jsquyres_at_[hidden])
> Date: 2015-11-06 18:02:42
>
> Both of these seem to be issues with libnl, which is a dependent library
> that Open MPI uses.
Based on your email, I found this message and thread:
https://www.open-mpi.org/community/lists/devel/2015/08/17812.p
Both of these seem to be issues with libnl, which is a dependent library that
Open MPI uses.
Can you send all the information listed here:
http://www.open-mpi.org/community/help/
> On Nov 6, 2015, at 5:44 PM, Saurabh T wrote:
>
> Hi,
>
> On Red Hat Enterprise Linux 7, I am facing the following problems.
Hi,
On Red Hat Enterprise Linux 7, I am facing the following problems.
1. With OpenMPI 1.8.8, everything builds, but the following error appears when
running:
orterun -np 2 hello_cxx
hello_cxx: route/tc.c:973: rtnl_tc_register: Assertion `0' failed.
hello_cxx: route/tc.c:973: rtnl_tc_register: Assertion `0' failed.
Harald,
Non-blocking collectives were introduced in the v1.8 series.
I will review the libnbc module, and other modules as well.
Jeff can/will explain why the Fortran bindings should be wrapped in Fortran and
not in C.
FWIW, I understand in some cases it can be convenient to have Fortran
bindings inv
Hello Gilles,
some comments interspersed
On 11/06/2015 02:50 PM, Gilles Gouaillardet wrote:
Harald,
the answer is in ompi/mca/coll/libnbc/nbc_ibcast.c
this has been revamped (but not 100%) in v2.x
(e.g. no more calls to MPI_Comm_{size,rank} but MPI_Type_size is still
being invoked)
Ah! it
Harald,
the answer is in ompi/mca/coll/libnbc/nbc_ibcast.c
this has been revamped (but not 100%) in v2.x
(e.g. no more calls to MPI_Comm_{size,rank} but MPI_Type_size is still
being invoked)
I will review this.
basically, no MPI_* should be invoked internally (e.g. we should use the
internal ompi_* equivalents instead)
Dear all,
we develop an instrumentation package based on PMPI, and we have
observed that PMPI_Ibarrier and PMPI_Ibcast invoke the regular MPI_Comm_size
and MPI_Comm_rank instead of the PMPI symbols (i.e. PMPI_Comm_size and
PMPI_Comm_rank) in OpenMPI 1.10.0.
I have attached a simple example that demonstrates this.
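The attachment itself does not appear in this archive, so here is a minimal
sketch of the kind of PMPI test case being described (my own illustration, not
the actual attachment): the wrappers below intercept MPI_Comm_size and
MPI_Comm_rank, and with the behaviour reported above the counters also increase
while the non-blocking broadcast runs, because libnbc calls the public MPI_*
symbols internally rather than the PMPI_* ones.

#include <mpi.h>
#include <stdio.h>

static int size_calls = 0;
static int rank_calls = 0;

/* Standard PMPI interposition: our definitions of the MPI_* symbols take
 * precedence at link time and forward to the PMPI_* entry points. */
int MPI_Comm_size(MPI_Comm comm, int *size)
{
    size_calls++;
    return PMPI_Comm_size(comm, size);
}

int MPI_Comm_rank(MPI_Comm comm, int *rank)
{
    rank_calls++;
    return PMPI_Comm_rank(comm, rank);
}

int main(int argc, char **argv)
{
    int rank, buf = 0;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    PMPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* only count the calls made from inside the non-blocking collective */
    size_calls = rank_calls = 0;
    MPI_Ibcast(&buf, 1, MPI_INT, 0, MPI_COMM_WORLD, &req);
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    printf("rank %d: MPI_Comm_size wrapper hit %d times, MPI_Comm_rank wrapper hit %d times\n",
           rank, size_calls, rank_calls);

    MPI_Finalize();
    return 0;
}

With an MPI library that routes its internal queries around the profiling layer,
both counters stay at zero; the behaviour described above shows up as non-zero
counts.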