Alfio --
We just released Open MPI v2.0.0, with lots of MPI RMA fixes. Would you mind
testing with that release?
> On Jul 12, 2016, at 1:33 PM, Alfio Lazzaro wrote:
>
> Dear OpenMPI developers,
> we found a strange behavior when using MPI-RMA passive target and OpenMPI
> (versions 1.8.3 and 1.10.2). We
Thanks, Jeff, for your input.
Murali
On 7/12/16, 12:12 PM, "devel on behalf of Jeff Squyres (jsquyres)"
wrote:
>Unfortunately, the Open MPI code base is quite large, and changes over
>time.
>
>There really is no overall diagram describing the entire code base,
>sorry. The OPAL-level dox
Unfortunately, the Open MPI code base is quite large, and changes over time.
There really is no overall diagram describing the entire code base, sorry. The
OPAL-level doxygen docs are probably the best you'll get, but they're really
only the utility classes in the portability layer. They don't
Thanks, Kawashima. These notes are really helpful.
Murali
On 7/7/16, 4:17 PM, "devel on behalf of KAWASHIMA Takahiro"
wrote:
>FWIW, I have my private notes on process- and datatype-related structs.
>
> https://rivis.github.io/doc/openmpi/openmpi-source-reading.en.xhtml
>
>They are created
Thanks Ralph.
The 'doxygen' command generated a bunch of HTML files along with a few class
diagrams in GIF format. I think these figures cover only a few classes/structs
and are not exhaustive. I am looking to generate a complete hierarchical
diagram. I will try to see if I can utilize the generated ht
Dear OpenMPI developers,
we found a strange behavior when using MPI-RMA passive target and OpenMPI
(versions 1.8.3 and 1.10.2). We don't see any problem when using MPICH.
This is a small example of what we want to do:
===
program rma_openmpi
use mpi
integer :: nproc, rank, ier
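===
Since the quoted program is cut off in the digest, here is a minimal,
hypothetical sketch of what a passive-target RMA test of this kind can look
like (a window of one default integer, rank 0 reading rank 1's value with a
shared lock); the window size, ranks, and variable names are assumptions, not
Alfio's actual code:
===
program rma_sketch
  use mpi
  implicit none
  integer :: nproc, rank, ier, win, buf, val
  integer(kind=MPI_ADDRESS_KIND) :: winsize, disp

  call MPI_Init(ier)
  call MPI_Comm_size(MPI_COMM_WORLD, nproc, ier)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ier)

  buf = rank
  winsize = 4   ! bytes; assumes a 4-byte default integer
  call MPI_Win_create(buf, winsize, 4, MPI_INFO_NULL, &
                      MPI_COMM_WORLD, win, ier)

  ! Passive target: only the origin (rank 0) makes synchronization
  ! calls; the target (rank 1) does not participate in the epoch.
  if (rank == 0 .and. nproc > 1) then
     disp = 0
     call MPI_Win_lock(MPI_LOCK_SHARED, 1, 0, win, ier)
     call MPI_Get(val, 1, MPI_INTEGER, 1, disp, 1, MPI_INTEGER, win, ier)
     call MPI_Win_unlock(1, win, ier)
  end if

  call MPI_Win_free(win, ier)
  call MPI_Finalize(ier)
end program rma_sketch
===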
Hello Gilles, Howard,
I configured without --disable-dlopen; same error.
I tested these classes on another cluster and: IT WORKS!
So it is a problem with the cluster configuration. Thank you all very much
for all your help! When the admin has solved the problem, I will let you
know what he had cha
Instead of manually linking like this:
gfortran -o a.out a.o -lmpi_usempi -lmpi_mpifh -lmpi
you can simply use the compiler wrapper:
mpifort -o a.out a.o
so you do not have to worry about the Open MPI library names.
/* I do not think mpifort was available back in 1.6.5, but mpif90 was */
Cheers,
Sorry, but I didn't understand the relation between the name changes and the
wrapper compilers. I only used --enable-static in the configure process.
> -rw-r--r-- 1 root root 1029580 Jul 11 23:51 libmpi_mpifh.a
> -rw-r--r-- 1 root root 17292 Jul 11 23:51 libmpi_usempi.a
>These are the two for v1.10.x.
S