Re: [OMPI users] Cast MPI inside another MPI?

2016-11-25 Thread George Bosilca
Diego, MPI+MPI is a well-known parallel programming paradigm. Why are you trying to avoid MPI + OpenMP? Open MPI is a fully 3.1-compatible implementation of the MPI standard, and as such it implements all APIs described in version 3.1 of the MPI standard (http://mpi-forum.org/docs/). Otherwise
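
A minimal sketch of the two-level MPI+MPI idea George refers to, using MPI_Comm_split to carve MPI_COMM_WORLD into sub-communicators (the group size of 4 and all variable names are placeholders, not anything from the thread):

program mpi_plus_mpi
  use mpi
  implicit none
  integer :: ierr, wrank, wsize, color, subcomm, srank, ssize

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, wrank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, wsize, ierr)

  ! Outer level: MPI_COMM_WORLD. Inner level: sub-communicators of
  ! (up to) 4 ranks each; the group size is an arbitrary choice here.
  color = wrank / 4
  call MPI_Comm_split(MPI_COMM_WORLD, color, wrank, subcomm, ierr)

  call MPI_Comm_rank(subcomm, srank, ierr)
  call MPI_Comm_size(subcomm, ssize, ierr)
  ! Collectives and point-to-point traffic can now be issued either on
  ! subcomm (inner level) or on MPI_COMM_WORLD (outer level).

  call MPI_Comm_free(subcomm, ierr)
  call MPI_Finalize(ierr)
end program mpi_plus_mpi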

Re: [OMPI users] Segmentation fault (invalid mem ref) at MPI_StartAll (second call)

2016-11-25 Thread George Bosilca
At first glance I would say you are confusing the variables counting your requests, reqcount and nrequests. George. On Fri, Nov 25, 2016 at 7:11 AM, Paolo Pezzutto wrote: > Dear all, > > I am struggling with an invalid memory reference when calling SUB EXC_MPI > (MOD01), and precisely at

Re: [OMPI users] malloc related crash inside openmpi

2016-11-25 Thread Noam Bernstein
> On Nov 24, 2016, at 10:52 AM, r...@open-mpi.org wrote: > > Just to be clear: are you saying that mpirun exits with that message? Or is > your application process exiting with it? > > There is no reason for mpirun to be looking for that library. > > The library in question is in the /lib/openm

[OMPI users] Segmentation fault (invalid mem ref) at MPI_StartAll (second call)

2016-11-25 Thread Paolo Pezzutto
Dear all, I am struggling with an invalid memory reference when calling SUB EXC_MPI (MOD01), and precisely at MPI_StartAll (see comment below). @@ ! ** file mod01.f90 ! MODULE MOD01 implicit none include 'mpif.h' ! alternat
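
For reference, the canonical persistent-request cycle that MPI_STARTALL belongs to looks roughly like the sketch below (buffer sizes, tags and names are illustrative assumptions, not Paolo's actual MOD01 code). The point George's reply raises is that the count passed to MPI_STARTALL and MPI_WAITALL must equal the number of requests actually created:

program persistent_requests
  use mpi
  implicit none
  integer, parameter :: n = 100, nreq = 2
  integer :: ierr, rank, nprocs, left, right, step
  integer :: requests(nreq), statuses(MPI_STATUS_SIZE, nreq)
  double precision :: sendbuf(n), recvbuf(n)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)
  left  = mod(rank - 1 + nprocs, nprocs)
  right = mod(rank + 1, nprocs)

  ! Create the persistent requests once...
  call MPI_Send_init(sendbuf, n, MPI_DOUBLE_PRECISION, right, 0, &
                     MPI_COMM_WORLD, requests(1), ierr)
  call MPI_Recv_init(recvbuf, n, MPI_DOUBLE_PRECISION, left, 0, &
                     MPI_COMM_WORLD, requests(2), ierr)

  ! ...and reuse them every iteration with the SAME count (nreq).
  do step = 1, 10
     sendbuf = dble(rank + step)
     call MPI_Startall(nreq, requests, ierr)
     call MPI_Waitall(nreq, requests, statuses, ierr)
  end do

  call MPI_Request_free(requests(1), ierr)
  call MPI_Request_free(requests(2), ierr)
  call MPI_Finalize(ierr)
end program persistent_requests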

[OMPI users] Cast MPI inside another MPI?

2016-11-25 Thread Diego Avesani
Dear all, I have the following question. Is it possible to cast an MPI inside another MPI? I would like to have two levels of parallelization, but I would like to avoid the MPI-OpenMP paradigm. Another question. I normally use Open MPI but I would like to read something to understand and learn all i

Re: [OMPI users] MPI_Sendrecv datatype memory bug ?

2016-11-25 Thread Gilles Gouaillardet
Yann, Please post the test case that evidences the issue. What is the minimal config required to reproduce it (e.g. number of nodes and tasks per node)? If more than one node, which interconnect are you using? Out of curiosity, what if you mpirun --mca mpi_leave_pinned 0 ... or mpirun --mca btl
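
As an illustration only, the kind of self-contained reproducer Gilles is asking for might look like the sketch below: a single MPI_Sendrecv exchanging a derived (vector) datatype between paired ranks. The datatype layout, sizes and rank pairing are arbitrary assumptions, not Yann's actual test case:

program sendrecv_ddt
  use mpi
  implicit none
  integer, parameter :: nrows = 8, ncols = 8
  integer :: ierr, rank, nprocs, peer, rowtype
  integer :: status(MPI_STATUS_SIZE)
  double precision :: a(nrows, ncols), b(nrows, ncols)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)
  peer = ieor(rank, 1)              ! pair ranks 0<->1, 2<->3, ...

  ! Derived datatype: one matrix row, strided across the columns.
  call MPI_Type_vector(ncols, 1, nrows, MPI_DOUBLE_PRECISION, rowtype, ierr)
  call MPI_Type_commit(rowtype, ierr)

  a = dble(rank)
  b = -1.0d0
  if (peer < nprocs) then
     ! Exchange one strided row with the partner rank.
     call MPI_Sendrecv(a, 1, rowtype, peer, 0, &
                       b, 1, rowtype, peer, 0, &
                       MPI_COMM_WORLD, status, ierr)
  end if

  call MPI_Type_free(rowtype, ierr)
  call MPI_Finalize(ierr)
end program sendrecv_ddt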