Hi Nathan,
Certainly, OpenMPI was compiled with Valgrind support:
%/opt/mpi/openmpi-1.8.4.dbg/bin/ompi_info | grep -i memchecker
MCA memchecker: valgrind (MCA v2.0, API v2.0, Component v1.8.4)
The following configure options were used:
--enable-mem-debug --enable-debug --enable-memch
Hi,
Our parallel application behaves strangely when compiled with OpenMPI
v1.8.4 on both the Linux and Mac OS X platforms. Valgrind reports memory
problems in OpenMPI rather than in our code:
==4440== Invalid read of size 1
==4440==    at 0xCAD6D37: ompi_osc_rdma_callback (osc_rdma_data_m
Dear Developers,
I would like to clarify a question about the OpenMPI license. We are working
on academic code and our project is non-profit. We are now planning to
sell the parallel binaries. The question is whether we are allowed to compile
our project with OpenMPI (v1.8.2) and then distribute
Hi,
I just want to confirm that the issue has been fixed. Specifically, with the
latest OpenMPI v1.8.1a1r31402 we now need 2.5 hrs to complete verification,
and that timing is even slightly better than v1.6.5 (3 hrs).
Thank you very much for your assistance!
With best regards,
Victor.
Hi again,
> Okay, I'll try to do a little poking around. Meantime, please send along the
> output from "ompi_info" so we can see how this was configured and what built.
Enclosed please find the requested information. It would be great to have a
workaround for 1.8, because with 1.8 our verification
Dear Ralph,
> it appears that 1.8 is much faster than 1.6.5 with the default settings, but
> slower when you set btl=tcp,self?
Precisely. However, with the default settings both versions are much slower
than other MPI distributions such as MPICH, MVAPICH, and proprietary
ones. The 'b
Dear Developers,
I have run into a performance degradation on a multi-core, single-processor
machine. Specifically, in the most recent Open MPI v1.8 the initialization and
process-startup stage became ~10x slower than in v1.6.5. To measure the
timings I used the following code snippet
Hi Ralph,
> -mca orte_abort_non_zero_exit 0
Thank you for the hint. That is exactly what I need! BTW, does it help if
one of the worker nodes occasionally dies during the MPMD run?
With best regards,
Victor.
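For reference, the MCA parameter quoted above is passed on the mpiexec command line; a sketch with placeholder binaries prog1 and prog2 (not real programs from this thread):

```shell
# Keep the MPMD job running even if one program exits with a
# non-zero status (prog1/prog2 are placeholder binaries).
mpiexec -mca orte_abort_non_zero_exit 0 -n 1 prog1 : -n 1 prog2
```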
Dear OpenMPI Developers and Users,
I have a general question about signal trapping/handling within
mpiexec/mpirun. Let me assume that I have 2 cores and I start two different
(independent) programs, prog1 and prog2, in parallel via the mpirun/mpiexec
startup command:
mpiexec -n 1 prog1 : -n 1 prog2
Dear Brian,
thank you very much for your assistance and for the bug fix.
Regards,
Victor.
Since my question has gone unanswered for 4 days, I am repeating the original
post.
Dear Developers,
I am running into memory problems when frequently creating/allocating an MPI
window and its memory. Below is a sample code reproducing the problem:
#include <stdio.h>
#include <mpi.h>
#define NEL 8
#define NTIMES 10
Dear Developers,
I am running into memory problems when frequently creating/allocating an MPI
window and its memory. Below is a sample code reproducing the problem:
#include <stdio.h>
#include <mpi.h>
#define NEL 8
#define NTIMES 100
int main (int argc, char *argv[]) {
    int i;
    double w[
Hello,
I am wondering whether the MPI_Accumulate subroutine implemented in
OpenMPI v1.6.2 is capable of operating on derived datatypes. I wrote a very
simple test program for accumulating data from several processes on the
master. The program works properly only with predefined datatypes. In th