Re: [OMPI users] Valgrind reports plenty of Invalid reads in osc_rdma_data_move.c

2015-01-15 Thread Victor Vysotskiy
Hi Nathan, yes, Open MPI was indeed compiled with Valgrind support: % /opt/mpi/openmpi-1.8.4.dbg/bin/ompi_info | grep -i memchecker MCA memchecker: valgrind (MCA v2.0, API v2.0, Component v1.8.4) The following configure options were used: --enable-mem-debug --enable-debug --enable-memchecker …

[OMPI users] Valgrind reports plenty of Invalid reads in osc_rdma_data_move.c

2015-01-14 Thread Victor Vysotskiy
Hi, our parallel application behaves strangely when compiled with Open MPI v1.8.4, on both Linux and Mac OS X. Valgrind reports memory problems in Open MPI itself rather than in our code: ==4440== Invalid read of size 1 ==4440== at 0xCAD6D37: ompi_osc_rdma_callback (osc_rdma_data_move.c …

[OMPI users] Question on licensing

2014-06-17 Thread Victor Vysotskiy
Dear Developers, I would like to clarify a question about the Open MPI license. We are working on an academic code and our project is non-profit. We are now planning to sell the parallel binaries. The question is whether it is allowed to compile our project with Open MPI (v1.8.2) and then distribute …

[OMPI users] FW: Performance issue of mpirun/mpi_init

2014-04-16 Thread Victor Vysotskiy
Hi, I can confirm that the issue has been fixed. Specifically, with the latest Open MPI v1.8.1a1r31402 we now need 2.5 hrs to complete verification, and that timing is even slightly better than v1.6.5 (3 hrs). Thank you very much for your assistance! With best regards, Victor. >I …

Re: [OMPI users] Performance issue of mpirun/mpi_init

2014-04-10 Thread Victor Vysotskiy
Hi again, > Okay, I'll try to do a little poking around. Meantime, please send along the output from "ompi_info" so we can see how this was configured and what was built. Enclosed please find the requested information. It would be great to have a workaround for 1.8, because with 1.8 our verification …

Re: [OMPI users] Performance issue of mpirun/mpi_init

2014-04-10 Thread Victor Vysotskiy
Dear Ralph, > it appears that 1.8 is much faster than 1.6.5 with the default settings, but slower when you set btl=tcp,self? Precisely. However, with the default settings both versions are much slower than other MPI distributions such as MPICH, MVAPICH, and proprietary ones. The 'b…

[OMPI users] Performance issue of mpirun/mpi_init

2014-04-10 Thread Victor Vysotskiy
Dear Developers, I have run into a performance degradation on a multi-core, single-processor machine. Specifically, in the most recent Open MPI v1.8 the initialization and process-startup stage became ~10x slower compared to v1.6.5. In order to measure timings I used the following code snippet …
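The snippet itself is cut off by the archive preview. A minimal sketch of how such a startup-time measurement might look (the exact program is an assumption, not the poster's code):

    /* Time how long MPI_Init takes. The start stamp must be taken
     * before MPI_Init, so MPI_Wtime cannot be used for it. */
    #include <stdio.h>
    #include <sys/time.h>
    #include <mpi.h>

    int main(int argc, char *argv[]) {
        struct timeval t0, t1;
        gettimeofday(&t0, NULL);
        MPI_Init(&argc, &argv);
        gettimeofday(&t1, NULL);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0)
            printf("MPI_Init took %.3f s\n",
                   (t1.tv_sec - t0.tv_sec) + 1e-6 * (t1.tv_usec - t0.tv_usec));

        MPI_Finalize();
        return 0;
    }

Launching this with, e.g., mpirun -np 8 ./init_timer under v1.6.5 and v1.8 would expose the ~10x difference reported above.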

Re: [OMPI users] Question on MPMD runs

2013-05-30 Thread Victor Vysotskiy
Hi Ralph, > -mca orte_abort_non_zero_exit 0 Thank you for the hint; that is exactly what I need! BTW, does it also help if one of the worker nodes dies unexpectedly during the MPMD run? With best regards, Victor.

[OMPI users] Question on MPMD runs

2013-05-30 Thread Victor Vysotskiy
Dear Open MPI Developers and Users, I have a general question on signal trapping/handling within mpiexec/mpirun. Suppose I have 2 cores and I start two different (independent) programs, prog1 and prog2, in parallel via the mpirun/mpiexec startup command: mpiexec -n 1 prog1 : -n 1 prog2 …
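For illustration, a hypothetical sketch of the failing half of such an MPMD run (prog2.c is invented for this example, not taken from the thread); it exits non-zero after a clean MPI shutdown, which is exactly the case the orte_abort_non_zero_exit MCA parameter from the reply above controls:

    /* prog2.c -- exits with a non-zero status after MPI_Finalize.
     * By default mpirun treats this as an abnormal termination and
     * tears down the whole MPMD job (including prog1); with
     *   mpiexec -mca orte_abort_non_zero_exit 0 -n 1 prog1 : -n 1 prog2
     * the remaining program is left running. */
    #include <mpi.h>

    int main(int argc, char *argv[]) {
        MPI_Init(&argc, &argv);
        MPI_Finalize();
        return 1;  /* non-zero exit code */
    }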

Re: [OMPI users] [EXTERNAL] Possible memory leak(s) in OpenMPI 1.6.3?

2013-01-22 Thread Victor Vysotskiy
Dear Brian, thank you very much for your assistance and for the bug fix. Regards, Victor.

[OMPI users] Possible memory leak(s) in OpenMPI 1.6.3?

2013-01-21 Thread Victor Vysotskiy
Since my question has gone unanswered for 4 days, I am repeating the original post. Dear Developers, I am running into memory problems when frequently creating/allocating an MPI window and its memory. Below is a sample code reproducing the problem: #include <stdio.h> #include <mpi.h> #define NEL 8 #define NTIMES 10…

[OMPI users] Possible memory leak(s) in OpenMPI 1.6.3?

2013-01-17 Thread Victor Vysotskiy
Dear Developers, I am running into memory problems when frequently creating/allocating an MPI window and its memory. Below is a sample code reproducing the problem: #include <stdio.h> #include <mpi.h> #define NEL 8 #define NTIMES 100 int main (int argc, char *argv[]) { int i; double w[…
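The listing is truncated by the archive. A hedged reconstruction, assuming the two stripped headers are <stdio.h> and <mpi.h> and that the cut-off loop simply creates and frees a window over the buffer w on every iteration:

    #include <stdio.h>
    #include <mpi.h>

    #define NEL    8
    #define NTIMES 100

    int main(int argc, char *argv[]) {
        int i;
        double w[NEL];
        MPI_Win win;

        MPI_Init(&argc, &argv);
        for (i = 0; i < NTIMES; i++) {
            /* repeatedly create and destroy a window over the same buffer */
            MPI_Win_create(w, NEL * sizeof(double), sizeof(double),
                           MPI_INFO_NULL, MPI_COMM_WORLD, &win);
            MPI_Win_free(&win);
        }
        MPI_Finalize();
        return 0;
    }

Each MPI_Win_create/MPI_Win_free pair should release everything it allocated, so any steady memory growth under this pattern points at the library rather than the application.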

[OMPI users] Is MPI_Accumulate compatible with an user-defined derived datatype?

2012-10-10 Thread Victor Vysotskiy
Hello, I am wondering whether the MPI_Accumulate routine implemented in Open MPI v1.6.2 is capable of operating on derived datatypes. I wrote a very simple test program for accumulating data from several processes on the master. The program works properly only with predefined datatypes. In th…
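The post is cut off before the details. A minimal sketch of the kind of test described (names and sizes are illustrative, not the poster's code): every rank accumulates a vector into the first column of a matrix on the master, using a derived datatype built with MPI_Type_vector as the target type:

    #include <stdio.h>
    #include <mpi.h>

    #define N 4

    int main(int argc, char *argv[]) {
        int rank, nprocs;
        double win_buf[N * N] = {0};   /* N x N row-major matrix on every rank */
        double local[N];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* one column of the matrix: N doubles with stride N */
        MPI_Datatype column;
        MPI_Type_vector(N, 1, N, MPI_DOUBLE, &column);
        MPI_Type_commit(&column);

        for (int i = 0; i < N; i++)
            local[i] = rank + 1.0;

        MPI_Win win;
        MPI_Win_create(win_buf, N * N * sizeof(double), sizeof(double),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        MPI_Win_fence(0, win);
        /* contiguous origin buffer, strided (derived) target datatype */
        MPI_Accumulate(local, N, MPI_DOUBLE, 0, 0, 1, column, MPI_SUM, win);
        MPI_Win_fence(0, win);

        if (rank == 0)
            printf("w[0][0] = %g (expected %g)\n",
                   win_buf[0], nprocs * (nprocs + 1) / 2.0);

        MPI_Win_free(&win);
        MPI_Type_free(&column);
        MPI_Finalize();
        return 0;
    }

If the derived-datatype path is broken, a run like mpirun -np 4 ./acc_test prints a wrong sum, while the same program with a predefined MPI_DOUBLE target works.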