Re: [OMPI users] Parallel MPI broadcasts (parameterized)

2017-12-04 Thread Konstantinos Konstantinidis
Coming back to this discussion after a long time, let me clarify a few issues that you have addressed. 1. Yes, the list of communicators in G is ordered in the same way on all processes. 2. I am now using "mcComm != MPI_COMM_NULL" for the participation check. I have not seen much improvement, but it's

Re: [OMPI users] Possible memory leak in opal_free_list_grow_st

2017-12-04 Thread Nathan Hjelm
Have you opened a bug report on github? Typically you will get much better turnaround on issues when reported there. I, for one, don’t have time to check the mailing list for bugs but I do regularly check the bug tracker. Assign the bug to me when it is open. -Nathan > On Dec 4, 2017, at 9:32

Re: [OMPI users] Possible memory leak in opal_free_list_grow_st

2017-12-04 Thread Jeff Hammond
Try another implementation like MPICH (or derivatives thereof, e.g. MVAPICH2 or Intel MPI). If you do not see the problem there, then it's pretty good evidence that it is an Open-MPI bug. In my experience, the developers of both Open-MPI and MPICH can be shamed into fixing bugs when the competing

Re: [OMPI users] Possible memory leak in opal_free_list_grow_st

2017-12-04 Thread Philip Blakely
Hello, just following up on this from a few weeks ago, since no one seems to have responded. Does anyone have any suggestions as to whether this is a genuine memory leak with OpenMPI or some other kind of problem I need to debug? For more real-world context: this was triggered by a CFD code we use

Re: [OMPI users] IMB-MPI1 hangs after 30 minutes with Open MPI 3.0.0 (was: Openmpi 1.10.4 crashes with 1024 processes)

2017-12-04 Thread Peter Kjellström
On Fri, 1 Dec 2017 21:32:35 +0100 Götz Waschk wrote: ...
> # Benchmarking Alltoall
> # #processes = 1024
> #
>    #bytes #repetitions  t_min[usec]  t_max[usec]  t_avg[usec]
>         0         1000         0.04         0.09