Coming back to this discussion after a long time, let me clarify a few of the
issues you have raised.
1. Yes, the list of communicators in G is ordered in the same way on all
processes.
2. I am now using "mcComm != MPI_COMM_NULL" for the participation check (a
minimal sketch of what I mean follows below). I have not seen much
improvement, but it's
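For clarity, here is a minimal sketch of the kind of participation check I
mean. This is not my actual code: the even/odd split is purely for
illustration, and "mcComm" is simply the name I use for the resulting
sub-communicator.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Ranks that pass MPI_UNDEFINED as the color get MPI_COMM_NULL back. */
    int color = (rank % 2 == 0) ? 0 : MPI_UNDEFINED;
    MPI_Comm mcComm;
    MPI_Comm_split(MPI_COMM_WORLD, color, rank, &mcComm);

    if (mcComm != MPI_COMM_NULL) {
        /* This rank participates in the sub-communicator. */
        int sub_rank;
        MPI_Comm_rank(mcComm, &sub_rank);
        printf("world rank %d participates as sub rank %d\n", rank, sub_rank);
        MPI_Comm_free(&mcComm);  /* only free communicators we actually own */
    }

    MPI_Finalize();
    return 0;
}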
Have you opened a bug report on GitHub? You will typically get much better
turnaround on issues reported there. I, for one, don't have time to check the
mailing list for bugs, but I do regularly check the bug tracker. Assign the
bug to me once it is opened.
-Nathan
> On Dec 4, 2017, at 9:32
Try another implementation like MPICH (or derivatives thereof, e.g. MVAPICH2
or Intel MPI). If you do not see the problem there, then that is pretty good
evidence that it is an Open-MPI bug.
In my experience, the developers of both Open-MPI and MPICH can be shamed
into fixing bugs when the competing
Hello,
Just following up on this from a few weeks ago, since no one seems to have
responded. Does anyone have any suggestions as to whether this is a genuine
memory leak in OpenMPI or some other kind of problem I need to debug?
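To make the question more concrete, what I have in mind is a standalone check
along the following lines (a hypothetical, Linux-only sketch that reads VmRSS
from /proc/self/status; it is not the IMB code): call MPI_Alltoall in a loop
and watch whether the resident set size keeps climbing even when none of our
application code is involved.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* Read VmRSS in kB from /proc/self/status (Linux-specific). */
static long rss_kb(void)
{
    FILE *f = fopen("/proc/self/status", "r");
    char line[256];
    long kb = -1;
    if (!f) return -1;
    while (fgets(line, sizeof line, f)) {
        if (sscanf(line, "VmRSS: %ld kB", &kb) == 1) break;
    }
    fclose(f);
    return kb;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int count = 1024;  /* ints sent to each destination rank */
    int *sendbuf = malloc((size_t)size * count * sizeof *sendbuf);
    int *recvbuf = malloc((size_t)size * count * sizeof *recvbuf);
    for (int i = 0; i < size * count; i++) sendbuf[i] = rank;

    for (int iter = 0; iter < 10000; iter++) {
        MPI_Alltoall(sendbuf, count, MPI_INT,
                     recvbuf, count, MPI_INT, MPI_COMM_WORLD);
        if (rank == 0 && iter % 1000 == 0)
            printf("iter %d: VmRSS = %ld kB\n", iter, rss_kb());
    }

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}

If a loop like this shows steadily growing RSS under Open MPI but not under
another implementation, that would point at a genuine leak rather than at
something in our application.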
For more real-world context: this was triggered by a CFD code we use
On Fri, 1 Dec 2017 21:32:35 +0100
Götz Waschk wrote:
...
> # Benchmarking Alltoall
> # #processes = 1024
> #
> #bytes #repetitions  t_min[usec]  t_max[usec]  t_avg[usec]
>      0         1000         0.04         0.09