Nathan, all,
Thanks for the quick fix. I can confirm that the behavior with multiple
windows is now as expected and matches that of 1.10.3.
Best
Joseph
On 08/25/2016 10:51 PM, Nathan Hjelm wrote:
Fixed on master. The fix will be in 2.0.2 but you can apply it to
2.0.0 or 2.0.1:
https://github.com/open-mpi/ompi/commit/e53de7ecbe9f034ab92c832330089cf7065181dc.patch
-Nathan
The error also occurs when the openib btl is disabled with
-mca btl ^openib
so it does not appear to be specific to that component.
The error is present in 2.0.0 and also in 2.0.1rc1.
Best
Christoph Niethammer
----- Original Message -----
From: "Joseph Schuchart"
To: users@lists.open-mpi.org
Sent: Thursday, August 25, 2016 2:07:17 PM
Subject: [OMPI users] Regression: multiple memory regions in dynamic windows

All,

It seems there is a regression in the handling of dynamic windows
between Open MPI 1.10.3 and 2.0.0. I am attaching a test case that works
fine with Open MPI 1.8.3 and fails with version 2.0.0.
Gilles,
Thanks for your fast reply. I did some last-minute changes to the
example code and didn't fully check the consistency of the output. Also,
thanks for pointing out the mistake in computing the neighbor rank. I am
attaching a fixed version.
Best
Joseph
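For context, the usual shape of such a neighbor fix in a ring-style exchange is a modulo wrap. This is a hypothetical sketch, not Joseph's attached code:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Hypothetical: without the modulo, rank size-1 would address the
       nonexistent rank `size`. */
    int neighbor = (rank + 1) % size;
    printf("[%d] neighbor is %d\n", rank, neighbor);

    MPI_Finalize();
    return 0;
}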
On 08/25/2016 03:11 PM, Gilles Gouaillardet wrote:
Joseph,
I also noted that the MPI_Info key "alloc_shared_noncontig" is unused.
I do not know whether it is necessary here, but if you do want to use
it, it should be passed to MPI_Win_create_dynamic when the window is created.
Cheers,
Gilles
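As an illustration of Gilles' suggestion, here is a minimal sketch (my own, not code from the thread) of passing the info key once, at window creation. Note that MPI info keys are only hints, and "alloc_shared_noncontig" is defined by the standard for MPI_Win_allocate_shared, so a dynamic window is free to ignore it:

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Win win;
    MPI_Info info;

    MPI_Init(&argc, &argv);

    /* Create the hint and pass it at window creation; the window keeps
       its own copy, so the info object can be freed right away. */
    MPI_Info_create(&info);
    MPI_Info_set(info, "alloc_shared_noncontig", "true");
    MPI_Win_create_dynamic(info, MPI_COMM_WORLD, &win);
    MPI_Info_free(&info);

    /* ... MPI_Win_attach / communication ... */

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}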
On Thursday, August 25, 2016, Gilles Gouaillardet <gilles.gouaillar...@gmail.com> wrote:
Joseph,
at first glance, there is memory corruption (!)
the first printf should be 0 -> 100, instead of 0 -> 3200
this is very odd because nelems is const, and the compiler might not even
allocate this variable.
I also noted some counter-intuitive stuff in your test program.
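Spelling out Gilles' diagnosis: if printf reports a different value for a const local than the one it was initialized with, something wrote past a buffer into that stack slot. A contrived, deliberately broken illustration (mine, not from the thread; the out-of-bounds write is intentional):

#include <stdio.h>

int main(void)
{
    const int nelems = 100;
    int buf[4];

    /* Undefined behavior: with an unlucky stack layout this write can
       land on nelems and change what printf reports. A compiler may
       also constant-fold nelems and still print 100, which is why
       Gilles notes it "might not even allocate this variable". */
    buf[4] = 3200;

    printf("nelems = %d\n", nelems);  /* may print 3200 instead of 100 */
    return 0;
}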
All,
It seems there is a regression in the handling of dynamic windows
between Open MPI 1.10.3 and 2.0.0. I am attaching a test case that works
fine with Open MPI 1.8.3 and fails with version 2.0.0 with the following
output:
===
[0] MPI_Get 0 -> 3200 on first memory region
[cl3fr1:7342] *** A
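The attached test case is not reproduced here. As a stand-in, this is a minimal sketch (my reconstruction, with made-up names and output, not Joseph's program) of the pattern the thread is about: attaching two separate memory regions to one dynamic window and reading each back with MPI_Get, which per the thread works in 1.10.3 and aborts under 2.0.0:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int nelems = 100;
    int rank;
    int *region1, *region2, *buf;
    MPI_Aint addr1, addr2;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Win_create_dynamic(MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    /* Attach two distinct regions to the same dynamic window. */
    region1 = malloc(nelems * sizeof(int));
    region2 = malloc(nelems * sizeof(int));
    for (int i = 0; i < nelems; ++i) {
        region1[i] = i;
        region2[i] = i + nelems;
    }
    MPI_Win_attach(win, region1, nelems * sizeof(int));
    MPI_Win_attach(win, region2, nelems * sizeof(int));

    /* For dynamic windows the target displacement is the absolute
       address on the target; broadcast rank 0's addresses. The Bcast
       also guarantees rank 0 has attached before anyone accesses. */
    MPI_Get_address(region1, &addr1);
    MPI_Get_address(region2, &addr2);
    MPI_Bcast(&addr1, 1, MPI_AINT, 0, MPI_COMM_WORLD);
    MPI_Bcast(&addr2, 1, MPI_AINT, 0, MPI_COMM_WORLD);

    buf = malloc(nelems * sizeof(int));

    MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
    MPI_Get(buf, nelems, MPI_INT, 0, addr1, nelems, MPI_INT, win);
    MPI_Win_unlock(0, win);
    printf("[%d] got %d elements from first memory region\n", rank, nelems);

    MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
    MPI_Get(buf, nelems, MPI_INT, 0, addr2, nelems, MPI_INT, win);
    MPI_Win_unlock(0, win);
    printf("[%d] got %d elements from second memory region\n", rank, nelems);

    MPI_Win_detach(win, region1);
    MPI_Win_detach(win, region2);
    MPI_Win_free(&win);
    free(region1);
    free(region2);
    free(buf);
    MPI_Finalize();
    return 0;
}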