Fixed on master. The fix will be in 2.0.2 but you can apply it to 2.0.0 or 2.0.1:
https://github.com/open-mpi/ompi/commit/e53de7ecbe9f034ab92c832330089cf7065181dc.patch
-Nathan
On Aug 25, 2016, at 07:31 AM, Joseph Schuchart wrote:
Gilles,
Thanks for your fast reply. I did some last minute changes to the example code
and didn't fully check the consistency of the output.
There is a bug in the code that keeps the dynamic regions sorted. Should have
it fixed shortly.
-Nathan
On Aug 25, 2016, at 07:46 AM, Christoph Niethammer wrote:
Hello,
The error is not 100% reproducible for me every time, but it seems to disappear
entirely if one excludes the
-mca osc ^rdma
or
-mca btl ^openib
component.
Hi Joseph,
Thanks for reporting this problem.
There's an issue now (#2012)
https://github.com/open-mpi/ompi/issues/2012
to track this.
Howard
2016-08-25 7:44 GMT-06:00 Christoph Niethammer :
> Hello,
>
> The error is not 100% reproducible for me every time, but it seems to
> disappear entirely if one excludes the -mca osc ^rdma or -mca btl ^openib
> component.
The IOF fix PR for v2.0.1 was literally just merged a few minutes ago; it
wasn't in last night's tarball.
> On Aug 25, 2016, at 10:59 AM, r...@open-mpi.org wrote:
>
> ??? Weird - can you send me an updated output of that last test we ran?
>
>> On Aug 25, 2016, at 7:51 AM, Jingchao Zhang wrote:
$ grep stdin_target orte/runtime/orte_globals.c
635:job->stdin_target = 0;
Recompiled with --enable-debug.
Same test case: 2 nodes, 10 cores per node. Rank 0 and the mpirun command are on
the same node.
$ mpirun -display-devel-map --mca iof_base_verbose 100 ./a.out < test.in &>
debug_info2.txt
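For context, a minimal sketch of the kind of program the redirection above exercises; this is an assumption about what ./a.out does (rank 0 consuming the stdin that mpirun forwards to the stdin target), not the poster's actual code:

/* sketch_stdin.c - hypothetical stand-in for ./a.out: rank 0 reads the
 * input that mpirun forwards to the stdin target (rank 0 by default). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    long nlines = 0;
    char line[4096];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* only the stdin target rank sees the redirected file */
        while (fgets(line, sizeof(line), stdin) != NULL)
            nlines++;
        printf("rank 0 read %ld lines from stdin\n", nlines);
    }

    /* make sure everyone waits for rank 0 before exiting */
    MPI_Barrier(MPI_COMM_WORLD);
    MPI_Finalize();
    return 0;
}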
??? Weird - can you send me an updated output of that last test we ran?
> On Aug 25, 2016, at 7:51 AM, Jingchao Zhang wrote:
>
> Hi Ralph,
>
> I saw the pull request and did a test with v2.0.1rc1, but the problem
> persists. Any ideas?
>
> Thanks,
>
> Dr. Jingchao Zhang
> Holland Computing Center
Hi Ralph,
I saw the pull request and did a test with v2.0.1rc1, but the problem persists.
Any ideas?
Thanks,
Dr. Jingchao Zhang
Holland Computing Center
University of Nebraska-Lincoln
402-472-6400
From: users on behalf of r...@open-mpi.org
Sent: Wednesday,
Hello,
The error is not 100% reproducible for me every time, but it seems to disappear
entirely if one excludes the
-mca osc ^rdma
or
-mca btl ^openib
component.
The error is present in 2.0.0 and also 2.0.1rc1.
Best
Christoph Niethammer
- Original Message -
From: "Joseph Schuchart"
To: user
Gilles,
Thanks for your fast reply. I did some last minute changes to the
example code and didn't fully check the consistency of the output. Also,
thanks for pointing out the mistake in computing the neighbor rank. I am
attaching a fixed version.
Best
Joseph
On 08/25/2016 03:11 PM, Gilles Gouaillardet wrote:
Joseph,
I also noted that the MPI_Info "alloc_shared_noncontig" is unused.
I do not know whether this is necessary or not, but if you do want to use
it, it should be used once with MPI_Win_create_dynamic.
Cheers,
Gilles
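For illustration, a minimal sketch (not from the thread) of creating an info object and passing it to MPI_Win_create_dynamic as suggested; whether the "alloc_shared_noncontig" key has any effect on a dynamic window is implementation-dependent:

/* sketch_info.c - hypothetical example of passing an MPI_Info object to
 * MPI_Win_create_dynamic; the info key below may simply be ignored. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Info info;
    MPI_Win win;

    MPI_Init(&argc, &argv);

    MPI_Info_create(&info);
    MPI_Info_set(info, "alloc_shared_noncontig", "true");

    MPI_Win_create_dynamic(info, MPI_COMM_WORLD, &win);
    MPI_Info_free(&info);   /* hints are applied at window creation time */

    /* ... MPI_Win_attach / RMA operations would go here ... */

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}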
On Thursday, August 25, 2016, Gilles Gouaillardet <
gilles.gouaillar...@gmail.c
Joseph,
at first glance, there is a memory corruption (!):
the first printf should be 0 -> 100, instead of 0 -> 3200.
This is very odd because nelems is const, and the compiler might not even
allocate this variable.
I also noted some counter-intuitive stuff in your test program
(which still looks
All,
It seems there is a regression in the handling of dynamic windows
between Open MPI 1.10.3 and 2.0.0. I am attaching a test case that works
fine with Open MPI 1.8.3 and fails with version 2.0.0 with the following
output:
===
[0] MPI_Get 0 -> 3200 on first memory region
[cl3fr1:7342] *** A
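For readers without the attachment, a minimal sketch of this kind of test case; the names, sizes, and two-region layout are assumptions, not the original program. Each rank attaches two separately allocated regions to a dynamic window and reads the first region of its neighbor with MPI_Get.

/* sketch_dynwin.c - hypothetical reconstruction of a dynamic-window test:
 * several attached regions per rank (the "dynamic regions" the osc
 * component has to track), plus an MPI_Get on the neighbor's first region. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int nelems = 100;                 /* expect "0 -> 100" below */
    int rank, size, next, prev, i;
    double *region1, *region2, *buf;
    MPI_Aint myaddr, raddr;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    next = (rank + 1) % size;
    prev = (rank - 1 + size) % size;

    MPI_Win_create_dynamic(MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    /* attach two separately allocated regions to the dynamic window */
    region1 = malloc(nelems * sizeof(double));
    region2 = malloc(nelems * sizeof(double));
    for (i = 0; i < nelems; i++) { region1[i] = rank; region2[i] = -rank; }
    MPI_Win_attach(win, region1, nelems * sizeof(double));
    MPI_Win_attach(win, region2, nelems * sizeof(double));

    /* tell prev where our first region lives; learn the same from next */
    MPI_Get_address(region1, &myaddr);
    MPI_Sendrecv(&myaddr, 1, MPI_AINT, prev, 0,
                 &raddr,  1, MPI_AINT, next, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    /* read the neighbor's first region; the displacement on a dynamic
     * window is the absolute address obtained via MPI_Get_address */
    buf = malloc(nelems * sizeof(double));
    printf("[%d] MPI_Get 0 -> %d on first memory region\n", rank, nelems);
    MPI_Win_lock(MPI_LOCK_SHARED, next, 0, win);
    MPI_Get(buf, nelems, MPI_DOUBLE, next, raddr, nelems, MPI_DOUBLE, win);
    MPI_Win_unlock(next, win);

    MPI_Win_detach(win, region1);
    MPI_Win_detach(win, region2);
    MPI_Win_free(&win);
    free(region1); free(region2); free(buf);
    MPI_Finalize();
    return 0;
}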