Thanks Ben and Lisandro.
You are right, my comparison based on print() output was misleading in terms
of precision, so I probably didn't copy enough decimal places :)
I will also try skipping the datatype specification, letting mpi4py select the
datatype, and see what happens.
On Tue, May 22, 2018 at
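For reference, a minimal sketch of letting mpi4py infer the datatype from the
NumPy buffer rather than specifying it explicitly; the array contents and the
Allreduce call below are placeholders, not the original program1.py/program2.py:

# sketch: pass bare NumPy arrays so mpi4py derives the MPI datatype itself
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

data = (rank + 1) * np.arange(10, dtype=np.float64)
total = np.empty(10, dtype=np.float64)

# passing the arrays alone lets mpi4py pick MPI.DOUBLE from data.dtype,
# instead of writing e.g. [data, MPI.DOUBLE] explicitly
comm.Allreduce(data, total, op=MPI.SUM)

if rank == 0:
    print(total)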
The beauty of floating point! Indeed, this is just a precision problem.
Using 16 significant digits in program2.py produces the same result as in
program1.py.
@Jorge Pretty good example for your Numerical Calculus teaching, for those
kids that ask over and over again "What's the point of these na
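As a rough illustration of the print() pitfall (made-up values, not the
original data): NumPy's default array printing rounds to about 8 significant
digits, so two results can look identical until you ask for 16-17 digits.

import numpy as np

a = np.float64(1.0) + 1e-12   # tiny difference introduced on purpose
b = np.float64(1.0)

# default array printing rounds the values, so both can show up as [1.]
print(np.array([a]), np.array([b]))

# formatting with 17 significant digits reveals the difference
print("%.17g  %.17g" % (a, b))
print(a == b)   # False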
PS. No need to: apt-get install libnuma-dev resolved the issue. The
diagnostics and advice in the output were very helpful. Thanks!
> On May 23, 2018, at 19:23, Alexander Supalov
> wrote:
>
> OK, I'll send it directly to your business address then. :)
>
> On Wed, May 23, 2018 at 6:59 PM, Jeff Squyres (jsq
OK, I'll send it directly to your business address then. :)
On Wed, May 23, 2018 at 6:59 PM, Jeff Squyres (jsquyres) wrote:
> Any way you want to send it is fine. :-)
>
> > On May 23, 2018, at 12:06 PM, Alexander Supalov <
> alexander.supa...@gmail.com> wrote:
> >
> > Hi Jeff,
> >
> > Sure. Thi
Any way you want to send it is fine. :-)
> On May 23, 2018, at 12:06 PM, Alexander Supalov
> wrote:
>
> Hi Jeff,
>
> Sure. This list will not accept messages larger than 250 KiB or so, however.
> Where else can I send the archive with the data you requested? It's just a
> tad bigger.
>
> B
I feel a little funny posting this, but I have observed this problem over
three different versions of Open MPI (1.10.2, 2.0.3, 3.0.0) and have refrained
from asking about it before now because we always had a workaround. That may
not be the case now, and I feel like I’m missing something obvio
Hi Jeff,
Sure. This list will not accept messages larger than 250 KiB or so,
however. Where else can I send the archive with the data you requested?
It's just a tad bigger.
Best regards.
Alexander
On Wed, May 23, 2018 at 5:24 PM, Jeff Squyres (jsquyres) wrote:
> Alexander --
>
> Can you provi
We had a similar issue a few months back. After investigation it turned out
to be related to NUMA balancing [1] being enabled by default on recent
releases of Linux-based OSes.
In our case, turning off NUMA balancing fixed most of the performance
inconsistencies we had. You can check its status in /proc
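As an aside (not from the original post), on most Linux kernels the switch is
exposed as kernel/numa_balancing; a quick check under that assumption might
look like:

# sketch: report whether automatic NUMA balancing is currently enabled
# (assumes the usual sysctl path; adjust if your distribution differs)
from pathlib import Path

path = Path("/proc/sys/kernel/numa_balancing")
if path.exists():
    state = path.read_text().strip()
    print("NUMA balancing is", "enabled" if state == "1" else "disabled")
else:
    print("numa_balancing sysctl not found; the kernel may not support it")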
Alexander --
Can you provide some more detail? The information listed here would be helpful:
https://www.open-mpi.org/community/help/
> On May 23, 2018, at 7:38 AM, Alexander Supalov
> wrote:
>
> Hi everybody,
>
> I've observed the process binding subpackage refusing to build and
This is very interesting. Thanks for providing a test code. I have two
suggestions for understanding this better.
1) Use MPI_Win_allocate_shared instead and measure the difference with and
without alloc_shared_noncontig. I think this info key is not available for
MPI_Win_allocate because MPI_Win_sh
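In case it helps, a hedged mpi4py sketch of that comparison (the original test
case is presumably C; the element count and timing here are placeholders, and
it must run within a single node since the window is shared memory):

# sketch: fill a shared window with and without alloc_shared_noncontig
from mpi4py import MPI
import numpy as np
import time

comm = MPI.COMM_WORLD
n = 10_000_000  # int32 elements per process (placeholder size)

def timed_fill(noncontig):
    info = MPI.Info.Create()
    info.Set("alloc_shared_noncontig", "true" if noncontig else "false")
    win = MPI.Win.Allocate_shared(n * 4, 4, info=info, comm=comm)
    buf, _ = win.Shared_query(comm.Get_rank())
    arr = np.frombuffer(buf, dtype=np.int32)
    comm.Barrier()
    t0 = time.time()
    arr[:] = comm.Get_rank()   # local writes, as in the reported problem
    elapsed = time.time() - t0
    win.Free()
    info.Free()
    return elapsed

for flag in (False, True):
    t = timed_fill(flag)
    if comm.Get_rank() == 0:
        print("alloc_shared_noncontig=%s: %.3f s" % (flag, t))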
Odd. I wonder if it is something affected by your session directory. It might
be worth moving the segment to /dev/shm. I don’t expect it will have an impact
but you could try the following patch:
diff --git a/ompi/mca/osc/sm/osc_sm_component.c b/ompi/mca/osc/sm/osc_sm_component.c
index f7211cd
I tested with Open MPI 3.1.0 and Open MPI 3.0.0, both compiled with GCC
7.1.0 on the Bull Cluster. I only ran on a single node but haven't
tested what happens if more than one node is involved.
Joseph
On 05/23/2018 02:04 PM, Nathan Hjelm wrote:
What Open MPI version are you using? Does this h
What Open MPI version are you using? Does this happen when you run on a single
node or multiple nodes?
-Nathan
> On May 23, 2018, at 4:45 AM, Joseph Schuchart wrote:
>
> All,
>
> We are observing some strange/interesting performance issues in accessing
> memory that has been allocated throug
Hi everybody,
I've observed the process binding subpackage refusing to build, and hence not
working, on my perfectly sound Ubuntu 16.04 LTS system with Open MPI 2.0.2
and 2.1.0. Is this a known old issue? Any hint would be appreciated.
Best regards.
Alexander
All,
We are observing some strange/interesting performance issues in
accessing memory that has been allocated through MPI_Win_allocate. I am
attaching our test case, which allocates memory for 100M integer values
on each process both through malloc and MPI_Win_allocate and writes to
the local
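For readers without the attachment, a rough mpi4py analogue of the described
comparison (the actual test case is presumably C; the write pattern and timing
below are assumptions):

# sketch: compare writing to heap memory vs. memory from MPI_Win_allocate
from mpi4py import MPI
import numpy as np
import time

comm = MPI.COMM_WORLD
n = 100_000_000  # 100M int32 values per process, as in the description

# plain allocation (stand-in for malloc in the C test case)
heap = np.empty(n, dtype=np.int32)
t0 = time.time()
heap[:] = 1
t_heap = time.time() - t0

# window allocation through MPI
win = MPI.Win.Allocate(n * 4, 4, comm=comm)
wmem = np.frombuffer(win.tomemory(), dtype=np.int32)
t0 = time.time()
wmem[:] = 1
t_win = time.time() - t0

if comm.Get_rank() == 0:
    print("heap write: %.3f s, window write: %.3f s" % (t_heap, t_win))

win.Free()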