an indexed datatype (i.e. not defining USE_INDEXED_DATATYPE
in the gather_test.c file), the bug disappears. Using the indexed
datatype with LAM MPI 7.1.1 or MPICH2, we do not reproduce the bug
either, so it does seem to be an Open MPI issue.
--
Best regards,
Yvan
contain an
obvious mistake that I am missing?
I initially thought of possible alignment issues, but saw nothing in the
standard that requires that, and the "malloc"-based variant exhibits the
same behavior, while I assume alignment to 64 bits for allocated arrays
is the default.
Best regards,
submitted the issue sooner...
Best regards,
Yvan Fournier
> Message: 5
> Date: Sat, 5 Nov 2016 22:08:32 +0900
> From: Gilles Gouaillardet
> To: Open MPI Users
> Subject: Re: [OMPI users] False positives and even failure with Open
> MPI and memchecker
> Message-ID:
Hello,
I am not sure your issues are related, and I have not tested this
version of ICS, but I have actually had issues with an Intel compiler
build of Open MPI 1.4.3 on a cluster using Westmere processors and
Infiniband (Qlogic), using a Debian distribution, with our in-house code
(www.code-satur
then XE-6 machine. I am interested in trying to improve, or at least
investigate, performance on Ethernet clusters, and I may have a few
suggestions for options you can test, but
this conversation should probably move to the Code_Saturne forum
(http://code-saturne.org), as we will go into some options of our linear
solvers which are specific to that code, not to Open MPI.
Best regards,
Yvan Fournier
57-58). It works with LAM 7.1.1 and MPICH2, but fails under Open MPI.
This is a (much) simplified extract from a part of Code_Saturne's
FVM library (http://rd.edf.com/code_saturne/), which otherwise works
fine on most data using Open MPI.
Best regards,
Yvan Fournier
hangs (in more complete code, after
writing data).
I encounter the same problem with Open MPI 1.2.6 and MPICH2 1.0.7, so
I may have misread the documentation, but I suspect a ROMIO bug.
Best regards,
Yvan Fournier
experimenting with MPI-IO using explicit offsets,
individual pointers, and shared pointers, and have workarounds,
so I'll just avoid shared pointers on NFS.
Best regards,
Yvan Fournier
EDF R&D
On Sat, 2008-08-16 at 08:19 -0400, users-requ...@open-mpi.org wrote:
> D
gths[0]), MPI_BYTE, &status);
#if USE_FILE_TYPE
MPI_Type_free(&file_type);
#endif
-
Using the indexed datatype as the file type, I can reproduce the bug with
both versions 1.3.0 and 1.3.2 of Open MPI.
Best regards,
Yvan Fournier
#include
#include
#include
#include
#define
i_isend_irecv.c:7)
The first 2 warnings seem to relate to initialization, so they are not a
big issue, but the last one occurs whenever I use MPI_Isend, so it is a
more important issue.
Using a version built without --enable-memchecker, I also have the two
initialization warnings, but not the
Hello,
Sorry, I forgot the attached test case in my previous message... :(
Best regards,
Yvan Fournier
- Forwarded message -
From: "yvan fournier"
To: users@lists.open-mpi.org
Sent: Sunday January 7 2018 01:43:16
Subject: False positives with OpenMPI and memchecker
Hello,
...
Sorry for the (too-late) report...
Yvan
- Original message -
From: "yvan fournier"
To: users@lists.open-mpi.org
Sent: Sunday January 7 2018 01:52:04
Subject: Re: False positives with OpenMPI and memchecker (with attachment)
Hello,
Sorry, I forgot the attached test case in m
require a few extra hours of work.
If the bug is not reproduced in a simpler manner first, I will try
to build a simple program reproducing the bug within a week or 2,
but in the meantime, I just want to confirm Scott's observation
(hoping it is the same bug).
Best regards,
Yvan Fournier
file), the bug disappears.
--
Best regards,
Yvan Fournier
ompi_datatype_bug.tar.gz
Description: application/compressed-tar
ompi_info output.
I have also encountered the bug on the "parent" case (similar, but
more complex) on my work machine (dual Xeon under Debian Sarge),
but I'll check this simpler test on it just in case.
Best regards,
Yvan Fournier
On Sun, 2006-07-09 at 12:00 -0400, users-requ.