Siegmar,
https://github.com/open-mpi/ompi/commit/638a59adf35c1a7d2fdd8e8a86f5096bf7109d9d
has not yet been back-ported to the v2.x series.
I made PR #750 (https://github.com/open-mpi/ompi-release/pull/750) for that.
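Until that PR is merged, the commit can be cherry-picked by hand (a sketch,
assuming you build from a git clone rather than a tarball; minor conflicts
are possible):

  git remote add ompi https://github.com/open-mpi/ompi
  git fetch ompi
  git cherry-pick 638a59adf35c1a7d2fdd8e8a86f5096bf7109d9d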
Cheers,
Gilles
On Tue, Nov 10, 2015 at 12:19 AM, Siegmar Gross
wrote:
> Hi,
>
>
Received from Gilles Gouaillardet on Mon, Nov 09, 2015 at 07:20:51PM EST:
> Orion and Lev,
>
> here is the minimal patch that makes mpi4py tests happy again
>
> there might not be a v1.10.2, so you might have to manually apply
> that patch until v2.0.0
Confirming that the scatter/gather mpi4py tests pass with this patch applied.
Orion and Lev,
here is the minimal patch that makes mpi4py tests happy again
there might not be a v1.10.2, so you might have to manually apply that
patch until v2.0.0
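(To apply it manually: assuming the diff is saved as, say, zero-size.diff
at the top of the openmpi-1.10.1 source tree, something like

  patch -p1 < zero-size.diff

followed by a rebuild should do; the exact -p level depends on how the
diff was generated, and the file name here is just an example.)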
Cheers,
Gilles
On 11/10/2015 8:45 AM, Gilles Gouaillardet wrote:
Orion and Lev,
I will have a look at this.
It seems we backported half of some changes, and zero-size messages are
no longer handled correctly.
Orion and Lev,
I will have a look at this.
It seems we backported half of some changes, and zero-size messages are
no longer handled correctly.
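If you want a quick check outside the test suite, a minimal reproducer
along these lines (my sketch, not the actual mpi4py test; assumes mpi4py
and numpy are installed) should exercise the zero-size path:

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# every rank contributes zero elements, so all counts are zero
sendbuf = np.zeros(0, dtype='i')
recvbuf = np.zeros(0, dtype='i')
counts = [0] * size
displs = [0] * size

# a zero-count vector gather: logically a no-op, it should complete cleanly
comm.Gatherv([sendbuf, 0, MPI.INT],
             [recvbuf, (counts, displs), MPI.INT], root=0)

if rank == 0:
    print("zero-size Gatherv OK")

run it with e.g. mpirun -np 2 python gatherv_zero.py (the file name is
just an example).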
Cheers,
Gilles
On 11/10/2015 8:43 AM, Lev Givon wrote:
Received from Orion Poplawski on Mon, Nov 09, 2015 at 06:36:05PM EST:
We're seeing test failures after bumping to 1.10.1 in Fedora (see below). Is
anyone else seeing this? Any suggestions for debugging?
Received from Orion Poplawski on Mon, Nov 09, 2015 at 06:36:05PM EST:
> We're seeing test failures after bumping to 1.10.1 in Fedora (see below). Is
> anyone else seeing this? Any suggestions for debugging?
I see similar errors - you might want to mention it on the
mpi...@googlegroups.com mailing list.
We're seeing test failures after bumping to 1.10.1 in Fedora (see below). Is
anyone else seeing this? Any suggestions for debugging?
======================================================================
ERROR: testGatherv (test_cco_nb_vec.TestCCOVecSelf)
I meant different from the current shell, not different for different
processes, sorry.
Also, I am aware of -x, but it's not the right solution in this case because
(a) it's manual, and (b) it appears that anything set in .bashrc but unset in
the current shell would still be set for the program, which I do not want.
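(For reference, the -x usage I mean is along these lines, per mpirun's man
page:

  mpirun -x LD_LIBRARY_PATH -x FOO=bar -np 4 ./my_prog

which exports LD_LIBRARY_PATH from the launching shell and sets FOO=bar for
every rank, but each variable has to be listed by hand; ./my_prog is just a
placeholder.)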
Hi,
Is there any way with Open MPI to propagate the current shell's environment to
the parallel program? I am looking for an equivalent of the way MPICH handles
environment variables
(https://wiki.mpich.org/mpich/index.php/Frequently_Asked_Questions#Q:_How_do_I_pass_environment_variables_to_the_
Hello
I'm still observing abnormal behavior of 'mpirun' in the presence of
failures. I performed some tests using 32 physical machines, running a
NAS benchmark with just one MPI process per machine.
I inject faults by shutting down the machines in two different ways:
1) logging into the machine
Hi,
today I tried to build openmpi-v2.x-dev-650-gb0365f9 on my
machines (Solaris 10 Sparc, Solaris 10 x86_64, and openSUSE
Linux 12.1 x86_64) with gcc-5.1.0 and Sun C 5.13 and I got the
following error on all machines with both compilers.
linpc1 openmpi-v2.x-dev-650-gb0365f9-Linux.x86_64.64_gcc