Hi,
In addition to Ralph's explanation: you can change this behavior with the
MCA parameter orte_set_default_slots.
For example, setting it to "none" disables the automatic detection of the
slot count, which makes the behavior compatible with openmpi-1.6.X.
Regards,
Tetsuya Mishima
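A minimal sketch of how the MCA parameter described above might be passed on the command line; the hostfile name, process count, and program name are placeholders, not taken from the original messages:

```
# Disable automatic slot-count detection (1.6.x-compatible behavior).
# "myhosts" and "./a.out" are illustrative placeholders.
mpirun --mca orte_set_default_slots none -hostfile myhosts -np 4 ./a.out
```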
There is no problem with using numerical host names - we don’t care so long as
your system can resolve them. The difference you are seeing relates to a change
in behavior created during the 1.7 series. If you don’t specify the #slots on a
host, then we automatically set it to the number of detec
Hi,
I've been running with 1.6.5 for some time and am now trying out 1.8.8 (I'll
get to 1.10 soon).
I have found a difference in behavior and I'm wondering what is happening.
For special reasons, I have a host file which uses index values as logical
names:
0
1
2
3
These are prope
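For comparison, a hostfile sketch that pins the slot count explicitly, which would sidestep the 1.7-series auto-detection the reply describes; the slot counts here are illustrative assumptions, not from the original hostfile:

```
0 slots=1
1 slots=1
2 slots=1
3 slots=1
```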
Hello,
The benefits of 'using' the MPI module over 'including' MPIF.H are clear
because of the sanity checks it performs, and I recently did some
testing with the module that seems to uncover a possible bug or design
flaw in OpenMPI's handling of arrays in user-defined data types.
Attached a
Mike Dubman writes:
> these flags are available in master and v1.10 branches and make sure that
> rank-to-core allocation is done starting from the cpu socket closest to the HCA.
I'm confused by the 1.8.8 below, then. I haven't tried 1.10 since it
breaks binary compatibility and seemed to have core bin
More specifically: the 1.10.x series was the follow-on to 1.8.8. v1.10.0 is
available now; v1.10.1 will be available soon (we already have an rc for it;
another rc is coming soon).
> On Oct 7, 2015, at 7:30 AM, Gilles Gouaillardet
> wrote:
>
> Georg,
>
> there won't be a 1.8.9
>
> Cheers,
I’m a little nervous about this one, Gilles. It’s doing a lot more than just
addressing the immediate issue, and I’m concerned about potential
side effects that we don’t fully uncover prior to release.
I’d suggest a two-pronged approach:
1. use my alternative method for 1.10.1 to solve the
On 7 October 2015 at 14:54, simona bellavista wrote:
> I have written a small code in python 2.7 for launching 4 independent
> processes on the shell via subprocess, using the library mpi4py. I am getting
> ORTE_ERROR_LOG and I would like to understand where it is happening and why.
>
> This is my
I have written a small code in python 2.7 for launching 4 independent
processes on the shell via subprocess, using the library mpi4py. I am
getting ORTE_ERROR_LOG and I would like to understand where it is happening
and why.
This is my code:
#!/usr/bin/python
import subprocess
import re
import sys
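The original code is truncated above, so here is a hedged sketch of the pattern the message describes: launching several independent processes from Python via subprocess. The worker commands are placeholders (the real code presumably invokes mpirun or similar), and the `launch` helper is hypothetical, not from the original:

```python
#!/usr/bin/python
# Sketch of launching independent shell processes via subprocess.
# The commands below are placeholders for whatever the original code ran.
import subprocess
import sys

def launch(commands):
    """Start each command concurrently and wait for all to finish."""
    procs = [subprocess.Popen(cmd) for cmd in commands]
    return [p.wait() for p in procs]

if __name__ == "__main__":
    # Four independent workers; replace with e.g. an mpirun invocation.
    cmds = [[sys.executable, "-c", "print(%d)" % i] for i in range(4)]
    print(launch(cmds))  # exit codes, [0, 0, 0, 0] on success
```

Note that each Popen here is fully independent; if the real workers each call mpirun, ORTE errors from one launch would not affect the others.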
Georg,
there won't be a 1.8.9
Cheers,
Gilles
On Wednesday, October 7, 2015, Georg Geiser wrote:
> Nathan,
>
> thanks for your rapid response. Do you plan to release 1.8.9?
> Actually, there is a bug tracking category for that version number. If so,
> please backport the fix.
>
> Best
>
>
Jeff,
there are quite a lot of changes, I did not update master yet (need extra
pairs of eyes to review this...)
so unless you want to make rc2 today and rc3 a week later, it is imho way
safer to wait for v1.10.2
Ralph,
any thoughts ?
Cheers,
Gilles
On Wednesday, October 7, 2015, Jeff Squyres
Nathan,
thanks for your rapid response. Do you plan to release 1.8.9?
Actually, there is a bug tracking category for that version number. If
so, please backport the fix.
Best
Georg
Am 02.10.2015 um 17:59 schrieb Nathan Hjelm:
Working on a fix now. Will be in master today then will move
Is this something that needs to go into v1.10.1?
If so, a PR needs to be filed ASAP. We were supposed to make the next 1.10.1
RC yesterday, but it slipped to today due to some last-second patches.
> On Oct 7, 2015, at 4:32 AM, Gilles Gouaillardet wrote:
>
> Marcin,
>
> here is a patch for the
Marcin,
here is a patch for the master; hopefully it fixes all the issues we
discussed.
I will make sure it applies fine against the latest 1.10 tarball tomorrow.
Cheers,
Gilles
On 10/6/2015 7:22 PM, marcin.krotkiewski wrote:
Gilles,
Yes, it seemed that all was fine with binding in the patched
Hi,
I tried to build openmpi-v2.x-dev-415-g5c9b192 and
openmpi-dev-2696-gd579a07 on my machines (Solaris 10 Sparc, Solaris 10
x86_64, and openSUSE Linux 12.1 x86_64) with gcc-5.1.0 and Sun C 5.13.
I got the following error on all platforms with gcc and with Sun C only
on my Linux machine. I've al