Thank you for your comment, Ralph.
I understand your explanation, including the part about it being too late.
The ppr option is convenient for us because our environment is quite
heterogeneous.
(It gives us flexibility in the number of procs.)
I hope you do not deprecate ppr in a future release and can apply my proposal
someday.
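For readers outside the thread: ppr ("processes per resource") scales the process count with whatever resources each node actually has, which is why it suits a heterogeneous cluster. A minimal sketch of that counting idea (hypothetical helper, not Open MPI's actual implementation):

```python
# Hypothetical sketch of ppr ("processes per resource") counting.
# This is NOT Open MPI's implementation; it only illustrates why ppr
# gives flexibility on heterogeneous nodes.

def ppr_proc_count(ppr, resource_counts):
    """Total processes launched for a ppr:N:<resource> style mapping.

    ppr             -- processes to place on each resource instance
    resource_counts -- per-node count of the target resource
                       (e.g. sockets per node, which may differ per node)
    """
    return sum(ppr * count for count in resource_counts)

# Example: 2 procs per socket, three nodes with 2, 4, and 2 sockets.
print(ppr_proc_count(2, [2, 4, 2]))  # -> 16
```

The point is that the proc count follows the hardware automatically, instead of being fixed with -np.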
I'm afraid it is too late for 1.7.4, as I have locked that down barring any
last-second smoke test failures. I'll give this some thought for 1.7.5, but I'm
a little leery of the proposed change. The problem is that ppr comes in through a
different MCA param than the "map-by" param, and hence we can
Hi Ralph, it seems you are rounding the final turn to release 1.7.4!
I hope this will be my final request for openmpi-1.7.4 as well.
I mostly use rr_mapper but sometimes use ppr_mapper. I have a simple
request to improve its usability. Namely, I propose to
remove the redefining-policy check
Kewl - thanks!
On Jan 27, 2014, at 4:08 PM, tmish...@jcity.maeda.co.jp wrote:
>
>
> Thanks, Ralph. I quickly checked the fix. It worked fine for me.
>
> Tetsuya Mishima
>
>> I fixed that in today's final cleanup
>>
>> On Jan 27, 2014, at 3:17 PM, tmish...@jcity.maeda.co.jp wrote:
>>
>>
>>
> As for the NEWS - it is actually already correct. We default to map-by
> core, not slot, as of 1.7.4.

Is it correct? As far as I can tell from the source code, map-by slot is used if
np <= 2.
[mishima@manage openmpi-1.7.4rc2r30425]$ cat -n
orte/mca/rmaps/base/rmaps_base_map_job.c
...
107
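The policy selection being questioned here can be paraphrased as follows (a hypothetical Python sketch of the behavior described in this thread, not the actual C code in rmaps_base_map_job.c):

```python
# Sketch of the default mapping-policy selection discussed in this
# thread: map-by slot for very small jobs, map-by core otherwise.
# Paraphrased from the email exchange, NOT the actual Open MPI source.

def default_mapping_policy(np):
    """Pick a default mapping when the user gave no -map-by option."""
    if np <= 2:
        return "byslot"   # small jobs: map by slot
    return "bycore"       # otherwise: map by core (the 1.7.4 default)

print(default_mapping_policy(2))  # -> byslot
print(default_mapping_policy(8))  # -> bycore
```

If this paraphrase is right, the NEWS statement and the code agree only for np > 2, which is the discrepancy Tetsuya is pointing at.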
No need to cross-post to both lists; I'm just replying to the users list since
this is a user's-level question.
Unfortunately, Open MPI's level of thread support still isn't great.
Also, it sounds like you have a 100% threaded problem; MPI may not be the best
tool for you. MPI is more about in
We *do* still have a problem in the mpi_f08 module that we probably won't fix
before 1.7.4 is released. Here's the ticket:
https://svn.open-mpi.org/trac/ompi/ticket/4157
Craig has a suggested patch, but a) I haven't had time to investigate it yet,
and b) we believe that, at least so far, t
I've fixed the reporting flag - thanks!
As for the NEWS - it is actually already correct. We default to map-by core,
not slot, as of 1.7.4. However, if cpus-per-proc is given, we should probably
fall back to map-by slot, so I'll make that change
On Jan 26, 2014, at 3:02 PM, tmish...@jcity.mae
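The fallback Ralph proposes (default to map-by slot whenever cpus-per-proc is given) would amount to something like this hypothetical sketch, not the actual implementation:

```python
# Hypothetical sketch of the fallback Ralph describes: when
# cpus-per-proc is set, default to map-by slot instead of map-by core.
# NOT the actual Open MPI implementation.

def default_mapping_policy(cpus_per_proc=1):
    if cpus_per_proc > 1:
        return "byslot"  # leave room for the extra cpus each proc needs
    return "bycore"      # normal 1.7.4 default

print(default_mapping_policy())                 # -> bycore
print(default_mapping_policy(cpus_per_proc=4))  # -> byslot
```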
On 01/27/2014 03:46 PM, Jeff Squyres (jsquyres) wrote:
There have been a LOT of changes in the Fortran since we made rc1; we should
probably make rc2.
In the meantime, can you try with the latest 1.7 nightly snapshot?
http://www.open-mpi.org/nightly/v1.7/
That piece of the code looks the
There have been a LOT of changes in the Fortran since we made rc1; we should
probably make rc2.
In the meantime, can you try with the latest 1.7 nightly snapshot?
http://www.open-mpi.org/nightly/v1.7/
On Jan 27, 2014, at 9:28 AM, Åke Sandgren wrote:
> Hi!
>
> I just started trying to bui
Hi!
I just started trying to build 1.7.4rc1 with the new Pathscale EkoPath5
compiler and stumbled onto this.
When building without --enable-mpi-f08-subarray-prototype I get into
problems with ompi/mpi/fortran/use-mpi-f08/mpi-f-interfaces-bind.h
It defines
subroutine ompi_comm_create_keyval
Hi,
On 27.01.2014 at 14:14, Christoph Niethammer wrote:
> I am maintaining several Open MPI installations from the 1.6 and 1.7 series
> and different compilers.
...and installed in different locations I assume.
> Open MPI is built with torque support and shared and static bindings.
So you h
Hello,
I am maintaining several Open MPI installations from the 1.6 and 1.7 series and
different compilers.
Open MPI is built with torque support and shared and static bindings.
Different Torque installations are present and managed via the modules
environment.
Would it be possible to switch the