Re: [OMPI users] Is building with "--enable-mpi-thread-multiple" recommended?

2017-02-18 Thread Mark Dixon

On Fri, 17 Feb 2017, r...@open-mpi.org wrote:

Depends on the version, but if you are using something in the v2.x 
range, you should be okay with just one installed version


Thanks Ralph.

How good is MPI_THREAD_MULTIPLE support these days and how far up the 
wishlist is it, please?


We don't get many Open MPI-specific queries from users but, other than core 
binding, it seems to be the thing we get asked about the most (I normally 
point those people at MVAPICH2 or Intel MPI instead).


Cheers,

Mark
_______________________________________________
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users


[OMPI users] MPI_THREAD_MULTIPLE: Fatal error on MPI_Win_create

2017-02-18 Thread Joseph Schuchart

All,

I am seeing a fatal error with Open MPI 2.0.2 when requesting support for 
MPI_THREAD_MULTIPLE and afterwards creating a window using 
MPI_Win_create. I am attaching a small reproducer. The output I get is 
the following:


```
MPI_THREAD_MULTIPLE supported: yes
MPI_THREAD_MULTIPLE supported: yes
MPI_THREAD_MULTIPLE supported: yes
MPI_THREAD_MULTIPLE supported: yes
--------------------------------------------------------------------------
The OSC pt2pt component does not support MPI_THREAD_MULTIPLE in this
release.
Workarounds are to run on a single node, or to use a system with an RDMA
capable network such as Infiniband.
--------------------------------------------------------------------------
[beryl:10705] *** An error occurred in MPI_Win_create
[beryl:10705] *** reported by process [2149974017,2]
[beryl:10705] *** on communicator MPI_COMM_WORLD
[beryl:10705] *** MPI_ERR_WIN: invalid window
[beryl:10705] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
[beryl:10705] ***    and potentially your MPI job)
[beryl:10698] 3 more processes have sent help message help-osc-pt2pt.txt / mpi-thread-multiple-not-supported
[beryl:10698] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
[beryl:10698] 3 more processes have sent help message help-mpi-errors.txt / mpi_errors_are_fatal

```

I am running on a single node (my laptop). Both Open MPI and the 
application were compiled with GCC 5.3.0. Naturally, there is no 
Infiniband support available. Should I signal Open MPI that I am 
indeed running on a single node? If so, how can I do that? Can't Open MPI 
detect this automatically? The test succeeds if I only request 
MPI_THREAD_SINGLE.
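
(As an aside, the one-sided components that were actually built can be 
listed with ompi_info, e.g.:

ompi_info | grep "MCA osc"

which should show whether any component other than pt2pt is available.)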


Open MPI 2.0.2 was configured with only the --enable-mpi-thread-multiple 
and --prefix configure parameters. I am attaching the output of 
ompi_info.
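
For reference, that should correspond to a configure line of roughly 
(presumably, with the prefix shown in the attached ompi_info output):

./configure --prefix=/home/joseph/opt/openmpi-2.0.2 --enable-mpi-thread-multiple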


Please let me know if you need any additional information.

Cheers,
Joseph

--
Dipl.-Inf. Joseph Schuchart
High Performance Computing Center Stuttgart (HLRS)
Nobelstr. 19
D-70569 Stuttgart

Tel.: +49(0)711-68565890
Fax: +49(0)711-6856832
E-Mail: schuch...@hlrs.de

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>


int main(int argc, char **argv)
{
    int provided;

    /* Ask for full multi-threading support and report what was granted. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    printf("MPI_THREAD_MULTIPLE supported: %s\n",
           (provided == MPI_THREAD_MULTIPLE) ? "yes" : "no");

    /* Expose a single uint64_t through an RMA window. */
    MPI_Win win;
    char *base = malloc(sizeof(uint64_t));

    MPI_Win_create(base, sizeof(uint64_t), sizeof(uint64_t),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_free(&win);
    free(base);

    MPI_Finalize();

    return 0;
}
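
As an aside: MPI_Init_thread is allowed to return a lower threading level 
than requested. That is not what happens here (all four ranks report 
MPI_THREAD_MULTIPLE as provided), but a more defensive reproducer would 
bail out instead of continuing into MPI_Win_create. A minimal sketch, 
untested:

```
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    /* The thread-level constants are ordered, so '<' detects any downgrade. */
    if (provided < MPI_THREAD_MULTIPLE) {
        fprintf(stderr, "MPI_THREAD_MULTIPLE not provided, aborting\n");
        MPI_Abort(MPI_COMM_WORLD, EXIT_FAILURE);
    }
    MPI_Finalize();
    return 0;
}
```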

                Package: Open MPI joseph@beryl Distribution
               Open MPI: 2.0.2
 Open MPI repo revision: v2.0.1-348-ge291d0e
  Open MPI release date: Jan 31, 2017
               Open RTE: 2.0.2
 Open RTE repo revision: v2.0.1-348-ge291d0e
  Open RTE release date: Jan 31, 2017
                   OPAL: 2.0.2
     OPAL repo revision: v2.0.1-348-ge291d0e
      OPAL release date: Jan 31, 2017
                MPI API: 3.1.0
           Ident string: 2.0.2
                 Prefix: /home/joseph/opt/openmpi-2.0.2
Configured architecture: x86_64-unknown-linux-gnu
         Configure host: beryl
          Configured by: joseph
          Configured on: Wed Feb  1 11:03:54 CET 2017
         Configure host: beryl
               Built by: joseph
               Built on: Wed Feb  1 11:09:15 CET 2017
             Built host: beryl
             C bindings: yes
           C++ bindings: no
            Fort mpif.h: no
           Fort use mpi: no
      Fort use mpi size: deprecated-ompi-info-value
       Fort use mpi_f08: no
Fort mpi_f08 compliance: The mpi_f08 module was not built
 Fort mpi_f08 subarrays: no
          Java bindings: no
 Wrapper compiler rpath: runpath
             C compiler: gcc
    C compiler absolute: /usr/bin/gcc
 C compiler family name: GNU
     C compiler version: 5.4.1
           C++ compiler: g++
  C++ compiler absolute: /usr/bin/g++
          Fort compiler: none
      Fort compiler abs: none
        Fort ignore TKR: no
  Fort 08 assumed shape: no
     Fort optional args: no
         Fort INTERFACE: no
   Fort ISO_FORTRAN_ENV: no
      Fort STORAGE_SIZE: no
     Fort BIND(C) (all): no
     Fort ISO_C_BINDING: no
Fort SUBROUTINE BIND(C): no
      Fort TYPE,BIND(C): no
Fort T,BIND(C,name="a"): no
           Fort PRIVATE: no
         Fort PROTECTED: no
          Fort ABSTRACT: no
      Fort ASYNCHRONOUS: no
         Fort PROCEDURE: no
        Fort USE...ONLY: no
          Fort C_FUNLOC: no
Fort f08 using wrappers: no
        Fort MPI_SIZEOF: no
            C profiling: yes
          C++ profiling: no
  Fort mpif.h profiling: no
 Fort use mpi profiling: no
  Fort use mpi_f08 prof: no
         C++ exceptions: no
         Thread support: posix (MPI_THREAD_MULTIPLE: yes, OPAL support: yes,
                         OMPI progress: no, ORTE progress: yes, Eve

Re: [OMPI users] Is building with "--enable-mpi-thread-multiple" recommended?

2017-02-18 Thread r...@open-mpi.org
We have been making a concerted effort to resolve outstanding issues as the 
interest in threaded applications has grown. It should be pretty good now, but 
we do see occasional bug reports, so it isn’t perfect.

> On Feb 18, 2017, at 12:14 AM, Mark Dixon  wrote:
> 
> On Fri, 17 Feb 2017, r...@open-mpi.org wrote:
> 
>> Depends on the version, but if you are using something in the v2.x range, 
>> you should be okay with just one installed version
> 
> Thanks Ralph.
> 
> How good is MPI_THREAD_MULTIPLE support these days and how far up the 
> wishlist is it, please?
> 
> We don't get many openmpi-specific queries from users but, other than core 
> binding, it seems to be the thing we get asked about the most (I normally 
> point those people at mvapich2 or intelmpi instead).
> 
> Cheers,
> 
> Mark

Re: [OMPI users] MPI_THREAD_MULTIPLE: Fatal error on MPI_Win_create

2017-02-18 Thread Howard Pritchard
Hi Joseph

What OS are you using when running the test?

Could you try running with:

export OMPI_MCA_osc=^pt2pt

and

export OMPI_MCA_osc_base_verbose=10

This error message was added in this OMPI release because this part of the
code has known problems when used with multiple threads.
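
For reference, the same MCA settings can also be passed on the mpirun 
command line; assuming the test binary is called ./reproducer, something 
like:

mpirun -n 4 --mca osc ^pt2pt --mca osc_base_verbose 10 ./reproducer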



Joseph Schuchart wrote on Sat, 18 Feb 2017 at 04:02:

> All,
>
> I am seeing a fatal error with Open MPI 2.0.2 when requesting support for
> MPI_THREAD_MULTIPLE and afterwards creating a window using
> MPI_Win_create. I am attaching a small reproducer. The output I get is
> the following:
>
> [...]

Re: [OMPI users] Is building with "--enable-mpi-thread-multiple" recommended?

2017-02-18 Thread Michel Lesoinne
I am also a proponent of multiple-thread support, for many reasons:
 - code simplification
 - easier overlap of computation and communication, with fewer
synchronization points
 - the possibility of writing exception-aware MPI code (I think the MPI
standard sorely lacks constructs for natural, clean handling of
application exceptions across processes)

So it is good to hear there is progress.
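
For what it's worth, the closest the current standard gets is 
per-communicator error handlers, which turn failures into local return 
codes rather than aborts; nothing propagates across ranks. A minimal 
sketch (untested):

```
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    /* Replace the default MPI_ERRORS_ARE_FATAL handler so that errors
     * come back as return codes the caller can inspect. */
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    int size;
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Deliberately provoke an error: rank == size is out of range. */
    int rc = MPI_Send(NULL, 0, MPI_INT, size, 0, MPI_COMM_WORLD);
    if (rc != MPI_SUCCESS) {
        char msg[MPI_MAX_ERROR_STRING];
        int len;
        MPI_Error_string(rc, msg, &len);
        fprintf(stderr, "recovered locally from: %s\n", msg);
    }

    MPI_Finalize();
    return 0;
}
```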

On Feb 18, 2017 7:43 AM, "r...@open-mpi.org"  wrote:

> We have been making a concerted effort to resolve outstanding issues as
> the interest in threaded applications has grown. It should be pretty good
> now, but we do see occasional bug reports, so it isn’t perfect.
>
> [...]

Re: [OMPI users] Is building with "--enable-mpi-thread-multiple" recommended?

2017-02-18 Thread r...@open-mpi.org
FWIW: have you taken a look at the event notification mechanisms in PMIx yet? 
The intent there, among other features, is to provide async notification of 
events generated either by the system (e.g., node failures and/or congestion) 
or by other application processes.

https://pmix.github.io/master 

OMPI includes PMIx support beginning with OMPI v2.0, and various resource 
managers (RMs) are releasing their integrated support as well.

Ralph

> On Feb 18, 2017, at 10:07 AM, Michel Lesoinne  wrote:
> 
> I am also a proponent of multiple-thread support, for many reasons:
>  - code simplification
>  - easier overlap of computation and communication, with fewer
> synchronization points
>  - the possibility of writing exception-aware MPI code (I think the MPI
> standard sorely lacks constructs for natural, clean handling of
> application exceptions across processes)
> 
> So it is good to hear there is progress.
>
> [...]
