[OMPI users] Automatic process mapping close to network card

2024-02-05 Thread Rene Puttin via users
Dear OpenMPI user group,

In former OpenMPI releases I used a combination of these two options:
--mca rmaps_dist_device
--map-by dist:span
to let OpenMPI automatically map the processes close to the network
card. This was useful, for example, for PingPong benchmarks that run only
one process per node.
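
Combined on one command line, that looked roughly like this (the device name
mlx5_0 and the benchmark binary are only placeholders for illustration):

$ mpirun --mca rmaps_dist_device mlx5_0 --map-by dist:span -np 2 ./pingpong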

It seems that the option --map-by dist:span is no longer supported in OpenMPI 5
releases. Is there a new alternative for achieving automatic process pinning
close to the network card?

Thanks a lot for your support and regards,

Rene'
NEC Deutschland GmbH
HPCE Division
Fritz-Vomfelde-Straße 14-16
D-40457 Düsseldorf, Germany
http://www.nec.com/en/de

Rene' Puttin          Tel.:  +49 152 22851539
Benchmarking Expert   Fax:   +49 0211 5369 199
Mail:  rene.put...@emea.nec.com

NEC Deutschland GmbH, Fritz-Vomfelde-Straße 14-16, D-40547 Düsseldorf 
Geschäftsführer: Christopher Richard Jackson, Handelsregister: Düsseldorf HRB 
57941, VAT ID: DE129424743



[OMPI users] Homebrew-installed OpenMPI 5.0.1 can't run a simple test program

2024-02-05 Thread John Haiducek via users
I'm having problems running programs compiled against the OpenMPI 5.0.1
package provided by homebrew on macOS (arm) 12.6.1.

When running a Fortran test program that simply calls MPI_Init followed by
MPI_Finalize, I get the following output:

$ mpirun -n 2 ./mpi_init_test
--
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems.  This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

  PML add procs failed
  --> Returned "Not found" (-13) instead of "Success" (0)
--
--
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems.  This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

  ompi_mpi_init: ompi_mpi_instance_init failed
  --> Returned "Not found" (-13) instead of "Success" (0)
--
[haiducek-lt:0] *** An error occurred in MPI_Init
[haiducek-lt:0] *** reported by process [1905590273,1]
[haiducek-lt:0] *** on a NULL communicator
[haiducek-lt:0] *** Unknown error
[haiducek-lt:0] *** MPI_ERRORS_ARE_FATAL (processes in this
communicator will now abort,
[haiducek-lt:0] ***and MPI will try to terminate your MPI job as
well)
--
prterun detected that one or more processes exited with non-zero status,
thus causing the job to be terminated. The first process to do so was:

   Process name: [prterun-haiducek-lt-15584@1,1]
   Exit code: 14
--

I'm not sure whether this is the result of a bug in OpenMPI, in the
homebrew package, or a misconfiguration of my system. Any suggestions for
troubleshooting this?


Re: [OMPI users] Homebrew-installed OpenMPI 5.0.1 can't run a simple test program

2024-02-05 Thread George Bosilca via users
OMPI seems unable to create a communication medium between your processes.
There are a few known issues on OSX; please read
https://github.com/open-mpi/ompi/issues/12273 for more info.

Can you provide the header of the ompi_info output? What I'm interested in
is the part about `Configure command line:`.
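
Something like the following should be enough to show it (the configure line
appears in the first screenful of ompi_info output):

$ ompi_info | head -n 30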

George.


Re: [OMPI users] Homebrew-installed OpenMPI 5.0.1 can't run a simple test program

2024-02-05 Thread John Haiducek via users
Thanks, George, the issue you linked does look potentially related.

Output from ompi_info:

 Package: Open MPI brew@Monterey-arm64.local Distribution
Open MPI: 5.0.1
  Open MPI repo revision: v5.0.1
   Open MPI release date: Dec 20, 2023
 MPI API: 3.1.0
Ident string: 5.0.1
  Prefix: /opt/homebrew/Cellar/open-mpi/5.0.1
 Configured architecture: aarch64-apple-darwin21.6.0
   Configured by: brew
   Configured on: Wed Dec 20 22:18:10 UTC 2023
  Configure host: Monterey-arm64.local
  Configure command line: '--disable-debug' '--disable-dependency-tracking'
  '--prefix=/opt/homebrew/Cellar/open-mpi/5.0.1'
  '--libdir=/opt/homebrew/Cellar/open-mpi/5.0.1/lib'
  '--disable-silent-rules' '--enable-ipv6'
  '--enable-mca-no-build=reachable-netlink'
  '--sysconfdir=/opt/homebrew/etc'
  '--with-hwloc=/opt/homebrew/opt/hwloc'
  '--with-libevent=/opt/homebrew/opt/libevent'
  '--with-pmix=/opt/homebrew/opt/pmix' '--with-sge'
Built by: brew
Built on: Wed Dec 20 22:18:10 UTC 2023
  Built host: Monterey-arm64.local
  C bindings: yes
 Fort mpif.h: yes (single underscore)
Fort use mpi: yes (full: ignore TKR)
   Fort use mpi size: deprecated-ompi-info-value
Fort use mpi_f08: yes
 Fort mpi_f08 compliance: The mpi_f08 module is available, but due to
  limitations in the gfortran compiler and/or Open
  MPI, does not support the following: array
  subsections, direct passthru (where possible) to
  underlying Open MPI's C functionality
  Fort mpi_f08 subarrays: no
   Java bindings: no
  Wrapper compiler rpath: unnecessary
  C compiler: clang
 C compiler absolute: clang
  C compiler family name: CLANG
  C compiler version: 14.0.0 (clang-1400.0.29.202)
C++ compiler: clang++
   C++ compiler absolute: clang++
   Fort compiler: gfortran
   Fort compiler abs: /opt/homebrew/opt/gcc/bin/gfortran
 Fort ignore TKR: yes (!GCC$ ATTRIBUTES NO_ARG_CHECK ::)
   Fort 08 assumed shape: yes
  Fort optional args: yes
  Fort INTERFACE: yes
Fort ISO_FORTRAN_ENV: yes
   Fort STORAGE_SIZE: yes
  Fort BIND(C) (all): yes
  Fort ISO_C_BINDING: yes
 Fort SUBROUTINE BIND(C): yes
   Fort TYPE,BIND(C): yes
 Fort T,BIND(C,name="a"): yes
Fort PRIVATE: yes
   Fort ABSTRACT: yes
   Fort ASYNCHRONOUS: yes
  Fort PROCEDURE: yes
 Fort USE...ONLY: yes
   Fort C_FUNLOC: yes
 Fort f08 using wrappers: yes
 Fort MPI_SIZEOF: yes
 C profiling: yes
   Fort mpif.h profiling: yes
  Fort use mpi profiling: yes
   Fort use mpi_f08 prof: yes
  Thread support: posix (MPI_THREAD_MULTIPLE: yes, OPAL support:
yes,
  OMPI progress: no, Event lib: yes)
   Sparse Groups: no
  Internal debug support: no
  MPI interface warnings: yes
 MPI parameter check: runtime
Memory profiling support: no
Memory debugging support: no
  dl support: yes
   Heterogeneous support: no
   MPI_WTIME support: native
 Symbol vis. support: yes
   Host topology support: yes
IPv6 support: yes
  MPI extensions: affinity, cuda, ftmpi, rocm, shortfloat
 Fault Tolerance support: yes
  FT MPI support: yes
  MPI_MAX_PROCESSOR_NAME: 256
MPI_MAX_ERROR_STRING: 256
 MPI_MAX_OBJECT_NAME: 64
MPI_MAX_INFO_KEY: 36
MPI_MAX_INFO_VAL: 256
   MPI_MAX_PORT_NAME: 1024
  MPI_MAX_DATAREP_STRING: 128
 MCA accelerator: null (MCA v2.1.0, API v1.0.0, Component v5.0.1)
   MCA allocator: basic (MCA v2.1.0, API v2.0.0, Component v5.0.1)
   MCA allocator: bucket (MCA v2.1.0, API v2.0.0, Component v5.0.1)
   MCA backtrace: execinfo (MCA v2.1.0, API v2.0.0, Component
v5.0.1)
 MCA btl: self (MCA v2.1.0, API v3.3.0, Component v5.0.1)
 MCA btl: sm (MCA v2.1.0, API v3.3.0, Component v5.0.1)
 MCA btl: tcp (MCA v2.1.0, API v3.3.0, Component v5.0.1)
  MCA dl: dlopen (MCA v2.1.0, API v1.0.0, Component v5.0.1)
  MCA if: bsdx_ipv6 (MCA v2.1.0, API v2.0.0, Component
  v5.0.1)
  MCA if: posix_ipv4 (MCA v2.1.0, API v2.0.0, Component
  v5.0.1)
 MCA installdirs: env (MCA v2.1.0, API v2.0.0, Component v5.0.1)
 MCA installdirs: config (MCA v2.1.0, API v2.0.0, Component v5.0.1)
   MCA mpool: hugepage (MCA v2.1.0, API v3.1.0, Component
v5.0.1)
 MCA patcher: overwrite (MCA v2.1.0, API v1.0.0, Component

Re: [OMPI users] Homebrew-installed OpenMPI 5.0.1 can't run a simple test program

2024-02-05 Thread John Haiducek via users
Adding '--pmixmca ptl_tcp_if_include lo0' to the mpirun argument list seems
to fix (or at least work around) the problem.
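
For the record, the full command line that works for me now is essentially
(same test binary as before):

$ mpirun --pmixmca ptl_tcp_if_include lo0 -n 2 ./mpi_init_test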


Re: [OMPI users] Homebrew-installed OpenMPI 5.0.1 can't run a simple test program

2024-02-05 Thread George Bosilca via users
That would be something @Ralph Castain needs to look at, as he declared in a
previous discussion that `lo` was the default for PMIX, and we now have two
reports stating otherwise.

George.



Re: [OMPI users] Homebrew-installed OpenMPI 5.0.1 can't run a simple test program

2024-02-05 Thread John Hearns via users
Stupid question... Why is it going 'out' to the loopback address? Is shared
memory not being used these days?


Re: [OMPI users] Homebrew-installed OpenMPI 5.0.1 can't run a simple test program

2024-02-05 Thread George Bosilca via users
That's not for the MPI communications but for the process-management part
(PRRTE/PMIX). If forcing the PTL to `lo` worked, it mostly indicates that the
shared memory in OMPI was set up correctly.
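
If you want to double-check that the MPI traffic itself stays on shared
memory, you can restrict the BTLs explicitly (self and sm are both listed in
your ompi_info output), e.g.:

$ mpirun --mca btl self,sm --pmixmca ptl_tcp_if_include lo0 -n 2 ./mpi_init_test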

  George.

