[OMPI users] Automatic process mapping close to network card

2024-02-05 Thread Rene Puttin via users
Dear OpenMPI user group, in earlier OpenMPI releases I have used a combination of these two options: --mca rmaps_dist_device --map-by dist:span in order to let OpenMPI automatically map the processes close to the network card. This was useful for PingPong benchmarks using only one process
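A minimal sketch of this kind of invocation, assuming a PingPong benchmark binary named IMB-MPI1 and a placeholder device name mlx5_0 (substitute the NIC/HCA reported on your own system):

# Map the two benchmark processes by distance to the named network device;
# the 'span' modifier balances the mapping across the whole allocation.
mpirun -np 2 \
  --mca rmaps_dist_device mlx5_0 \
  --map-by dist:span \
  ./IMB-MPI1 PingPong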

[OMPI users] Homebrew-installed OpenMPI 5.0.1 can't run a simple test program

2024-02-05 Thread John Haiducek via users
I'm having problems running programs compiled against the OpenMPI 5.0.1 package provided by Homebrew on macOS (arm) 12.6.1. When running a Fortran test program that simply calls MPI_Init followed by MPI_Finalize, I get the following output: $ mpirun -n 2 ./mpi_init_test --
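For context, a minimal reproduction along these lines (the file name, compiler wrapper, and program body below are assumptions; the original source was not included in the preview) could be built and run as:

# Minimal Fortran program that only initializes and finalizes MPI,
# compiled with the Open MPI Fortran wrapper installed by Homebrew.
cat > mpi_init_test.f90 <<'EOF'
program mpi_init_test
  use mpi
  implicit none
  integer :: ierr
  call MPI_Init(ierr)
  call MPI_Finalize(ierr)
end program mpi_init_test
EOF
mpifort -o mpi_init_test mpi_init_test.f90
mpirun -n 2 ./mpi_init_test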

Re: [OMPI users] Homebrew-installed OpenMPI 5.0.1 can't run a simple test program

2024-02-05 Thread George Bosilca via users
OMPI seems unable to create a communication medium between your processes. There are a few known issues on OSX; please read https://github.com/open-mpi/ompi/issues/12273 for more info. Can you provide the header of the ompi_info output? What I'm interested in is the part about `Configure command li
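One way to pull that information from a local install (the exact placement of the configure line varies by build, so this is a sketch rather than the only way):

# Show the header of the ompi_info report, which includes the package,
# version, and (on most builds) the configure command line.
ompi_info | head -n 30

# If it is not in the header, search the full report instead.
ompi_info --all | grep -i 'configure command'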

Re: [OMPI users] Homebrew-installed OpenMPI 5.0.1 can't run a simple test program

2024-02-05 Thread John Haiducek via users
Thanks, George, that issue you linked certainly looks potentially related. Output from ompi_info: Package: Open MPI brew@Monterey-arm64.local Distribution Open MPI: 5.0.1 Open MPI repo revision: v5.0.1 Open MPI release date: Dec 20, 2023 MPI

Re: [OMPI users] Homebrew-installed OpenMPI 5.0.1 can't run a simple test program

2024-02-05 Thread John Haiducek via users
Adding '--pmixmca ptl_tcp_if_include lo0' to the mpirun argument list seems to fix (or at least work around) the problem. On Mon, Feb 5, 2024 at 1:49 PM John Haiducek wrote: > Thanks, George, that issue you linked certainly looks potentially related. > > Output from ompi_info: > >
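Spelled out against the earlier test case (binary name reused from the first post in the thread), the workaround is:

# Restrict the PMIx TCP PTL to the macOS loopback interface so the
# runtime's bootstrap traffic stays on lo0.
mpirun --pmixmca ptl_tcp_if_include lo0 -n 2 ./mpi_init_test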

Re: [OMPI users] Homebrew-installed OpenMPI 5.0.1 can't run a simple test program

2024-02-05 Thread George Bosilca via users
That would be something @Ralph Castain needs to look at, as he stated in a previous discussion that `lo` was the default for PMIx, and we now have two reports stating otherwise. George. On Mon, Feb 5, 2024 at 3:15 PM John Haiducek wrote: > Adding '--pmixmca ptl_tcp_if_include lo0' to the

Re: [OMPI users] Homebrew-installed OpenMPI 5.0.1 can't run a simple test program

2024-02-05 Thread John Hearns via users
Stupid question... Why is it going 'out' to the loopback address? Is shared memory not being used these days? On Mon, Feb 5, 2024, 8:31 PM John Haiducek via users <users@lists.open-mpi.org> wrote: > Adding '--pmixmca ptl_tcp_if_include lo0' to the mpirun argument list > seems to fix (or at least

Re: [OMPI users] Homebrew-installed OpenMPI 5.0.1 can't run a simple test program

2024-02-05 Thread George Bosilca via users
That's not for the MPI communications but for the process-management part (PRRTE/PMIx). If forcing the PTL to `lo` worked, it mostly indicates that shared memory in OMPI was set up correctly. George. On Mon, Feb 5, 2024 at 3:47 PM John Hearns wrote: > Stupid question... Why is