OMPI seems unable to create a communication medium between your processes. There are a few known issues on OSX; please read https://github.com/open-mpi/ompi/issues/12273 for more info.
Can you provide the header of the ompi_info command? What I'm interested in is the part about `Configure command line:`.

George.

On Mon, Feb 5, 2024 at 12:18 PM John Haiducek via users <users@lists.open-mpi.org> wrote:

> I'm having problems running programs compiled against the OpenMPI 5.0.1
> package provided by homebrew on MacOS (arm) 12.6.1.
>
> When running a Fortran test program that simply calls MPI_init followed by
> MPI_Finalize, I get the following output:
>
> $ mpirun -n 2 ./mpi_init_test
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems. This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
>
> PML add procs failed
> --> Returned "Not found" (-13) instead of "Success" (0)
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems. This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
>
> ompi_mpi_init: ompi_mpi_instance_init failed
> --> Returned "Not found" (-13) instead of "Success" (0)
> --------------------------------------------------------------------------
> [haiducek-lt:00000] *** An error occurred in MPI_Init
> [haiducek-lt:00000] *** reported by process [1905590273,1]
> [haiducek-lt:00000] *** on a NULL communicator
> [haiducek-lt:00000] *** Unknown error
> [haiducek-lt:00000] *** MPI_ERRORS_ARE_FATAL (processes in this
> communicator will now abort,
> [haiducek-lt:00000] *** and MPI will try to terminate your MPI job as
> well)
> --------------------------------------------------------------------------
> prterun detected that one or more processes exited with non-zero status,
> thus causing the job to be terminated. The first process to do so was:
>
> Process name: [prterun-haiducek-lt-15584@1,1]
> Exit code: 14
> --------------------------------------------------------------------------
>
> I'm not sure whether this is the result of a bug in OpenMPI, in the
> homebrew package, or a misconfiguration of my system. Any suggestions for
> troubleshooting this?
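[The original test program was not posted; based on John's description (a Fortran program that simply calls MPI_Init followed by MPI_Finalize), a minimal sketch would look roughly like this — the program name is an assumption taken from the `mpirun` command above:]

```fortran
! Minimal reproducer sketch: initialize and finalize MPI, nothing else.
program mpi_init_test
  use mpi
  implicit none
  integer :: ierr
  call MPI_Init(ierr)      ! fails here with "PML add procs failed" in the report above
  call MPI_Finalize(ierr)
end program mpi_init_test
```

[Built with `mpifort mpi_init_test.f90 -o mpi_init_test` and run with `mpirun -n 2 ./mpi_init_test`, this should exercise the same startup path that is failing.]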
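[For readers following along: the `Configure command line:` entry George asks about appears in the header section of `ompi_info`'s output. A quick way to pull just that line, assuming Open MPI is on your PATH, is:]

```shell
# Print the "Configure command line:" entry from ompi_info's header.
# Guarded so the command degrades gracefully if Open MPI is not installed.
if command -v ompi_info >/dev/null 2>&1; then
  ompi_info | grep -m1 'Configure command line:'
else
  echo 'ompi_info not found on PATH'
fi
```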