As far as I know, with MPI you have to wire up the connections among the processes, allocate resources, and so on. PMIx is the library that sets up all the processes; it ships with Open MPI.
The standard HPC method for launching tasks is through a job scheduler such as SLURM or Grid Engine. SLURM's srun is very similar to mpirun: it performs the resource allocation, then launches the jobs on the allocated nodes and cores, and so on. It does this through the PMIx library (or mpiexec). When running mpiexec without an integrated job manager, you are responsible for allocating resources yourself; see the mpirun documentation for how to pass host lists, allow oversubscription, etc. If you are looking for a different, non-MPI interconnect, try ZeroMQ or another remote procedure call framework -- it won't be simpler, though.

Hope it helps,
Steve

On Wed, Sep 25, 2019, 13:15 Martín Morales via users <users@lists.open-mpi.org> wrote:

> Hi all! This is my first post. I'm a newbie to Open MPI (and to MPI
> likewise!). I recently built the current version of this fabulous software
> (v4.0.1) on two Ubuntu 18 machines (a small part of our Beowulf cluster).
> I have already read the FAQ and the posts on the users mailing list (a
> lot), but I can't figure out how to do this (or whether it can be done at
> all): I need to run my parallel programs without the mpirun/mpiexec
> commands; I need just one process (on my "master" machine) that spawns
> processes dynamically (on the "slave" machines). I have already made some
> dummy test scripts, and they work fine with mpirun/mpiexec. In
> MPI_Info_set I set the key "add-hostfile" with a file containing those two
> machines, mentioned before, with 4 slots each. Nevertheless, it doesn't
> work when I just run it as a singleton program (e.g. ./spawnExample): it
> throws an error like this: "There are not enough slots available in the
> system to satisfy the 7 slots that were requested by the
> application:...". Here I am trying to start 8 processes on the 2
> machines. It seems that one process executes fine on the "master", and
> when it tries to spawn the other 7 it crashes.
> We need this execution scheme because we already have our software (used
> for scientific research) and we need to "incorporate" or "embed" Open MPI
> into it. Thanks in advance, guys!
>
> _______________________________________________
> users mailing list
> users@lists.open-mpi.org
> https://lists.open-mpi.org/mailman/listinfo/users