It's a different code path, that's all - just a question of what path gets traversed.
Would you mind posting a little more info on your two use cases? For example, do you have a default hostfile telling mpirun which machines to use?

On Sep 25, 2019, at 12:41 PM, Martín Morales <martineduardomora...@hotmail.com> wrote:

Thanks Ralph, but if I have a wrong hostfile path in my MPI_Comm_spawn function, why does it work when I run with mpirun (e.g. mpirun -np 1 ./spawnExample)?

--------------------------------
From: Ralph Castain <r...@open-mpi.org>
Sent: Wednesday, September 25, 2019 15:42
To: Open MPI Users <users@lists.open-mpi.org>
Cc: steven.va...@gmail.com; Martín Morales <martineduardomora...@hotmail.com>
Subject: Re: [OMPI users] Singleton and Spawn

Yes, of course it can - however, I believe there is a bug in the add-hostfile code path. We can address that problem far more easily than moving to a different interconnect.

On Sep 25, 2019, at 11:39 AM, Martín Morales via users <users@lists.open-mpi.org> wrote:

Thanks Steven. So it actually can't spawn from a singleton?

--------------------------------
From: users <users-boun...@lists.open-mpi.org> on behalf of Steven Varga via users <users@lists.open-mpi.org>
Sent: Wednesday, September 25, 2019 14:50
To: Open MPI Users <users@lists.open-mpi.org>
Cc: Steven Varga <steven.va...@gmail.com>
Subject: Re: [OMPI users] Singleton and Spawn

As far as I know, you have to wire up the connections among the MPI clients, allocate resources, etc. yourself. PMIx is a library that sets up all the processes, and it is shipped with Open MPI. The standard HPC method of launching tasks is through a job scheduler such as SLURM or Grid Engine. SLURM's srun is very similar to mpirun: it does the resource allocation, then launches the jobs on the allocated nodes and cores, and so on. It does this through the PMIx library, or mpiexec.

When running mpiexec without an integrated job manager, you are responsible for allocating resources; see the mpirun documentation for details on passing host lists, oversubscription, etc. If you are looking for a different, non-MPI-based interconnect, try ZeroMQ or other remote procedure call libraries, although it won't be any simpler.

Hope it helps,
Steve
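For reference, when no job manager is involved the hosts and slots are normally described in a hostfile that is handed to mpirun. A minimal sketch for a two-machine, four-slots-each setup like the one described in this thread (the host names node01 and node02 are placeholders, not taken from the thread):

    # hostfile: one line per machine and the slots it offers
    node01 slots=4
    node02 slots=4

    # launch 8 processes across both machines
    mpirun --hostfile hostfile -np 8 ./spawnExample

If more processes than available slots are requested, mpirun also accepts the --oversubscribe flag.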
On Wed, Sep 25, 2019, 13:15 Martín Morales via users <users@lists.open-mpi.org> wrote:

Hi all! This is my first post. I'm a newbie with Open MPI (and with MPI likewise!). I recently built the current version of this fabulous software (v4.0.1) on two Ubuntu 18 machines (a small part of our Beowulf cluster). I have already read the FAQ and the posts on the users mailing list (a lot), but I can't figure out how to do this (or whether it can be done at all): I need to run my parallel programs without the mpirun/mpiexec commands; I need just one process (on my "master" machine) that spawns processes dynamically (on the "slave" machines). I have already made some dummy test scripts and they work fine with mpirun/mpiexec. In MPI_Info_set I set the key "add-hostfile" with the file containing the two machines mentioned above, with 4 slots each. Nevertheless, it doesn't work when I just run it as a singleton (e.g. ./spawnExample): it throws an error like this:

"There are not enough slots available in the system to satisfy the 7 slots that were requested by the application:..."

Here I try to start 8 processes on the 2 machines. It seems that one process executes fine on the "master", and when it tries to spawn the other 7 it crashes. We need this execution scheme because we already have our software (used for scientific research) and we need to "incorporate" or "embed" Open MPI into it. Thanks in advance, guys!
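The setup described above boils down to something like the following sketch: a singleton started without mpirun that notices it has no parent and then spawns the workers through MPI_Comm_spawn, passing the hostfile via the "add-hostfile" info key. The worker count (7) and the info key are taken from the thread; the file name and the rest of the code are illustrative assumptions, not the poster's actual program:

    /* spawnExample.c -- illustrative sketch, not the poster's actual code */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Comm parent, intercomm;
        int errcodes[7];

        MPI_Init(&argc, &argv);
        MPI_Comm_get_parent(&parent);

        if (parent == MPI_COMM_NULL) {
            /* Singleton "master": spawn 7 more copies of this binary on the
             * machines listed in the hostfile named by "add-hostfile". */
            MPI_Info info;
            MPI_Info_create(&info);
            MPI_Info_set(info, "add-hostfile", "hostfile");  /* path is an assumption */

            MPI_Comm_spawn("./spawnExample", MPI_ARGV_NULL, 7, info,
                           0, MPI_COMM_SELF, &intercomm, errcodes);
            MPI_Info_free(&info);
        } else {
            printf("spawned worker up\n");
        }

        MPI_Finalize();
        return 0;
    }

Run as a plain executable (./spawnExample, without mpirun) this goes through the singleton code path discussed in the thread; run as mpirun -np 1 ./spawnExample it takes the other code path that Ralph refers to.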