Greetings, OpenMPI users and devs!

We use OpenMPI with Slurm as our scheduler, and a user has asked me this: should they use mpiexec/mpirun or srun to start their MPI jobs through Slurm?

My inclination is to use mpiexec, since that is the only method that's (somewhat) defined in the MPI standard and therefore the most portable, and the examples in the OpenMPI FAQ use mpirun. However, the Slurm documentation on the schedmd website says to use srun with the --mpi=pmi option. (See links below.)

What are the pros/cons of using these two methods, other than the portability issue I already mentioned? Does srun+PMI use a different method to wire up the connections? Some things I read online seem to indicate that. If Slurm was built with PMI support, and OpenMPI was built with Slurm support, does it really make any difference?

https://www.open-mpi.org/faq/?category=slurm
https://slurm.schedmd.com/mpi_guide.html#open_mpi
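For concreteness, here is a sketch of the two launch styles inside a Slurm batch script. The program name ./my_mpi_app and the node/task counts are placeholders, and the exact --mpi value depends on how your Slurm was built (pmi2 vs. pmix; see the mpi_guide link above):

```shell
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4

# Style 1: Open MPI's own launcher. mpirun reads the SLURM_* environment
# from the allocation and starts one process per task itself.
mpirun ./my_mpi_app

# Style 2: Slurm launches the processes directly and the MPI library
# wires up connections via PMI. Requires Slurm built with PMI support.
srun --mpi=pmi2 ./my_mpi_app
```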


--
Prentice

_______________________________________________
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users
