We have had reports of applications running faster when executing under OMPI's
mpiexec than when started by srun. The reasons aren't entirely clear, but are
likely related to differences in mapping/binding options (OMPI provides a very
large range compared to srun) and in the optimization flags each launcher
provides.
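As a rough illustration (a sketch only; exact flags depend on the OMPI and
Slurm versions installed, and my_app is a placeholder binary), the same
64-rank layout looks like this under each launcher:

    # OMPI: map ranks round-robin across sockets, bind each rank to a core
    $ mpiexec -np 64 --map-by socket --bind-to core ./my_app

    # srun: the closest equivalent via Slurm's CPU-binding support
    $ srun --ntasks=64 --cpu_bind=cores ./my_app

OMPI also offers mappings srun has no direct analogue for, e.g. --map-by
numa, ppr:N:socket patterns, and sequential/rankfile placement.
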
Greetings, OpenMPI users and devs!
We use OpenMPI with Slurm as our scheduler, and a user has asked me
this: should they use mpiexec/mpirun or srun to start their MPI jobs
through Slurm?
My inclination is to use mpiexec, since that is the only method that's
(somewhat) defined in the MPI standard.
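
For reference, the two styles look like this inside a Slurm batch script
(a minimal sketch; my_app is a placeholder, and direct launch via srun
assumes OMPI was built --with-pmi):

    #!/bin/bash
    #SBATCH --nodes=2
    #SBATCH --ntasks-per-node=16

    # Option 1: OMPI's launcher reads the allocation from Slurm
    mpiexec ./my_app

    # Option 2: direct launch through Slurm's PMI2 plugin
    srun --mpi=pmi2 ./my_app
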
Hi everyone,
I have a cluster of 32 nodes with InfiniBand; four of them additionally
have a 10G Mellanox Ethernet card for faster I/O. If my job, built
against OpenMPI 1.10.6, ends up on one of these nodes, it crashes with:
No OpenFabrics connection schemes reported that they were able to be
used on a specific port.
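
For context, workarounds for this class of error typically restrict the
openib BTL to the InfiniBand HCA so the Ethernet port is ignored. A sketch
(the device name mlx4_0 is a placeholder; check what ibv_devinfo actually
reports on the affected nodes):

    # List the OpenFabrics devices and ports on a node
    $ ibv_devinfo

    # Limit the openib BTL to the IB device for a single run
    $ mpirun --mca btl_openib_if_include mlx4_0 -np 64 ./my_app

    # Or set it cluster-wide in $prefix/etc/openmpi-mca-params.conf:
    # btl_openib_if_include = mlx4_0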