* Philip Kovacs [190917 07:43]:
> >> I suspect the question, which I also have, is more like:
> >>
> >> "What difference does it make whether I use 'srun' or 'mpirun' within
> >> a batch file started with 'sbatch'."
>
> One big thing would be that using srun gives you resource tracking
> and accounting for each job step.
Why should I favour `srun ./my_mpi_program` over `mpirun ./my_mpi_program`? For me, both seem to do exactly the
same thing. No? Did I miss something?
no, the issue is whether your mpirun is slurm-aware or not.
you can get exactly the same behavior, if you link with slurm hooks.
the main thing is that slurm communicates the resources for the job.
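A quick way to see what the discussion above is about: check which PMI plugin types your Slurm build supports and, if needed, pick one explicitly per launch instead of relying on `MpiDefault`. (These flags are from the Slurm docs; the output depends on how your Slurm was built.)

```shell
# List the MPI/PMI plugin types this srun supports.
# "pmix" appears only if Slurm was built with PMIx support,
# as in Jürgen's setup.
srun --mpi=list

# Force a specific plugin for a single launch instead of
# relying on MpiDefault in slurm.conf:
srun --mpi=pmix ./my_mpi_program
```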
For my two cents, I would recommend using srun. While mpirun "works", I've
seen strange behavior, especially if you are using task affinity and core
binding. It gets even weirder with hybrid codes that use threads and MPI.
Using srun resolves these issues, as it integrates more tightly with the
scheduler.
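For the hybrid (MPI + threads) case mentioned above, a minimal sketch of a batch script where srun's tighter scheduler integration pays off — program name and resource counts are placeholders, not anyone's actual setup:

```shell
#!/bin/bash
# Hypothetical hybrid MPI+OpenMP job script; sizes are illustrative only.
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=8

# Let each MPI rank spawn one thread per core it was allocated.
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

# srun inherits the geometry from the allocation, so there is no
# -np/-machinefile juggling, and core binding follows Slurm's own
# task affinity settings rather than the MPI launcher's guesses.
srun ./my_mpi_program
```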
Hi Jürgen,
we set in our modules the variables $MPIEXEC and $FLAGS_MPI_BATCH and
documented these.
This way, changing the workload management system or the MPI implementation
(or whatever else) does not change the documentation (at least on that point ;) )
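A minimal sketch of how such a launcher abstraction might look (the variable names are the ones from this mail; the fallback values are my own assumptions, not the site's actual settings):

```shell
#!/bin/sh
# Sketch: a module (or profile script) exports a generic MPI launcher,
# so user batch scripts and docs never hard-code srun vs. mpirun.
# The fallback values below are assumptions for illustration only.
export MPIEXEC="${MPIEXEC:-srun}"
export FLAGS_MPI_BATCH="${FLAGS_MPI_BATCH:---mpi=pmix}"

# A user's batch script then contains one scheduler-agnostic launch line:
echo "$MPIEXEC $FLAGS_MPI_BATCH ./my_mpi_program"
```

Swapping the scheduler or MPI stack then means updating the module once, not editing every user's script.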
Best
Marcus
On 9/17/19 9:02 AM, Juergen Salk wrote:
>For our next cluster we will switch from Moab/Torque to Slurm and have
>to adapt the documentation and example batch scripts for the users.
>Therefore, I wonder if and why we should recommend (or maybe even urge)
>our users to use srun instead of mpirun/mpiexec in their batch scripts
>for MPI jobs.
hi jurgen,
> For our next cluster we will switch from Moab/Torque to Slurm and have
> to adapt the documentation and example batch scripts for the users.
heh, we did that a year ago, and we made (well, fixed the slurm one) a
qsub wrapper to avoid having to document this and retrain our users.
* Loris Bennett [190917 07:46]:
> >
> >>But I still don't get the point. Why should I favour `srun ./my_mpi_program`
> >>over `mpirun ./my_mpi_program`? For me, both seem to do exactly the same
> >>thing. No? Did I miss something?
> >
> >>Best regards
> >>Jürgen
> >
> > Running a single job
Philip Kovacs writes:
>>according to https://slurm.schedmd.com/mpi_guide.html I have built
>>Slurm 19.05 with PMIx support enabled and it seems to work for both,
>>OpenMPI and Intel MPI. (I've also set MpiDefault=pmix in slurm.conf.)
>
>>But I still don't get the point. Why should I favour `srun ./my_mpi_program`
>>over `mpirun ./my_mpi_program`?
>according to https://slurm.schedmd.com/mpi_guide.html I have built
>Slurm 19.05 with PMIx support enabled and it seems to work for both,
>OpenMPI and Intel MPI. (I've also set MpiDefault=pmix in slurm.conf.)
>But I still don't get the point. Why should I favour `srun ./my_mpi_program`
>over `mpirun ./my_mpi_program`? For me, both seem to do exactly the same
>thing. No? Did I miss something?
Dear all,
according to https://slurm.schedmd.com/mpi_guide.html I have built
Slurm 19.05 with PMIx support enabled and it seems to work for both,
OpenMPI and Intel MPI. (I've also set MpiDefault=pmix in slurm.conf.)
But I still don't get the point. Why should I favour `srun ./my_mpi_program`
over `mpirun ./my_mpi_program`? For me, both seem to do exactly the same
thing. No? Did I miss something?

Best regards
Jürgen
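For reference, the two launch styles the question compares, side by side in a minimal batch script (the program name and task count are placeholders):

```shell
#!/bin/bash
# Minimal sketch of the two launch styles being compared; the task
# count and program name are placeholders.
#SBATCH --ntasks=16

# Style 1: Slurm launches the ranks directly. Each launch is a job
# step, visible to Slurm's accounting, with Slurm-controlled binding:
srun ./my_mpi_program

# Style 2: hand off to the MPI library's own launcher, which must
# itself detect the Slurm allocation (e.g. via its PMI/PMIx support):
mpirun ./my_mpi_program
```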