Thanks for the replies.
I didn't specify earlier, but we're using Intel MPI, and setting the following
environment variable, I_MPI_JOB_RESPECT_PROCESS_PLACEMENT, fixed my issue:
#SBATCH --ntasks=980
#SBATCH --ntasks-per-node=16
#SBATCH --exclusive
export I_MPI_JOB_RESPECT_PROCESS_PLACEMENT=off
mpirun -np $
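
In case it helps anyone else, here is roughly what the full batch script looks like. The mpirun line above got cut off, so the -np $SLURM_NTASKS argument below is my assumption, and the module name and ./my_app are placeholders to adjust for your site:

#!/bin/bash
#SBATCH --ntasks=980
#SBATCH --ntasks-per-node=16
#SBATCH --exclusive

# Placeholder: load whatever provides Intel MPI on your cluster
module load intel-mpi

# This is the setting that fixed the node-packing problem for us
# when submitting with --exclusive
export I_MPI_JOB_RESPECT_PROCESS_PLACEMENT=off

# -np $SLURM_NTASKS is my guess at the truncated mpirun line;
# ./my_app stands in for the actual executable
mpirun -np $SLURM_NTASKS ./my_app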
On 7/30/19 6:03 PM, Brian Andrus wrote:
I think this may be more on how you are calling mpirun and the mapping
of processes.
With the "--exclusive" option, the processes are given access to all the
cores on each box, so mpirun has a choice. IIRC, the default is to pack
them by slot, so fill one node, then move to the next, whereas you want
them spread across the nodes at 16 per node.
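
If you want to force the spread explicitly rather than rely on the default
mapping, something along these lines should do it (I'm assuming an Open
MPI-style mpirun for the ppr syntax, and ./my_app is a placeholder):

# Ask mpirun for exactly 16 processes per node
mpirun --map-by ppr:16:node -np 980 ./my_app

# Or skip mpirun's own mapping and let Slurm place the ranks
srun --ntasks=980 --ntasks-per-node=16 ./my_app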
Hi Everyone,
I've recently discovered that when an MPI job is submitted with the
--exclusive flag, Slurm fills up each node even if the --ntasks-per-node
flag is used to set how many MPI processes are scheduled on each node.
Without the --exclusive flag, Slurm works as expected.
Our system i