Re: [slurm-users] Unexpected MPI process distribution with the --exclusive flag

2019-07-31 Thread CB
Thanks for the replies. I didn't specify earlier, but we're using Intel MPI, and setting the I_MPI_JOB_RESPECT_PROCESS_PLACEMENT environment variable fixed my issue. #SBATCH --ntasks=980 #SBATCH --ntasks-per-node=16 #SBATCH --exclusive export I_MPI_JOB_RESPECT_PROCESS_PLACEMENT=off mpirun -np $
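
A minimal sketch of the batch script implied by the preview above, assuming Intel MPI under Slurm; the -np argument is cut off in the archive, so $SLURM_NTASKS and ./my_mpi_app are placeholders rather than the poster's actual values:

    #!/bin/bash
    #SBATCH --ntasks=980
    #SBATCH --ntasks-per-node=16
    #SBATCH --exclusive

    # Stop Intel MPI from inheriting the process placement implied by the
    # scheduler allocation (all cores per node under --exclusive), so the
    # launcher's own per-node distribution takes effect.
    export I_MPI_JOB_RESPECT_PROCESS_PLACEMENT=off

    # Placeholder launch line; the original -np value is truncated in the preview.
    mpirun -np $SLURM_NTASKS ./my_mpi_app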

Re: [slurm-users] Unexpected MPI process distribution with the --exclusive flag

2019-07-30 Thread Daniel Letai
On 7/30/19 6:03 PM, Brian Andrus wrote: I think this may be more on how you are calling mpirun and the mapping of processes. With the "--exclusive" option, the processes are given access to all the cores on each box, so mpirun has a choic

Re: [slurm-users] Unexpected MPI process distribution with the --exclusive flag

2019-07-30 Thread Brian Andrus
I think this may be more on how you are calling mpirun and the mapping of processes. With the "--exclusive" option, the processes are given access to all the cores on each box, so mpirun has a choice. IIRC, the default is to pack them by slot, so fill one node, then move to the next. Whereas y
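
An illustrative comparison of the two mappings Brian describes, written with Open MPI's mpirun options as an example (the exact flags differ between MPI implementations, and the thread later turns out to involve Intel MPI; ./my_mpi_app is a placeholder):

    # Default "by slot" mapping: ranks fill all cores on the first node
    # before spilling onto the next one.
    mpirun -np 64 --map-by slot ./my_mpi_app

    # "By node" (round-robin) mapping: ranks are dealt out one per node in
    # turn, spreading the job across the whole allocation.
    mpirun -np 64 --map-by node ./my_mpi_app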

[slurm-users] Unexpected MPI process distribution with the --exclusive flag

2019-07-30 Thread CB
Hi Everyone, I've recently discovered that when an MPI job is submitted with the --exclusive flag, Slurm fills up each node even if the --ntasks-per-node flag is used to set how many MPI processes are scheduled on each node. Without the --exclusive flag, Slurm works as expected. Our system i
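
A quick way to see where the ranks actually land, not from the original post but a hedged diagnostic assuming the MPI launcher can run a non-MPI command such as hostname:

    # Launch one 'hostname' per task and count how many land on each node.
    # With the packing behavior described above, the first nodes show a
    # full node's worth of entries instead of the requested per-node count.
    mpirun -np 980 hostname | sort | uniq -c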