Re: [OMPI users] [External] Re: cpu binding of mpirun to follow slurm setting

2021-10-11 Thread Ralph Castain via users
d that? Thanks. Ray

2021-10-11 Thread Chang Liu via users
Yes, I understand. I use mpirun myself as I want to have more control. However, when providing our developed code to normal users, I think I should change all the "mpirun" to "srun" in the batch scripts, so that the users will not encounter unexpected behavior, as they use slurm to
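A minimal batch-script fragment illustrating the substitution described above (a sketch; the application name `./a.out` and the resource counts are placeholders, not from the thread):

```shell
#!/bin/bash
#SBATCH --ntasks-per-node=64
#SBATCH --cpus-per-task=2

# Launch with srun instead of mpirun, so the binding the user
# requested from Slurm (--cpus-per-task) is what each rank gets.
srun ./a.out
```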

2021-10-11 Thread Ralph Castain via users
Oh my - that is a pretty strong statement. It depends on what you are trying to do, and whether or not Slurm offers a mapping pattern that matches. mpirun tends to have a broader range of options, which is why many people use it. It also means that your job script is portable and not locked to a

2021-10-11 Thread Chang Liu via users
OK thank you. Seems that srun is a better option for normal users. Chang On 10/11/21 1:23 PM, Ralph Castain via users wrote: Sorry, your output wasn't clear about cores vs hwthreads. Apparently, your Slurm config is set up to use hwthreads as independent cpus - what you are calling "logical cor

2021-10-11 Thread Ralph Castain via users
Sorry, your output wasn't clear about cores vs hwthreads. Apparently, your Slurm config is set up to use hwthreads as independent cpus - what you are calling "logical cores", which is a little confusing. No, mpirun has no knowledge of what mapping pattern you passed to salloc. We don't have any

2021-10-11 Thread Chang Liu via users
This is not what I need. The cpu can run 4 threads per core, so "--bind-to core" results in one process occupying 4 logical cores. I want one process to occupy 2 logical cores, so two processes sharing a physical core. I guess there is a way to do that by playing with mapping. I just want to
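One way to get the layout described above from mpirun (a sketch, assuming Open MPI 4.x option syntax; the rank count and `./a.out` are placeholders): count hwthreads as cpus, assign two of them per rank, and bind to hwthreads so two ranks share each 4-thread physical core:

```shell
# Sketch: 2 hwthreads per rank, so two ranks share one physical core.
# --use-hwthread-cpus : treat hardware threads as the allocatable cpus
# --map-by slot:PE=2  : assign 2 processing elements (hwthreads here) per rank
# --bind-to hwthread  : bind each rank to exactly its assigned hwthreads
mpirun --use-hwthread-cpus --map-by slot:PE=2 --bind-to hwthread -np 64 ./a.out
```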

2021-10-11 Thread Ralph Castain via users
You just need to tell mpirun that you want your procs to be bound to cores, not sockets (which is the default). Add "--bind-to core" to your mpirun cmd line. On Oct 10, 2021, at 11:17 PM, Chang Liu via users wrote: Yes they are. This is an interactive job from

2021-10-10 Thread Chang Liu via users
Yes they are. This is an interactive job from salloc -N 1 --ntasks-per-node=64 --cpus-per-task=2 --gpus-per-node=4 --gpu-mps --time=24:00:00 Chang On 10/11/21 2:09 AM, Åke Sandgren via users wrote: On 10/10/21 5:38 PM, Chang Liu via users wrote: OMPI v4.1.1-85-ga39a051fd8 % srun bash -c

2021-10-10 Thread Åke Sandgren via users
On 10/10/21 5:38 PM, Chang Liu via users wrote: > OMPI v4.1.1-85-ga39a051fd8 > > % srun bash -c "cat /proc/self/status|grep Cpus_allowed_list" > Cpus_allowed_list:  58-59 > Cpus_allowed_list:  106-107 > Cpus_allowed_list:  110-111 > Cpus_allowed_list:  114-115 > Cpus_allowed_lis

2021-10-10 Thread Chang Liu via users
OMPI v4.1.1-85-ga39a051fd8 % srun bash -c "cat /proc/self/status|grep Cpus_allowed_list" Cpus_allowed_list: 58-59 Cpus_allowed_list: 106-107 Cpus_allowed_list: 110-111 Cpus_allowed_list: 114-115 Cpus_allowed_list: 16-17 Cpus_allowed_list: 36-37 Cpus_allowed_list:
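The check used throughout this thread can be run on any Linux box; each launched task prints the cpus its affinity mask allows (under `srun bash -c ...`, each output line comes from a different task):

```shell
# Print the CPU affinity mask of the current process, as in the thread above.
# Under srun, each task reports its own allowed-cpu list (e.g. "58-59").
grep Cpus_allowed_list /proc/self/status
```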