Dave,

On 7/28/2017 12:54 AM, Dave Love wrote:
> Gilles Gouaillardet writes:
>> Adam,
>>
>> keep in mind that by default, recent Open MPI binds MPI tasks
>> - to cores if -np 2
>> - to NUMA domain otherwise
>
> Not according to ompi_info from the latest release; it says socket.

Thanks, I will double check that.
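One way to double check on a given installation (the MCA parameter name below is the one used by recent Open MPI releases and may differ in older ones; ./a.out stands in for any MPI application):

    # Show the compiled-in default binding policy:
    ompi_info --all | grep hwloc_base_binding_policy

    # Or print what each rank actually gets bound to at launch time:
    mpirun -np 4 --report-bindings ./a.out
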
Gilles Gouaillardet writes:
> Adam,
>
> keep in mind that by default, recent Open MPI binds MPI tasks
> - to cores if -np 2
> - to NUMA domain otherwise (which is a socket in most cases, unless
> you are running on a Xeon Phi)

Not according to ompi_info from the latest release; it says socket.
Sent: Thursday, July 20, 2017 9:52 AM
To: users@lists.open-mpi.org
Subject: Re: [OMPI users] NUMA interaction with Open MPI

Hello,

Mems_allowed_list is what your current cgroup/cpuset allows. It is different
from what mbind/numactl/hwloc/... change. The former is a root-only
restriction that cannot be changed by the processes it applies to.
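Both views can be inspected with standard Linux tools, independent of Open MPI:

    # The root-controlled cgroup/cpuset restriction, as seen by the current process:
    grep Mems_allowed_list /proc/self/status

    # The memory policy set via set_mempolicy/mbind, which numactl manipulates:
    numactl --show
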
> cpubind: 1
> nodebind: 1
> membind: 0 1
>
> Cheers,
> Hristo
>
> -----Original Message-----
> From: users [mailto:users-boun...@lists.open-mpi.org] On Behalf Of Gilles
> Gouaillardet
> Sent: Monday, July 17, 2017 5:43 AM
> To: Open MPI Users
> Subject: Re: [OMPI users] NUMA interaction with Open MPI
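(The quoted cpubind/nodebind/membind lines look like 'numactl --show' output; the same check can be run under mpirun to see what each rank actually inherits. The rank count below is just an example.)

    # Launch numactl --show as the per-rank "application" so every rank
    # prints the CPU and memory binding it inherited from mpirun:
    mpirun -np 2 numactl --show
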
-----Original Message-----
From: users [mailto:users-boun...@lists.open-mpi.org] On Behalf Of Gilles
Gouaillardet
Sent: Monday, July 17, 2017 5:43 AM
To: Open MPI Users
Subject: Re: [OMPI users] NUMA interaction with Open MPI
Adam,

keep in mind that by default, recent Open MPI binds MPI tasks
- to cores if -np 2
- to NUMA domain otherwise (which is a socket in most cases, unless
you are running on a Xeon Phi)
So unless you specifically asked mpirun to do a binding consistent
with your needs, you might simply try to ask for no binding at all
(e.g. mpirun --bind-to none).
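For example (option names as accepted by recent mpirun; ./my_app and the rank counts are just placeholders):

    # Disable Open MPI's own binding so an external policy is not overridden:
    mpirun --bind-to none -np 16 ./my_app

    # Or ask mpirun itself for NUMA-level mapping and binding, and report it:
    mpirun --map-by numa --bind-to numa --report-bindings -np 16 ./my_app

    # With binding relaxed, numactl can request interleaving, as asked in the
    # original question:
    mpirun --bind-to none -np 16 numactl --interleave=all ./my_app
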

I'll start with my question up front: Is there a way to do the equivalent of
telling mpirun to do 'numactl --interleave=all' on the processes that it
runs? Or, if I want to control the memory placement of my applications run
through MPI, will I need to use libnuma for this? I tried doing "mpirun
nu