Dave,

On 7/28/2017 12:54 AM, Dave Love wrote:
> Gilles Gouaillardet <gilles.gouaillar...@gmail.com> writes:

>> Adam,

>> keep in mind that by default, recent Open MPI binds MPI tasks
>> - to cores if -np 2
>> - to the NUMA domain otherwise
> Not according to ompi_info from the latest release; it says socket.
Thanks, I will double-check that.

I made a simple test on a KNL in SNC4 mode (1 socket, 4 NUMA nodes), and
with 4 MPI tasks the binding is per NUMA domain. That suggests the
ompi_info output is bogus; I will also double-check which released
versions are affected.
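For what it is worth, a quick way to compare the advertised default with
the actual behaviour (just a sketch; ./a.out stands in for any MPI
program, and the exact parameter name may vary between releases):

    # what ompi_info reports as the default binding policy
    ompi_info --all | grep -i binding_policy

    # what mpirun actually does with 4 tasks
    mpirun -np 4 --report-bindings ./a.out
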
>> (which is a socket in most cases, unless
>> you are running on a Xeon Phi)
> [There have been multiple NUMA nodes per socket on x86 since
> Magny-Cours, and it's also relevant for POWER.  That's a reason things
> had to switch to hwloc from whatever the predecessor was called.]
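(For reference, hwloc's lstopo shows whether a socket contains more than
one NUMA node; on the KNL above, something like

    lstopo-no-graphics --no-io

prints the four NUMA nodes inside the single package. The --no-io flag
merely hides PCI devices to keep the output short.)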

>> so unless you specifically asked mpirun to do a binding consistent
>> with your needs, you might simply try asking for no binding at all:
>> mpirun --bind-to none ...
> Why would you want to turn off core binding?  The resource manager is
> likely to supply a binding anyhow if incomplete nodes are allocated.
Unless I overlooked it, the initial post did not mention any resource
manager. To me, the reported behaviour (better performance with
interleaved memory) suggests the application performs best when a single
MPI task has its threads using all the available sockets. So unless such
a binding is requested explicitly (either via mpirun or the resource
manager), I suggested that no binding at all could lead to better
performance than the default binding to socket/NUMA.
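
For example, if the application runs one multi-threaded task per node
and benefits from interleaved memory, something along these lines could
be tried (a sketch; ./app and the task count are placeholders):

    mpirun --bind-to none -np 1 numactl --interleave=all ./app

numactl --interleave=all spreads allocations across all NUMA nodes,
which matches the interleaved-memory behaviour mentioned above.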

Cheers,

Gilles

