Fabian Wein <fabian.w...@fau.de> writes:

> Is this a valid test?
>
>
> /opt/openmpi-1.10.0-gcc/bin/mpiexec -n 4 hostname
> leo
> leo
> leo
> leo

So, unless you turned off the default binding -- to socket? check the
mpirun man page -- it worked, but the "numa" level failed.  I don't know
if that level has to exist, and there have been bugs in that area
before.  Running lstopo might be useful, and checking that you're
picking up the right hwloc dynamic library.

What happens if you try to bind to sockets, assuming you don't want to
bind to cores?  [I don't understand why the default isn't to cores when
you have only one process per core.]
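For instance, something like the following should either bind to sockets or
produce a clearer error; `--report-bindings` (an Open MPI mpirun/mpiexec
option, printed to stderr) shows where each rank landed.  The install path
here just follows the one from your test:

```shell
# Ask for socket binding explicitly and report the resulting bindings.
/opt/openmpi-1.10.0-gcc/bin/mpiexec --bind-to socket --report-bindings -n 4 hostname
```

If that works but "--bind-to numa" still fails, it points at the NUMA
level being missing from what hwloc detected on that node.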

> /opt/openmpi-1.10.0-gcc/bin/mpiexec -bind-to numa -n 4 hostname
> --------------------------------------------------------------------------
> A request was made to bind a process, but at least one node does NOT
> support binding processes to cpus.
>
>   Node:  leo
> This usually is due to not having libnumactl and libnumactl-devel
> installed on the node.

By the way, you can check the binding actually applied, independently of
what Open MPI reports, with
  mpirun ... hwloc-ps
