Fabian Wein <fabian.w...@fau.de> writes:

>> Am 30.10.2015 um 21:45 schrieb Jeff Squyres (jsquyres) <jsquy...@cisco.com>:
>> 
>> Oh, that's an interesting idea: perhaps the "bind to numa" is
>> failing -- but perhaps "bind to socket" would work.
>> 
>> Can you try:
>> 
>> /opt/openmpi-1.10.0-gcc/bin/mpiexec -bind-to numa -n 4 hostname
>> /opt/openmpi-1.10.0-gcc/bin/mpiexec -bind-to socket -n 4 hostname
>> 
> Both report the same error. Interestingly, --bind-to-socket works,

There's something badly wrong with your build or installation if
-bind-to socket isn't equivalent to --bind-to-socket (which is the
default, as you should see if you run hwloc-ps under mpirun).
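For reference, a quick way to check what bindings are actually applied (a sketch, assuming Open MPI 1.10 and hwloc are installed and on PATH; the install prefix is illustrative):

```shell
# Open MPI's built-in diagnostic prints each rank's binding to stderr:
mpirun --report-bindings -bind-to socket -n 4 hostname

# Or inspect from hwloc's point of view while the job is running:
mpirun -n 4 sleep 10 &
hwloc-ps
```

If `--report-bindings` and `hwloc-ps` disagree, that points at the build or installation rather than the command line.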

I just built 1.10 on Ubuntu 14.04 against the native libhwloc 1.8
(without libnuma1-dev).  It works as expected on that single-socket
system and --bind-to numa fails as there's no numanode level.

> but it does not bring me the performance I expect for the petsc benchmark.

Without a sane installation it's probably irrelevant, but performance
relative to what?  Anyhow, why don't you want to bind to cores, or at
least L2 cache, if that's shared?
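Concretely, those alternatives would look something like this (a sketch; `./petsc_benchmark` is a placeholder for the actual benchmark binary):

```shell
# One process per core -- usually the right choice for compute-bound codes:
mpirun -bind-to core -n 4 ./petsc_benchmark

# Bind to the L2 cache domain instead, if cores on this CPU share L2:
mpirun -bind-to l2cache -n 4 ./petsc_benchmark
```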
