> On 02.11.2015 at 15:58, Dave Love <d.l...@liverpool.ac.uk> wrote:
> 
> Fabian Wein <fabian.w...@fau.de> writes:
> 
>>> On 30.10.2015 at 21:45, Jeff Squyres (jsquyres) <jsquy...@cisco.com> wrote:
>>> 
>>> Oh, that's an interesting idea: perhaps the "bind to numa" is
>>> failing -- but perhaps "bind to socket" would work.
>>> 
>>> Can you try:
>>> 
>>> /opt/openmpi-1.10.0-gcc/bin/mpiexec -bind-to numa -n 4 hostname
>>> /opt/openmpi-1.10.0-gcc/bin/mpiexec -bind-to socket -n 4 hostname
>>> 
>> Both report the same error. Interestingly -bind-to-socket works
> 
> There's something badly wrong with your build or installation if
> -bind-to socket isn't equivalent to --bind-to-socket (which is the
> default, as you should see if you run hwloc-ps under mpirun).
> 
> I just built 1.10 on Ubuntu 14.04 against the native libhwloc 1.8
> (without libnuma1-dev).  It works as expected on that single-socket
> system and --bind-to numa fails as there's no numanode level.
> 

I got the same problems on a second 14.04 system. I don’t know what I’m doing wrong.
I’ll install a fresh Ubuntu 14.04 on a standard system; at least -bind-to core
should work there.

There is an old OpenFOAM installation which includes an old Open MPI; this might
cause the trouble. I also suspect that sourcing the Intel 2016 compiler environment
somehow interferes.

I don’t know how to check whether hwloc supports numa, sockets, … But when I
configure hwloc 1.11.1 I see them listed in the configure output; that is why I
built it manually.
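
As a quick check, I assume the hwloc command-line tools from that manual build can
show whether numa nodes and sockets are actually detected, roughly like this:

  lstopo --no-io
  hwloc-info

and hwloc-ps under a running mpirun should then show where the ranks actually end
up, as you suggested.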


>> but it does not bring me the performance I expect for the petsc benchmark.
> 
> Without a sane installation it's probably irrelevant, but performance
> relative to what?  Anyhow, why don't you want to bind to cores, or at
> least L2 cache, if that’s shared?

I compare the performance of the petsc stream benchmark with a similar but older
4-package, 24-core Opteron system, and there -bind-to numa results in a significant
increase in performance.
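
For reference, the streams benchmark can be run roughly like this from the PETSc
source tree (NPMAX and the MPI_BINDING variable are how I understand the makefile
target; they may differ between PETSc versions):

  cd $PETSC_DIR
  make streams NPMAX=24 MPI_BINDING="--bind-to numa"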

Anyhow, I finally managed to compile mpich (there were issues with the Intel
compilers), and mpich allows binding on my system. I still have to find the optimal
binding/mapping; simply binding to numa as on the other system doesn’t work, but
the topology is different. I’m a user and new to MPI, so I still have a lot to
learn.
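
What I am experimenting with at the moment are roughly invocations like these (the
executable name is just a placeholder, and I still have to check which of Hydra’s
binding options my mpich build actually accepts):

  mpiexec -bind-to numa -n 24 ./my_benchmark
  mpiexec -bind-to socket -map-by socket -n 24 ./my_benchmark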

Thanks for all your time, your thoughts, and your willingness to help me.

Fabian
