Ralph, John, Prentice
Thank you for your replies.
Indeed, --bind-to none or --bind-to socket solved my problem…
mpirun --bind-to socket -host node1,node2 -x OMP_NUM_THREADS=4 -np 2 xhpl
… happily runs 2 xhpl processes, one on each node, each with 4 cores fully utilised.
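For anyone wanting to double-check the placement, Open MPI's --report-bindings flag makes mpirun print where each rank is bound before the job starts; a sketch based on the command above:

mpirun --bind-to socket --report-bindings -host node1,node2 -x OMP_NUM_THREADS=4 -np 2 xhpl

Each rank should then report a line like "MCW rank 0 bound to socket 0" on stderr before xhpl does any work.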
The hints about top/htop/pstree/lstopo…
ps -eaf --forest or indeed pstree is a good way to see what is going on.
Also 'htop' is a very useful utility.
Also well worth running 'lstopo' to look at the layout of cores and caches
on your machines.
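For reference, a few concrete invocations (the xhpl process name is taken from the command earlier in the thread):

ps -eLf --forest      # -L adds one line per thread, so the OpenMP threads appear under each xhpl rank
htop                  # press 't' for the tree view, 'H' to toggle the display of threads
lstopo                # from the hwloc package; draws sockets, cores and caches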
On Mon, 3 Aug 2020 at 09:40, John Duffy via users wrote:
> Hi
>
> I’m experimenting with hybrid OpenMPI/OpenMP Linpack benchmarks…
By default, OMPI will bind your procs to a single core. You probably want to at
least bind to socket (for NUMA reasons), or not bind at all if you want to use
all the cores on the node.
So either add "--bind-to socket" or "--bind-to none" to your cmd line.
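Concretely, with the thread count from this thread, either of these should behave sensibly:

mpirun --bind-to socket -x OMP_NUM_THREADS=4 -np 2 xhpl   # each rank pinned to a socket, threads stay on its cores
mpirun --bind-to none -x OMP_NUM_THREADS=4 -np 2 xhpl     # no pinning, the OS scheduler places ranks and threads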
On Aug 3, 2020, at 1:33 AM, John Duffy wrote:
Hi
I’m experimenting with hybrid OpenMPI/OpenMP Linpack benchmarks on my small
cluster, and I’m a bit confused as to how to invoke mpirun.
I have compiled/linked HPL-2.3 with OpenMPI and libopenblas-openmp using the
GCC -fopenmp option on Ubuntu 20.04 64-bit.
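For anyone reproducing the build, here is a minimal sketch of the relevant lines in an HPL-2.3 Make.<arch> file; the LAdir path is an assumption based on how Ubuntu 20.04 packages libopenblas-openmp, so check where your copy actually lives:

CC        = mpicc                  # Open MPI compiler wrapper
CCFLAGS   = $(HPL_DEFS) -O3 -fopenmp
LINKER    = mpicc
LINKFLAGS = -fopenmp               # needed at link time too, so the OpenMP runtime is pulled in
LAdir     = /usr/lib/aarch64-linux-gnu/openblas-openmp   # assumed install path; adjust to your system
LAinc     =
LAlib     = -L$(LAdir) -lopenblas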
With P=1 and Q=1 in HPL.dat, if I