Thanks Ralph
I will do all of that. Much appreciated.
Thanks Gilles
I realise this is “off topic”. I was hoping the Open-MPI ORTE/HNP message might
give me a clue where to look for my driver problem.
Regarding P/Q ratios, P=2 & Q=16 does indeed give me better performance.
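For reference, a minimal sketch of the corresponding process-grid lines in HPL.dat, assuming the stock input file layout (surrounding lines omitted):
1            # of process grids (P x Q)
2            Ps
16           Qs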
Kind regards
Hi
I have generated this problem myself by tweaking the MTU of my 8 node Raspberry
Pi 4 cluster to 9000 bytes, but I would be grateful for any ideas/suggestions
on how to relate the Open-MPI ORTE message to my tweaking.
When I run HPL Linpack using my “improved” cluster, it runs quite happily f…
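One way to tie the ORTE message back to the MTU change is to verify that 9000-byte frames actually pass between every pair of nodes without fragmentation, for example:
ping -M do -s 8972 node2
Here 8972 is 9000 minus 20 bytes of IPv4 header and 8 bytes of ICMP header. If any NIC or switch port along the path is still at MTU 1500, the ping fails with a “message too long” style error, which would point at the network path rather than at Open-MPI itself.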
Ralph, John, Prentice
Thank you for your replies.
Indeed, --bind-to none or --bind-to socket solved my problem…
mpirun --bind-to socket -host node1,node2 -x OMP_NUM_THREADS=4 -np 2 xhpl
… happily runs 2 xhpl processes, one on each node, with 4 cores fully utilised.
The hints about top/htop/pstree/l…
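A related check: mpirun can report where each rank ends up bound, which makes this kind of problem visible straight from the launch line, for example:
mpirun --report-bindings --bind-to socket -host node1,node2 -x OMP_NUM_THREADS=4 -np 2 xhpl
Each rank then prints the sockets/cores it is bound to, so a rank pinned to a single core shows up immediately.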
Hi
I’m experimenting with hybrid OpenMPI/OpenMP Linpack benchmarks on my small
cluster, and I’m a bit confused as to how to invoke mpirun.
I have compiled/linked HPL-2.3 with OpenMPI and libopenblas-openmp using the
GCC -fopenmp option on Ubuntu 20.04 64-bit.
With P=1 and Q=1 in HPL.dat, if I…
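For what it is worth, a rough sketch of two ways a run like this can be launched, with hostnames and counts purely illustrative:
# hybrid: one rank per node, 4 OpenMP threads each (matches e.g. a 1 x 2 process grid in HPL.dat)
mpirun -np 2 -host node1,node2 --map-by node --bind-to none -x OMP_NUM_THREADS=4 xhpl
# flat MPI: one single-threaded rank per core (matches e.g. a 2 x 4 grid)
mpirun -np 8 -host node1:4,node2:4 --map-by core -x OMP_NUM_THREADS=1 xhpl
With the hybrid form, --bind-to none (or socket) leaves the OpenMP threads free to use all the cores instead of being confined to the single core the rank would otherwise be bound to.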
Hi Joseph, John
Thank you for your replies.
I’m using Ubuntu 20.04 aarch64 on an 8 x Raspberry Pi 4 cluster.
The symptoms I’m experiencing are that the HPL Linpack performance in Gflops increases on a single core as NB is increased from 32 to 256. The theoretical maximum is 6 Gflops per core. I can ach…
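For reference, several NB values can be swept in a single HPL run through HPL.dat; a sketch assuming the stock input file layout, with illustrative values:
3            # of NBs
32 128 256   NBs
Each listed NB is benchmarked in turn, which makes it easy to see where increasing the blocking size stops paying off.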
Hi
I’m trying to investigate an HPL Linpack scaling issue on a single node,
increasing from 1 to 4 cores.
Regarding single node messages, I think I understand that Open-MPI will select
the most efficient mechanism, which in this case I think should be vader shared
memory.
But when I run Linpa…
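Two quick checks that show which transport is actually selected on a single node, for example:
mpirun --mca btl vader,self -np 4 xhpl
mpirun --mca btl_base_verbose 30 -np 4 xhpl
The first form restricts the run to the shared-memory and self BTLs; the second prints the BTL selection, so it is easy to see whether vader is really in use.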
Hi Lana
I’m an Open MPI newbie too, but I managed to build Open MPI 4.0.4 quite easily
on Ubuntu 20.04 just following the instructions in README/INSTALL in the top
level source directory, namely:
mkdir build
cd build
../configure CFLAGS="-O3" # My CFLAGS
make all
sudo make install
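A quick sanity check afterwards is to confirm which installation the shell now picks up, for example:
mpirun --version
ompi_info | head
ompi_info also lists the compilers and options the build was configured with, which helps when a system-packaged Open MPI sits alongside the hand-built one.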
Thank you Jeff and Gilles.
Kind regards
John
Hi
Ubuntu 20.04 aarch64, Open-MPI 4.0.3, HPL 2.3
Having changed the MTU across my small cluster from 1500 to 9000, I’m wondering
how/if Open-MPI can take advantage of this increased maximum packet size.
ip link show eth0
2: eth0: mtu 9000 qdisc mq state UP mode DEFAULT group default qlen 10
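The TCP BTL exposes its tunables through ompi_info, which is one place to look, for example:
ompi_info --param btl tcp --level 9
This lists parameters such as btl_tcp_sndbuf, btl_tcp_rcvbuf and btl_tcp_eager_limit. The MTU itself is handled by the kernel's TCP stack rather than by Open-MPI, so jumbo frames are used transparently, but those parameters are where any further tuning would happen.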